Statistics & Actuarial 2018 - 2019



Statistics Department Seminar

Friday 12th April - 2 to 3pm WGB G09


Rational and robust agents: from decision/market/game theory to quantum theory and AI.


Speaker: Alessio Benavoli, Senior Lecturer at CSIS, University of Limerick


Abstract: Artificial intelligence (AI) research is revolutionising our lives, leading us to a world of self-driving cars, automated trading on stock exchanges, and more. Such applications require AI methods that can make rational choices and robust decisions. Rationality means that an AI agent is assumed to take account of available information and uncertainty, and of potential costs and benefits, and to act consistently (logically) in choosing the best action. Robustness in decision making is required with respect to both the known unknowns (the uncertainty in the world about which the agent can reason explicitly) and the unknown unknowns (unmodelled aspects). However, current methods are not sufficiently suited to addressing these issues. In this seminar, I will discuss some preliminary ideas on developing, from existing research in decision/market/game theory, a new robust framework for rational decision making.


Assessing rationality in infinite spaces is a difficult task: the problem is in general either undecidable or NP-hard. The second part of the seminar will present a computable theory of rationality (also called bounded rationality). I will then show that "any" theory of bounded rationality presents entanglement phenomena. The last part of the seminar will focus on how this impacts the foundations of quantum theory. Additional info about my research at

Statistics Department Seminar


Speaker: Prof. Andrew Smith, UCD 


Background of speaker

Andrew is an internationally renowned actuary who specialises in the application of advanced mathematical and statistical methods to solve problems in the financial services industry. Prior to joining UCD, Andrew was a partner at Deloitte from 2001 to 2017.


Summary of talk

Estimating the distribution of insurance claim amounts is required for many purposes, including the assessment of capital adequacy with regard to high distribution percentiles. The most common methodology is to fit a two-parameter exponential family of positive distributions. For this purpose, the method of maximum likelihood is known to be the most efficient, although other techniques such as the method of moments may be chosen for ease of implementation.


Our work looks at the situation where the loss data come from an unknown member of a list of exponential families. In this case, the methodology includes selecting from the list of families as well as estimating the parameters. Maximum likelihood can perform poorly for estimating high percentiles if the model family is misspecified. The method of moments is generally more robust, in a minimax sense.
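As a toy illustration of that robustness claim (hypothetical numbers, not the speaker's analysis), the sketch below fits a lognormal family by maximum likelihood and by the method of moments to losses actually drawn from a gamma distribution, then compares the resulting 99.5th-percentile estimates with the empirical quantile.

```python
# Illustrative sketch (not the speaker's method): fit a lognormal family
# to losses actually drawn from a gamma distribution (a misspecified
# model), and compare high-percentile estimates from maximum likelihood
# and the method of moments against the empirical quantile.
import math
import random
from statistics import NormalDist

random.seed(1)
z995 = NormalDist().inv_cdf(0.995)  # standard normal 99.5th percentile

# "True" losses: gamma(shape=2, scale=1000) -- outside the fitted family.
losses = [random.gammavariate(2.0, 1000.0) for _ in range(10_000)]

# Maximum likelihood for a lognormal: mean/variance of the log-losses.
logs = [math.log(x) for x in losses]
mu_ml = sum(logs) / len(logs)
s2_ml = sum((l - mu_ml) ** 2 for l in logs) / len(logs)
q_ml = math.exp(mu_ml + math.sqrt(s2_ml) * z995)

# Method of moments: match the sample mean and variance directly.
m = sum(losses) / len(losses)
v = sum((x - m) ** 2 for x in losses) / len(losses)
s2_mm = math.log(1.0 + v / m**2)
mu_mm = math.log(m) - s2_mm / 2.0
q_mm = math.exp(mu_mm + math.sqrt(s2_mm) * z995)

# Empirical 99.5th percentile of the simulated losses, for reference.
q_emp = sorted(losses)[int(0.995 * len(losses))]

print(f"MLE percentile estimate:     {q_ml:9.0f}")
print(f"Moments percentile estimate: {q_mm:9.0f}")
print(f"Empirical percentile:        {q_emp:9.0f}")
```

Under this particular misspecification the moment-matched fit lands much closer to the empirical tail than the maximum likelihood fit, which chases the shape of the log-losses rather than the moments that drive the upper percentiles.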


Thursday 21st March 2019, 4-5pm in WGB 402


Dr. Amirhossein Jalali


Conway Institute of Biomolecular and Biomedical Research, UCD


Translational Statistics: an application in prostate cancer diagnosis



Prostate cancer (PCa) represents a significant healthcare problem due to the dilemmas associated with its detection and treatment. Accurate risk stratification of patients before a biopsy can allow for individualised management, thus improving clinical decision making.


In this presentation, I will describe my current research on the risk stratification of patients for prostate cancer diagnosis. My focus will be on the post-analysis stage and the importance of model communication, and I will present interactive tools that facilitate model translation.

Thursday, 21st February, 12-1pm

Western Gateway Building G18


Statistics Department Seminar

Small Population Bias and Sampling Effects in Stochastic Mortality Modelling



Speaker: Liang Itachi


This talk considers the impact of sampling variation on the calibration of stochastic mortality models. Random variation in death counts results in parameter uncertainty in the estimates of the age, period and cohort effects in the model. In turn, this has an impact on the time series parameter estimates.


With small populations, sampling variation causes an upward bias in the estimated volatility of period effects under standard maximum likelihood methods. We seek to counteract this problem of bias using Bayesian inference.

We use England and Wales (EW) males as a benchmark and then scale this down to simulate small populations with a bootstrap methodology. We will discuss:

  1. To what extent Bayesian methods reduce the bias in the model volatility, using the full EW population as a benchmark;
  2. The influence of an informative prior distribution on the joint posterior distribution, and to what extent this impact could be affected by a small population;
  3. How the above two points affect the mortality projection;
  4. The influence of sampling variation on the posterior distributions of the model parameters and the forecasted mortality rates (posterior predictive distribution);
  5. The financial implications of the Bayesian methods, assessed by calculating annuities and longevity risk.

We find that the Bayesian methods generate improved estimates of the volatility for small populations. The influence of the informative prior distribution (e.g. the ARIMA likelihood) becomes stronger for smaller populations and restricts the latent parameter estimates to be more like the proposed time series model. There are shifting effects in the influence of the sampling variation on the posterior distribution of the model parameters as well as on the posterior predictive distribution of the projected mortality rates.
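The scaling-down idea can be sketched as follows. This is a minimal illustration with invented exposures and a Gompertz-like rate schedule, not the EW data or the speaker's model: death counts for a population `scale` times the benchmark size are resampled as Poisson counts, so crude rates from smaller populations show more sampling variation.

```python
# Minimal sketch of the bootstrap scaling idea (hypothetical exposures
# and rates, not the England & Wales data): simulate death counts for a
# population `scale` times the benchmark size and observe how sampling
# variation in the crude rates grows as the population shrinks.
import math
import random

random.seed(42)

def poisson(lam):
    # Normal approximation for large means, Knuth's method otherwise.
    if lam > 30:
        return max(0, round(random.gauss(lam, math.sqrt(lam))))
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

ages = list(range(60, 91, 5))
exposure = {x: 400_000.0 for x in ages}                      # benchmark exposures
rate = {x: 0.005 * math.exp(0.09 * (x - 60)) for x in ages}  # Gompertz-like rates

def crude_rates(scale):
    """One bootstrap replicate of crude death rates for a population
    `scale` times the size of the benchmark."""
    return {x: poisson(scale * exposure[x] * rate[x]) / (scale * exposure[x])
            for x in ages}

def sd_at_60(scale, n=200):
    # Standard deviation of the simulated crude rate at age 60.
    draws = [crude_rates(scale)[60] for _ in range(n)]
    m = sum(draws) / n
    return math.sqrt(sum((d - m) ** 2 for d in draws) / n)

print(f"sd of crude rate at age 60, full population: {sd_at_60(1.0):.6f}")
print(f"sd of crude rate at age 60, 1% population:   {sd_at_60(0.01):.6f}")
```

The inflated variability of the small-population crude rates is exactly what feeds through to the upward bias in the estimated volatility of period effects under maximum likelihood.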

 Thursday 7th February, 12-1pm, Brookfield Health Sciences 104. All are welcome.


Statistics Department Seminar




Dr. Brett Houlding, Department of Statistics, UCC


On the Use of Statistics in Law and Policy Making, with Considerations on the UK Re-arrest Hazard Data Analysis




In this high-level presentation I will begin by discussing some of the misunderstandings of fundamental statistical concepts that have occurred in political and judicial decision making, and which have ultimately tarnished the reputation of, and reliance on, statistics in these settings. I will then present a case study concerning how statistical arguments were (in my opinion, incorrectly) used to determine the policy and legality of the UK's historical retention of DNA from individuals who were arrested but against whom no further action was taken, before this practice was deemed unlawful by the European Court of Human Rights.





Thursday, 25th October, from 2-3pm in Brookfield Health Sciences 304. All are welcome.


Segmentation-Constrained Mixture Modelling of Dynamic PET Scan Data.


Finbarr O’Sullivan



Positron Emission Tomography (PET) is a radio-tracer imaging technique that is extensively used in the management of certain types of cancer patients. It offers a unique capability to assess the functional metabolic status of tissue in vivo, facilitating the potential to more fully adapt treatments to patients. The realisation of this goal is highly dependent on appropriate analytic techniques for detailed quantitation of the data produced by scanning. We discuss a class of statistical mixture models that have been found quite useful. The relation between these models and familiar multivariate techniques, such as principal components and factor analysis, is highlighted. The algorithms used for mixture analysis are implemented in R, with a heavy reliance on quadratic programming tools. Some relevant aspects of the statistical efficiency of the methods are discussed. We illustrate the techniques with data from a series of PET studies in patients with locally advanced breast cancer.
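As a small illustration of the mixture idea (synthetic curves, not PET data, and simple projected coordinate descent in Python standing in for the R quadratic-programming implementation mentioned above), a voxel time-activity curve can be decomposed into nonnegative weights on sub-population curves:

```python
# Hedged sketch (synthetic data): represent a voxel time-activity curve
# as a nonnegative mixture of two sub-population curves, solved by
# projected coordinate descent on the least-squares objective.
import random

random.seed(0)
T = 20  # number of time frames

# Two hypothetical sub-population curves (one rising, one washing out).
f1 = [t / T for t in range(T)]
f2 = [(1 - t / T) ** 2 for t in range(T)]
basis = [f1, f2]

# Synthetic voxel curve: 0.7 * f1 + 0.3 * f2 plus measurement noise.
y = [0.7 * a + 0.3 * b + random.gauss(0, 0.01) for a, b in zip(f1, f2)]

x = [0.0, 0.0]  # mixture weights, constrained to be >= 0

def residual():
    return [y[t] - sum(x[k] * basis[k][t] for k in range(2)) for t in range(T)]

# Coordinate descent: minimise over one weight at a time, then project
# onto the nonnegativity constraint.
for _ in range(200):
    for k in range(2):
        r = residual()
        num = sum(basis[k][t] * (r[t] + x[k] * basis[k][t]) for t in range(T))
        den = sum(b * b for b in basis[k])
        x[k] = max(0.0, num / den)

print(f"estimated mixture weights: {x[0]:.2f}, {x[1]:.2f}")
```

The recovered weights come out close to the (0.7, 0.3) used to generate the curve; production-quality quadratic programming solvers do the same job more reliably at scale.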









Thursday October 11th from 12-1pm in BHSC G04 : Actuarial graduate opportunities

Irish Life will be on site Thursday October 11th from 12-1pm in BHSC G04 to talk through graduate opportunities and provide information on the company and the roles available.




Friday, 12th October, from 12-1pm in WGB 304. All are welcome.




Bayesian modelling of critical illness insurance claim rates



This work focuses on changes in critical illness insurance (CII) claim rates, based on data for the years 1999-2005 and 2007-2009 provided by the Continuous Mortality Investigation (CMI).

A better understanding of changes and uncertainty in CII rates is important, not least because of issues related to data availability, medical advances leading to more efficient diagnosis or treatment, and new trends in emerging policy products. The CII rates for the two periods are estimated using robust statistical modelling (based on Generalised Linear Model methodology) as well as MCMC methodology. This work provides estimated claim rates for both accelerated and stand-alone policies, and for a wide range of ages. We also compare our results to rates obtained by the CMI. Our results show a slight increase in CII rates compared with the CMI tables and the previous rates, with changes being idiosyncratic to particular characteristics, including policy type and duration, sum assured, and age, gender and smoking status among policyholders.
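As a toy illustration of Bayesian claim-rate estimation (invented counts, not the CMI data), the conjugate gamma-Poisson special case gives the posterior in closed form; the richer models in this work require MCMC precisely because no such closed form exists:

```python
# Toy illustration (invented numbers, not CMI data): Bayesian estimation
# of a single critical-illness claim rate. With claims distributed as
# Poisson(rate * exposure) and a Gamma(a, b) prior on the rate, the
# posterior is Gamma(a + claims, b + exposure).
a0, b0 = 1.0, 1000.0           # weak prior: mean 0.001 claims per life-year
claims, exposure = 45, 30_000  # observed claims and life-years at some age

a_post = a0 + claims
b_post = b0 + exposure

post_mean = a_post / b_post          # posterior mean claim rate
post_sd = a_post ** 0.5 / b_post     # posterior standard deviation

print(f"posterior mean claim rate: {post_mean:.5f}")
print(f"posterior sd:              {post_sd:.6f}")
```

With substantial exposure the data dominate the prior, and the posterior mean sits close to the crude rate claims/exposure.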




Chunxiao Xie is currently a PhD student at Heriot-Watt University, funded by ARC. She completed her master's degree with honours at the same university and holds a first-class bachelor's degree from the LSE. Supervised by Dr George Streftaris and Prof Angus Macdonald, she works on the Modelling, Measurement and Management of Longevity and Morbidity Risk programme of the IFoA, led by Prof Andrew Cairns. Her research focuses on Bayesian modelling of critical illness insurance claim rates for the 2007-2009 dataset, aiming to provide more precise estimates of claim rates for recent years.


School of Mathematical Sciences

Eolaíochtaí Matamaiticiúla

Room 1-57, First Floor, T12 XF62