Short course

Statistical methods for risk prediction and prognostic models

Start date
15th - 17th October 2024
Duration
3 days - but all course material will be made available 7 days in advance and for 2 weeks afterwards
Time commitment
9am-5pm UK time
Mode
Online with recorded lectures followed by live question & answer sessions and live faculty-led computer practical sessions
Cost
Student - £499; Academic - £599; Industry - £699 (a University of Birmingham staff discount category is also available)
Level
CPD

This online course provides a thorough grounding in statistical methods for developing and validating risk prediction and prognostic models in healthcare research.

It is delivered over 3 days and focuses on key principles for model development, internal validation, and external validation. Our focus is on multivariable models for individualised prediction of future outcomes (prognosis), although many concepts also apply to models for predicting existing disease (diagnosis). We focus mainly on binary and time-to-event outcomes, though continuous outcomes are also covered in special topics.

Computer practicals in R or Stata are included on all three days, and participants can choose whether to focus on logistic regression examples (for binary outcomes) or Cox/flexible parametric survival examples (for time-to-event outcomes). All code is provided, allowing participants to focus on understanding the methods and interpreting the results.

We recommend participants have a good understanding of key statistical principles and measures (such as effect estimates, confidence intervals and p-values) and the ability to apply and interpret regression models.

Teaching is via a combination of recorded lectures, live computer practicals, and live question and answer sessions following each lecture/session. There will be opportunities to meet with faculty to ask specific questions about personal research queries.

Further information can be found on the Prognosis Research website. 

Programme team:

Lucinda Archer
Kym Snell
Joie Ensor
Professor Richard Riley
Professor Gary Collins 
Dr Laura Bonnett

Dates of the course:

15th - 17th October 2024

Time commitment:

Ideally participants should undertake the course live (9am to 5pm UK time), but all course material (e.g. lecture videos, computer practicals, etc.) will be made available a week in advance and for 2 weeks afterwards, giving participants plenty of time and flexibility to work through the material at their own pace.

Course content:

Day 1:

  • The day begins with an overview of the rationale and phases of prediction model research.
  • It then outlines model specification, focusing on logistic regression for binary outcomes and Cox regression or flexible parametric survival models for time-to-event outcomes.
  • Model development topics are then covered, including identifying candidate predictors, handling of missing data, modelling continuous predictors using fractional polynomials or restricted cubic splines for non-linear functions, and variable selection procedures.
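Once a logistic model has been specified and fitted, its predicted risk for a new individual follows directly from the linear predictor. A minimal Python sketch of that calculation (the course practicals themselves use R or Stata, and the intercept and coefficients below are purely hypothetical):

```python
import math

def predicted_risk(intercept, coefs, values):
    """Predicted probability from a fitted logistic regression model:
    risk = 1 / (1 + exp(-(intercept + sum_i b_i * x_i)))."""
    lp = intercept + sum(b * x for b, x in zip(coefs, values))  # linear predictor
    return 1.0 / (1.0 + math.exp(-lp))

# Hypothetical model with two predictors: age (per year) and a biomarker
risk = predicted_risk(-2.3, [0.04, 0.8], [60, 1.2])
```

The same structure carries over to survival models, except that the baseline hazard (or a flexible parametric estimate of it) replaces the fixed intercept when converting the linear predictor into a predicted risk at a given time point.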

Day 2:

  • The day focuses on how models can be overfitted to the data in which they were developed, and thus often do not generalise to other datasets.
  • Internal validation strategies are outlined to identify and adjust for overfitting. In particular, cross-validation and bootstrapping are covered to estimate the optimism and shrink the model coefficients accordingly.
  • Related approaches such as LASSO and elastic net are also discussed.
  • Statistical measures of model performance are introduced for discrimination (such as the C-statistic and D-statistic) and calibration (calibration-in-the-large, the calibration slope, and calibration plots/curves).
  • With all this knowledge, we then discuss sample size considerations for model development and validation, and new software to implement sample size calculations.
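As an illustration of the discrimination measures covered, the C-statistic for a binary outcome is simply a concordance probability, and can be computed directly from predicted risks and observed outcomes. A pure-Python sketch with toy data (the practicals use R or Stata):

```python
def c_statistic(risks, outcomes):
    """Concordance (C) statistic: the probability that a randomly chosen
    individual with the event has a higher predicted risk than a randomly
    chosen individual without it. Tied risks count as half-concordant."""
    events = [r for r, y in zip(risks, outcomes) if y == 1]
    non_events = [r for r, y in zip(risks, outcomes) if y == 0]
    concordant = 0.0
    for e in events:
        for n in non_events:
            if e > n:
                concordant += 1.0
            elif e == n:
                concordant += 0.5
    return concordant / (len(events) * len(non_events))

# Toy data: a model that separates events from non-events perfectly
print(c_statistic([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # 1.0
```

A value of 0.5 corresponds to a model no better than chance; this pairwise definition is what generalises (via time-ordered comparable pairs) to Harrell's C for time-to-event outcomes.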

Day 3:

  • Day 3 focuses on the need for model performance to be evaluated in new data to assess its generalisability, namely external validation.
  • A framework for different types of external validation studies is provided, and the potential importance of model updating strategies (such as re-calibration techniques) is considered.
  • Novel topics are then considered, including: the use of pseudo-values to allow calibration curves in a survival model setting; the development and validation of models using large datasets (e.g. from e-health records) or multiple studies; the use of meta-analysis methods for summarising the performance of models across multiple studies or clusters; the role of net benefit and decision curve analysis to understand the potential role of a model for clinical decision making; and practical guidance about different ways in which prediction and prognostic models can be presented.
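To give a flavour of the decision curve analysis topic above, the net benefit of a model at a chosen risk threshold weighs true positives against false positives on a common scale. A minimal Python sketch with hypothetical data (the course practicals use R or Stata):

```python
def net_benefit(risks, outcomes, threshold):
    """Net benefit of treating everyone with predicted risk >= threshold t:
    NB = (TP - FP * t / (1 - t)) / N,
    the quantity plotted against t in a decision curve."""
    n = len(outcomes)
    treated = [(r, y) for r, y in zip(risks, outcomes) if r >= threshold]
    tp = sum(y for _, y in treated)   # true positives among those treated
    fp = len(treated) - tp            # false positives among those treated
    return (tp - fp * threshold / (1.0 - threshold)) / n

# Hypothetical comparison of the model against a 'treat all' policy at a 20% threshold
risks = [0.9, 0.8, 0.3, 0.1]
outcomes = [1, 1, 0, 0]
model_nb = net_benefit(risks, outcomes, 0.2)
treat_all_nb = net_benefit([1.0] * len(outcomes), outcomes, 0.2)
```

Plotting net benefit across a range of clinically plausible thresholds, alongside the 'treat all' and 'treat none' strategies, gives the decision curve used to judge a model's potential clinical utility.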

How to apply:

Registration is open; you can register for the course using a debit/credit card on the University's online shop. The courses have minimum required attendance levels, and the University reserves the right to cancel or postpone the course if the minimum required number of delegates has not been reached.

For enquiries, please complete our enquiry form.

To see how the University of Birmingham uses your data, view the Event attendee privacy notice.

The course is aimed at individuals who want to learn how to develop and validate risk prediction and prognostic models, specifically for binary or time-to-event clinical outcomes (though continuous outcomes are also covered). An understanding of key statistical principles and measures (such as effect estimates, confidence intervals and p-values) and the ability to apply and interpret regression models are essential. Previous experience of using R or Stata for data analysis is also highly recommended, though all computer code is provided in the practicals.

 

Accreditation:

The course is not accredited. 

Course results:

Certificate of completion confirming hours of completed study.

Learning outcomes:

By the end of the course, participants will:

  • Understand phases of prediction model research
  • Know the core statistical methods for developing a prediction model, and be able to apply them in R or Stata
  • Understand the differences between models for binary and time-to-event outcomes
  • Understand the use of logistic regression, Cox regression, and flexible parametric survival models in the context of prediction modelling
  • Understand how to model non-linear relationships for continuous variables using splines or fractional polynomials
  • Know how to derive predictions for new individuals after developing a prediction model
  • Understand the issue of overfitting and how to limit and examine this
  • Know the role of penalisation and shrinkage methods, including uniform shrinkage, the lasso and elastic net
  • Know how to internally validate a prediction model after model development, using bootstrapping or cross-validation in R or Stata
  • Understand how to produce optimism-adjusted estimates of model performance
  • Know the importance and role of discrimination, calibration and clinical utility measures, and how to derive them in R or Stata
  • Understand how to undertake an external validation study
  • Understand how to calculate the sample size required for model development and model validation
  • Appreciate different approaches to variable selection, including lasso and elastic net, and the instability of these approaches
  • Recognise the importance of the TRIPOD reporting guideline and different formats for presentation of a model
  • Appreciate opportunities for prediction modelling with big data and IPD meta-analysis datasets
  • Appreciate methods for handling missing data, competing risks, pseudo-observations and continuous outcomes
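The optimism-adjusted performance estimates referred to above follow the bootstrap recipe: in each bootstrap sample, refit the model, record its apparent performance in that bootstrap sample and its test performance back in the original data, and average the differences. A sketch of the final adjustment step, assuming the per-replicate performance estimates (here hypothetical C-statistics) have already been computed, e.g. in R or Stata:

```python
def optimism_corrected(apparent, boot_apparent, boot_test):
    """Optimism-adjusted performance estimate:
    corrected = apparent - mean(boot_apparent_b - boot_test_b),
    where boot_apparent_b is a bootstrap model's performance in its own
    bootstrap sample and boot_test_b its performance in the original data."""
    optimism = sum(a - t for a, t in zip(boot_apparent, boot_test)) / len(boot_apparent)
    return apparent - optimism

# Hypothetical C-statistics from two bootstrap replicates
corrected = optimism_corrected(0.80, [0.84, 0.82], [0.78, 0.80])
```

In practice hundreds of bootstrap replicates are used, and the same correction applies to any performance measure (C-statistic, calibration slope, R-squared), with the calibration slope also providing a uniform shrinkage factor for the model coefficients.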