Pre-Conference Courses

The Scientific Programme Committee is pleased to announce the Pre-Conference Courses taking place on Sunday 21 August. This year we have both Full Day and Half Day courses available.

Registration has now closed.

Registration fees

 
Pre-Conference Courses - Sunday 21 August (not included in registration for the conference)

Early Bird Registration (closes Monday 30 May):
  Full Day            £225
  Half Day            £150
  Student Full Day    £150
  Student Half Day    £100

Standard Registration (after Monday 30 May):
  Full Day            £275
  Half Day            £200
  Student Full Day    £200
  Student Half Day    £150

Full Day Courses

PCC1: An Industry Approach to Bayesian Phase I Oncology Trials: Methodology and Implementation

Simon Wandel, Expert Statistical Methodologist, Novartis Pharma AG
Beat Neuenschwander, Biometrical Fellow, Novartis Pharma AG

Phase I trials are critical for the development of anticancer drugs. They aim to identify the maximum tolerable dose (MTD) or recommended phase II dose quickly, while controlling the risk of toxicity [1, 2, 3]. To address this challenge, adaptive Bayesian approaches [4, 5] are appealing. They enable the incorporation of trial-external evidence, which increases inferential precision and leads to better informed decisions. They are also encouraged by Health Authorities. However, they are still rarely implemented in practice [6], and often not well understood.

This course addresses these issues via a practical Bayesian approach [7, 8] to single- and combination-agent designs that has gained industry-wide interest. The constituents are: parsimonious yet flexible dose-toxicity models; prior distributions that discount trial-external evidence [9, 10]; intuitive metrics for decision making; easy-to-use WinBUGS software; good communication among various stakeholders.
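
To give a flavour of the implementation, the sketch below fits the two-parameter logistic dose-toxicity model of [7] in base R with a simple grid approximation instead of the WinBUGS code used in the course; the dose levels, prior parameters and trial data are entirely hypothetical.

    # Minimal sketch: two-parameter Bayesian logistic dose-toxicity model,
    # posterior computed on a grid (the course uses WinBUGS instead).
    doses <- c(1, 2, 4, 8, 16); d_ref <- 8        # hypothetical doses, reference dose
    x <- c(1, 1, 2); y <- c(0, 0, 1)              # doses given so far, DLT indicators
    # Model: logit(p(d)) = log(alpha) + beta * log(d / d_ref), beta > 0
    la <- seq(-5, 3, length.out = 161)            # grid for log(alpha)
    lb <- seq(-3, 2, length.out = 161)            # grid for log(beta)
    g <- expand.grid(la = la, lb = lb)
    # Independent normal priors on log(alpha) and log(beta) (an assumption)
    lprior <- dnorm(g$la, qlogis(0.25), 2, log = TRUE) + dnorm(g$lb, 0, 1, log = TRUE)
    llik <- mapply(function(a, b) {
      p <- plogis(a + exp(b) * log(x / d_ref))
      sum(dbinom(y, 1, p, log = TRUE))
    }, g$la, g$lb)
    post <- exp(lprior + llik - max(lprior + llik))
    post <- post / sum(post)
    # Decision metrics: posterior probability of the target (0.16-0.33)
    # and overdose (>= 0.33) toxicity intervals at each dose
    for (d in doses) {
      p <- plogis(g$la + exp(g$lb) * log(d / d_ref))
      cat(sprintf("dose %4.1f: P(target) = %.2f, P(overdose) = %.2f\n",
                  d, sum(post[p >= 0.16 & p < 0.33]), sum(post[p >= 0.33])))
    }

An escalation-with-overdose-control rule in the spirit of [7] would then, for example, only allow escalation to doses whose overdose probability remains below 0.25.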

We discuss methodological aspects, use case studies for illustration, and provide basic WinBUGS code. The course provides a self-contained introduction to Bayesian Phase I cancer trials as currently used in practice, and will be structured as follows:

1. Introduction: Phase I Oncology trials, clinical and statistical challenges
2. Inference and Decisions
3. Single Agent Phase I Trials: methodology, case studies, implementation (WinBUGS), exercises
4. Historical Data Priors: meta-analytic-predictive (MAP) priors, MAP priors for Phase I studies, implementation (WinBUGS) (see the sketch below this list)
5. Phase I Combination Trials: methodology, case studies, implementation (WinBUGS), exercises
6. The Importance of Communication: communication with non-statisticians, review boards and regulatory agencies, dose escalation meetings, protocol writing
7. Concluding Remarks and Discussion
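
For item 4, the following is a minimal sketch of a meta-analytic-predictive prior [9, 10], here built with the RBesT R package rather than the WinBUGS code distributed in the course; the historical data and prior settings are hypothetical.

    # Minimal sketch: MAP prior from hypothetical historical control data
    # using RBesT (the course distributes WinBUGS code instead)
    library(RBesT)
    hist_data <- data.frame(study = factor(1:4),
                            r = c(3, 5, 2, 6),      # events per historical study
                            n = c(40, 52, 35, 48))  # patients per historical study
    map_mcmc <- gMAP(cbind(r, n - r) ~ 1 | study, data = hist_data,
                     family = binomial,
                     tau.dist = "HalfNormal", tau.prior = 0.5,  # between-study SD
                     beta.prior = 2)                            # SD for the intercept
    map_prior <- automixfit(map_mcmc)   # parametric mixture approximation of the MAP
    # Robustification: mix in a vague component to further discount
    # the historical evidence [10]
    robust_map <- robustify(map_prior, weight = 0.2, mean = 0.5)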

References

[1] Hamberg, Verweij. Phase I drug combination trial design: walking the tightrope. JCO 2009.
[2] Le Tourneau, Lee, Siu. Dose escalation methods in phase I cancer trials. JNCI 2009.
[3] Tighiouart, Rogatko. Dose finding in oncology – parametric methods. In: Dose Finding in Drug Development, Ting (Ed), 2006.
[4] Spiegelhalter, Abrams, Myles. Bayesian Approaches to Clinical Trials and Health-Care Evaluation. Wiley, New York, 2004.
[5] Berry, Carlin, Lee, Müller. Bayesian Adaptive Methods for Clinical Trials. Chapman & Hall, Boca Raton, FL, 2010.
[6] Rogatko, Schoeneck, Jonas et al. Translation of innovative designs into phase I trials. JCO 2007.
[7] Neuenschwander, Branson, Gsponer. Critical aspects of the Bayesian approach to phase I cancer trials. Stat Med 2008.
[8] A Bayesian Industry Approach to Phase I Combination Trials in Oncology. In: Statistical Methods in Drug Combination Studies, Zhao and Yang (Eds). Chapman & Hall/CRC Press, Boca Raton, FL, 2015.
[9] Neuenschwander, Capkun-Niggli, Branson, Spiegelhalter. Summarizing historical information on controls in clinical trials. Clin Trials 2010.
[10] Schmidli, Gsteiger, Roychoudhury, O'Hagan, Spiegelhalter, Neuenschwander. Robust meta-analytic-predictive priors in clinical trials with historical control information. Biometrics 2014.

PCC2: Demystifying causal inference in randomised trials

Ian White, MRC Biostatistics Unit, Cambridge
Graham Dunn, The University of Manchester
Sabine Landau, King’s College London
Richard Emsley, The University of Manchester

Randomised trials provide the gold standard design for assessing the effectiveness of an intervention or treatment, based on an intention-to-treat analysis. However, this answers only a narrow question about the effectiveness of offering the intervention, based on comparing the average outcome between randomised groups. Other important questions include "what is the effect of actually receiving the intervention?" and "how does the intervention work?". To answer these questions, we require different analysis approaches, using methods drawn from the causal inference literature.

This course aims to introduce participants to the concepts of causal inference in randomised trials and the statistical methods used to answer various causal questions. It will focus on worked examples from different clinical areas, modelling issues and the key assumptions, and how these methods can be implemented in standard statistical software. No previous experience of causal inference or prior knowledge of any particular software package is required.

The morning session will give an introduction to the terminology of causal inference, the analysis of randomised trials following the intention-to-treat principle, and the problem caused by departures from randomised allocation. We will introduce alternative estimands including the complier average causal effect, and we will show how these can be estimated by two broad classes of methods: instrumental variables methods, which use the randomisation to estimate a causal model, and inverse probability weighting methods, which censor data after departures and then correct for selection bias under a no unmeasured confounders assumption.
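
As a minimal illustration of the instrumental variables idea, the sketch below simulates a trial with one-sided non-compliance and recovers the complier average causal effect; the AER package's ivreg() is one convenient tool, not necessarily the one used in the course.

    # Minimal sketch: CACE via instrumental variables on simulated data;
    # randomisation z instruments for treatment received d
    library(AER)                           # for ivreg(); an illustrative choice
    set.seed(42)
    n <- 2000
    z <- rbinom(n, 1, 0.5)                 # randomised arm
    complier <- rbinom(n, 1, 0.7)          # latent compliance class
    d <- z * complier                      # one-sided non-compliance
    y <- 1 + 0.5 * d + 0.8 * complier + rnorm(n)   # true CACE = 0.5
    coef(lm(y ~ d))["d"]                   # naive as-treated contrast: biased
    coef(ivreg(y ~ d | z))["d"]            # IV estimate: close to 0.5
    cov(y, z) / cov(d, z)                  # the same Wald estimator by hand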

The afternoon session will introduce the concept of mediation analysis, describing its potential and outlining the major difficulties. We will introduce approaches that can deal with measured post-randomisation confounders and hidden confounding in trials: these extend the inverse probability weighting methods and the instrumental variables methods of the morning.
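
For a flavour of the basic quantities involved, the sketch below performs a simple regression-based mediation analysis on simulated data; it is valid only under the no-unmeasured-confounding assumptions that the course's IPW and IV extensions are designed to relax, and the mediation package is an illustrative choice rather than course material.

    # Minimal sketch: regression-based mediation on simulated trial data,
    # assuming no unmeasured mediator-outcome confounding
    library(mediation)
    set.seed(1)
    n <- 500
    t <- rbinom(n, 1, 0.5)                 # randomised treatment
    m <- 0.6 * t + rnorm(n)                # mediator
    y <- 0.3 * t + 0.4 * m + rnorm(n)      # outcome
    dat <- data.frame(t, m, y)
    fit_m <- lm(m ~ t, data = dat)         # mediator model
    fit_y <- lm(y ~ t + m, data = dat)     # outcome model
    med <- mediate(fit_m, fit_y, treat = "t", mediator = "m", sims = 500)
    summary(med)   # indirect effect (ACME, truth 0.6 * 0.4 = 0.24), direct effect (0.3)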

PCC3: Network Meta-Analysis for Decision Making

David Phillippo, University of Bristol
Sofia Dias, University of Bristol

When more than two treatments have been compared in randomised controlled trials for a certain patient population, Network Meta-Analysis (NMA), also termed Mixed Treatment Comparisons, can be used to estimate the effectiveness of each treatment relative to every other in a coherent way, even if certain treatment pairs have not been directly compared. The results of this coherent analysis can then be used to inform decisions, whether based on clinical effectiveness or cost-effectiveness.

We will go through the methods for NMA described in the series of NICE DSU Technical Support Documents on Evidence Synthesis (www.nicedsu.org.uk), describing the concepts and assumptions and demonstrating how to fit NMA models, in a Bayesian framework, for a variety of data formats common to medical decision-making problems. Interpretation of results and methods for assessing and explaining heterogeneity and inconsistency will also be described.

Using examples, we will introduce NMA and the generic Bayesian framework, demonstrate how to fit the NMA models in WinBUGS using available code, discuss the implications of inconsistency and how to check for it, and describe how to fit meta-regression models to explain heterogeneity and how the results are typically used in a cost-effectiveness analysis.
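
As an indication of what such an analysis involves, the sketch below fits a random-effects NMA to hypothetical binary data using the gemtc R package as a stand-in for the course's WinBUGS code (gemtc calls JAGS rather than WinBUGS).

    # Minimal sketch: random-effects NMA of hypothetical binary data
    # with gemtc (which runs the model in JAGS)
    library(gemtc)
    arm_data <- data.frame(
      study      = c("s1", "s1", "s2", "s2", "s3", "s3"),
      treatment  = c("A", "B", "A", "C", "B", "C"),
      responders = c(12, 20, 15, 25, 18, 22),
      sampleSize = c(50, 50, 60, 60, 55, 55))
    net <- mtc.network(data.ab = arm_data)
    mod <- mtc.model(net, likelihood = "binom", link = "logit",
                     linearModel = "random")
    fit <- mtc.run(mod)
    summary(relative.effect(fit, t1 = "A"))   # every treatment versus A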

Course full: PCC4: Analysis of single and multi-omic data (SNP array, gene expression and methylation) and their integration in disease association studies

Juan R Gonzalez, Biomedical Research Park of Barcelona (PRBB)

Omic data are becoming very important not only in genomics and genetics, but also in other fields such as epidemiology. For instance, current epidemiological studies include several types of omic data, mainly SNP array, gene expression and methylation data. Knowing the existing methodologies for analysing and integrating this type of data in disease association studies is therefore crucial. In this course, statistical methods and bioinformatics tools for analysing different single omic data types in relation to disease will be presented. We will emphasise how to integrate different types of omic data, as well as how multivariate methods can help in analysing them jointly.

The course will use existing R/Bioconductor packages for these purposes. Several pipelines to perform different omic data analyses will be provided to ISCB attendees. Methods will be illustrated using existing real datasets, and course participants will carry out practical exercises analysing real data generated in different projects. Attendees will have access to the course materials as well as a clone of RStudio, including all the required packages, in a virtual machine provided during the course. This will facilitate the analyses and save attendees the time of preparing their own computers for analysing data.
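
For illustration, the sketch below runs a minimal differential-expression analysis with the Bioconductor package limma on simulated expression data; it is one of many possible pipelines and not necessarily among those distributed in the course.

    # Minimal sketch: differential expression with limma on simulated data
    library(limma)
    set.seed(7)
    expr <- matrix(rnorm(1000 * 20), nrow = 1000,
                   dimnames = list(paste0("gene", 1:1000), paste0("s", 1:20)))
    group <- factor(rep(c("case", "control"), each = 10))
    design <- model.matrix(~ group)            # intercept + group contrast
    fit <- eBayes(lmFit(expr, design))         # moderated t-statistics
    topTable(fit, coef = 2, number = 10)       # top-ranked genes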

All delegates should bring a laptop; all software will be available via the cloud together with all the required packages, datasets, functions and slides.

Half Day Courses

AM

PCC5: Exploratory subgroup analyses in clinical trials

Gerd Rosenkranz, Statistical Scientist, Novartis Pharma AG

Subgroup analyses in clinical trials are conducted at all phases of drug development. In studies enrolling a broad patient population, consistency of treatment effect across subgroups indicates that the conclusions made regarding treatment benefit are applicable across various baseline characteristics and associated subpopulations, whilst substantial heterogeneity in treatment effect may suggest clinically relevant differential treatment effects across relevant subpopulations. At the extreme, the presence of substantial heterogeneity may imply that the conclusion of beneficial treatment effect is only relevant for a subset of the population.

Interpretation of subgroup analyses is challenging as subgroup findings can be due to chance. This is particularly likely when a large number of subgroup analyses are undertaken. On the other hand, clinical trials are generally not designed for detecting heterogeneity, so statistical tests may also miss important interactions due to low power.
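
A quick simulation makes the multiplicity point concrete: with ten independent interaction tests at the 5% level and no true subgroup effect anywhere, the chance of at least one "significant" finding is about 40% (the number of subgroups is hypothetical).

    # Chance of at least one "significant" subgroup interaction when
    # testing 10 subgroups at the 5% level under the global null
    set.seed(2016)
    mean(replicate(1e4, any(runif(10) < 0.05)))   # ~0.40, i.e. 1 - 0.95^10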

The EMA issued draft guidance on approaches to subgroup analysis and convened a workshop to discuss this guidance in 2014, with further work ongoing within industry groups to explore analysis approaches that may be acceptable in regulatory submissions.

This course will outline the regulatory perspective on subgroup analyses and will present approaches which account for over-selection and/or selection bias in estimates of subgroup effects, or which borrow information across subgroups. Operating characteristics of the different methods will be compared to help identify situations in which certain methods are preferable. Examples will be provided for illustration.
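
As a flavour of borrowing information across subgroups, the sketch below applies simple empirical-Bayes shrinkage, pulling each subgroup estimate towards the overall effect in proportion to its uncertainty; the estimates and standard errors are hypothetical, and the course covers a broader range of methods.

    # Empirical-Bayes shrinkage of subgroup effects towards the overall
    # effect (hypothetical estimates, e.g. log hazard ratios)
    est <- c(0.10, 0.35, -0.05, 0.22)     # subgroup treatment-effect estimates
    se  <- c(0.12, 0.15, 0.20, 0.10)      # their standard errors
    w   <- 1 / se^2
    mu  <- sum(w * est) / sum(w)          # overall (fixed-effect) estimate
    Q   <- sum(w * (est - mu)^2)          # DerSimonian-Laird heterogeneity statistic
    tau2 <- max(0, (Q - (length(est) - 1)) / (sum(w) - sum(w^2) / sum(w)))
    B <- se^2 / (se^2 + tau2)             # shrinkage weight per subgroup
    round(cbind(est, shrunk = B * mu + (1 - B) * est), 3)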

Subgroup analysis is a topic of broad interest beyond clinical trials as well. In clinical trials in particular, it has become more prominent with the notion of precision or personalised medicine, which acknowledges that certain therapies may be most effective in specific patient populations.

PM

PCC6: An Introduction to the Joint Modeling of Longitudinal and Survival Data, with Applications in R

Dimitris Rizopoulos, University Medical Center, Rotterdam

In follow-up studies, different types of outcomes are typically collected for each subject. These include longitudinally measured responses (e.g., biomarkers) and the time until an event of interest occurs (e.g., death, dropout). Often these outcomes are analysed separately, but on many occasions it is of scientific interest to study their association. This type of research question has given rise to the class of joint models for longitudinal and time-to-event data. These models constitute an attractive paradigm for the analysis of follow-up data that is mainly applicable in two settings: first, when focus is on a survival outcome and we wish to account for the effect of endogenous time-dependent covariates measured with error; and second, when focus is on the longitudinal outcome and we wish to correct for non-random dropout.
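
As a taster of the R side, the sketch below fits a basic joint model with the JM package (authored by the presenter) using its pbc2 example data; whether this exact example features in the course is an assumption.

    # Minimal sketch: joint model for serum bilirubin and survival
    # in the pbc2 data shipped with the JM package
    library(JM)   # loads nlme and survival
    # Longitudinal submodel: log serum bilirubin over time
    lme_fit <- lme(log(serBilir) ~ year, random = ~ year | id, data = pbc2)
    # Survival submodel (x = TRUE keeps the design matrix for JM)
    cox_fit <- coxph(Surv(years, status2) ~ drug, data = pbc2.id, x = TRUE)
    # Joint model linking the two through the current biomarker value
    joint_fit <- jointModel(lme_fit, cox_fit, timeVar = "year")
    summary(joint_fit)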

This course is aimed at applied researchers and graduate students, and will provide a comprehensive introduction to this modelling framework. We will explain when these models should be used in practice, what the key assumptions behind them are, and how they can be utilised to extract relevant information from the data. Emphasis is placed on applications, and by the end of the course participants will be able to define appropriate joint models to answer their questions of interest.

Necessary background for the course: This course assumes knowledge of basic statistical concepts, such as standard statistical inference using maximum likelihood, and regression models. In addition, basic knowledge of R would be beneficial but is not required.