Evaluation of medical tests and biomarkers, systematic review methodology, non-randomised evaluations of health care interventions
Evaluation of medical tests and biomarkers
The evaluation of medical tests for purposes of diagnosis, prognosis, monitoring and predicting treatment benefit is an evolving field, and historically its methods have been less well developed than those for evaluating the effects of health care interventions. Many challenges exist in the design, execution, analysis and reporting of studies assessing tests.
Jon’s methodological research activity over the last decade has focused most closely on evaluating tests for diagnostic purposes, looking both at test accuracy and at the impact tests have on patients. Projects have included methods for evaluating accuracy in the absence of a reference standard, methods for meta-analysis, investigation of publication bias in test research, and the design and analysis of randomised trials of tests. He has also recently started to investigate methodological issues arising in the application of tests for purposes of monitoring.
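As background to the test accuracy work described above, the standard accuracy measures derived from a study's 2x2 table can be sketched briefly (the counts below are invented for illustration and do not come from any of the studies mentioned here):

```python
# Sketch: standard diagnostic accuracy measures from a single study's
# 2x2 table of test result versus reference-standard disease status.
# All counts are hypothetical, for illustration only.

def accuracy_measures(tp, fp, fn, tn):
    """Return sensitivity, specificity and predictive values."""
    sensitivity = tp / (tp + fn)   # proportion of diseased correctly detected
    specificity = tn / (tn + fp)   # proportion of disease-free correctly ruled out
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

sens, spec, ppv, npv = accuracy_measures(tp=90, fp=30, fn=10, tn=270)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} ppv={ppv:.2f} npv={npv:.2f}")
```

Note that sensitivity and specificity depend only on the test, whereas the predictive values also depend on disease prevalence in the studied population, which is one reason accuracy studies are sensitive to the choice of setting.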
In addition to methodological research, Jon has provided statistical and methodological expertise for a portfolio of primary research studies, ranging from the evaluation of blood tests for tuberculosis to the use of PET imaging for the diagnosis and staging of cancer.
Jon leads the Cochrane Collaboration's diagnostic test accuracy activity, developing methods and providing support, training and editorial processes for producing high quality systematic reviews of diagnostic test accuracy. These reviews are beginning to be published in the Cochrane Library, the world's foremost source of evidence on the effects of healthcare interventions.
Systematic review methodology
Systematic reviews seek to collate all evidence that addresses a particular research question, appraise its relevance and validity, and produce a summary of its results, often in numerical form through meta-analytical methods. Jon has been involved in the appraisal and synthesis of evidence for around 20 years, first producing the Guidelines for Systematic Reviews at the NHS Centre for Reviews and Dissemination (NHS CRD Report 4), contributing several key chapters to systematic review texts and handbooks (including the Cochrane Collaboration’s Handbook), and working on software algorithms for meta-analysis (including RevMan and the Stata metan command). His main methodological research contributions for reviews of health care interventions have been in assessing methods for meta-analysis of rare events and the choice of summary statistics for meta-analysis. More recently his research interests have focused on methods for the synthesis of evidence evaluating medical tests, as described above.
In addition, Jon has co-authored over 30 systematic reviews across many different fields. These range from evaluating the benefits of circumcision to prevent transmission of HIV, through to evaluating the effectiveness of vaccines and drugs. More recently his published reviews have been on evaluations of tests, including screening for glaucoma, rapid diagnostic tests for malaria, and the use of imaging in the diagnosis of various diseases including multiple sclerosis, stroke and cancer.
Non-randomised evaluations of healthcare interventions
There are many situations in healthcare where no or very few randomised controlled trials (RCTs) are available, but other evidence from non-randomised studies (NRS) exists. Sometimes the lack of RCT evidence is for practical reasons (such as difficulties in recruiting to a trial), logistic reasons (such as the infeasibility of randomising the introduction of legislation or organisational change), or because RCTs would be inadequate to address the question (such as when outcomes are rare or require extended follow-up). In many circumstances RCTs may simply never have been undertaken. Non-randomised comparisons are also made when comparing health outcomes between hospitals.
For clinicians and healthcare policy makers, interpreting non-randomised evidence wisely requires gauging the likelihood and magnitude of the biases that could affect the results of non-randomised studies. Jon has been involved in work investigating the degree of bias in non-randomised studies, and in studies evaluating the ability of case-mix adjustment methods to correct treatment effect estimates for the selection biases inherent in non-randomised evaluations.