Test Evaluation Research Group (TERG)



Effective healthcare depends on using tests that do more good than harm. The Test Evaluation Research Group (TERG) is internationally recognised for producing and evaluating evidence on the performance of medical tests, and for innovation in test evaluation methods. We work across the spectrum of test evaluation, from biological variability, through diagnostic accuracy, to decision-making and patient health impact.

Theme Lead

Professor Jon Deeks

Theme Lead

Professor of Biostatistics
Deputy Director

Institute of Applied Health Research


About the research

TERG undertakes both methods research (developing and evaluating methods for designing, analysing and reporting studies) and applied health research (applying the best methods to health research questions, in collaboration with scientific and clinical colleagues). We are involved in many, and lead several, international collaborations working to set standards for the design, delivery and analysis of primary studies of tests.

1. Primary studies of diagnostic test accuracy

Comprising statisticians, clinicians and systematic reviewers, we have a broad portfolio of expertise in the planning, delivery and analysis of primary studies designed to assess the use of tests in healthcare.

2. Systematic reviews of diagnostic test accuracy

TERG is also recognised for its work in systematic reviews and meta-analysis of test evaluation, including in particular comparative test accuracy, tailored meta-analysis, multiple thresholds and the investigation of heterogeneity.

The Test Evaluation Research Group has close links with the Cochrane Screening and Diagnostic Tests Methods Group (SDTMG) and provides the editorial base for systematic reviews of diagnostic test accuracy published in The Cochrane Library. This service includes organising the peer review and editorial approval of the methods content of Cochrane DTA (Diagnostic Test Accuracy) protocols and reviews.

3. Estimating the technical properties of biomarkers

4. Ensuring test evaluations are fit-for-purpose

We are also active in other areas of methodological innovation, particularly in developing theory and practical applications for assessing how tests impact on decision-making and patient health (clinical effectiveness).

5. Prognosis and prediction rules

6. Screening

7. Monitoring
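The accuracy measures at the heart of themes 1 and 2 can be illustrated with a small sketch. This is not TERG code, just a minimal illustration assuming the standard 2×2 cross-classification of test result against reference standard; the function name and example counts are ours.

```python
def accuracy_measures(tp, fp, fn, tn):
    """Standard diagnostic test accuracy summaries from a 2x2 table.

    tp/fp/fn/tn: counts of true positives, false positives,
    false negatives and true negatives against the reference standard.
    """
    sensitivity = tp / (tp + fn)  # proportion of diseased who test positive
    specificity = tn / (tn + fp)  # proportion of non-diseased who test negative
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "ppv": tp / (tp + fp),  # positive predictive value
        "npv": tn / (tn + fn),  # negative predictive value
        "lr_positive": sensitivity / (1 - specificity),
        "lr_negative": (1 - sensitivity) / specificity,
    }

# Hypothetical study: 100 diseased and 100 non-diseased participants
measures = accuracy_measures(tp=90, fp=20, fn=10, tn=80)
# sensitivity 0.90, specificity 0.80, LR+ ~4.5
```

Note that predictive values, unlike sensitivity and specificity, depend on the prevalence of disease in the study population, which is one reason comparative accuracy studies matter.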

Current projects

  • We are running a number of projects within the Diagnostics and Biomarkers cross-cutting theme at the NIHR Birmingham Biomedical Research Centre.
  • The Diagnostics and Biomarkers theme works with the three key research themes in the NIHR Biomedical Research Centre to develop, design and deliver portfolios of test research studies and to advance the methodology behind early evaluative studies.
  • The over-arching objective is to ensure that tests and biomarkers developed or used in the research themes undergo appropriate evaluation and assessment before being utilised as clinical tests or outcome measures, as well as improving the methodological basis upon which such assessments are made.

BRC Research Projects

Primary

  • Systematic review of the accuracy of imaging tests for diagnosis of Rheumatoid Arthritis (RA) in patients with early symptoms
  • Systematic review of the accuracy of autoantibody tests and prediction rules for diagnosis of RA in patients with early symptoms
  • Systematic review of the measurement properties of grip strength in different disease groups
  • Primary study of the measurement properties of RA tissue biomarkers to assess their value as tests and outcome measures in early-phase trials, particularly flow cytometric measurement of fibroblast groups
  • Estimating measurement properties of scoring systems based on glands identified from lip biopsies for Sjögren's syndrome
  • A study of biological variability within an existing cohort, designed to establish the validity of functional markers in sarcopenia patients
  • Developing biomarker combination signatures and evaluating their clinical accuracy in gastroenterology/liver disease

Methods

  • Modelling optimal use of tests for monitoring disease progression and recurrence. Monitoring to identify disease recurrence or progression is common, yet there is often limited evidence to support the tests used, the decisions that follow, or the frequency and duration of monitoring. We aim to develop methods for designing evidence-based monitoring strategies and for estimating measurement error, a key consideration in selecting monitoring tests.
  • Models to assess the biological variability of count outcomes, and methods for combining estimates of variability across studies. We are using statistical techniques to evaluate how well different methods estimate variability parameters when differing numbers of glands are obtained; we are also investigating the impact of variability between glands and between patients, to allow sample sizes for such studies to be calculated appropriately. In addition, we are reviewing methods for undertaking systematic reviews of biological variability studies.
  • We are examining the potential for routine data to provide information about test performance. We have identified a statistical method, the variogram, which may be able to estimate measurement error from routine monitoring data. We have developed links with UHB and are looking at opportunities to evaluate, from routine data, the impact of introducing a new test or monitoring pathway.
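The variogram idea mentioned above can be sketched as follows. This is an illustration rather than the group's implementation, and the function name, binning scheme and toy data are ours: for repeated measurements on one patient, the semivariance of pairwise differences is plotted against the time lag between measurements, and extrapolating the curve to lag zero gives an estimate of measurement-error variance.

```python
from collections import defaultdict

def empirical_variogram(times, values, bin_width=1.0):
    """Empirical semivariogram of one patient's monitoring series.

    Returns {lag: mean semivariance}, where the semivariance of a pair of
    measurements is 0.5 * (difference)**2 and lags are rounded into bins.
    Extrapolating the curve towards lag 0 estimates measurement-error
    variance; pooling across patients would average these curves.
    """
    bins = defaultdict(list)
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            lag = round(abs(times[j] - times[i]) / bin_width) * bin_width
            bins[lag].append(0.5 * (values[j] - values[i]) ** 2)
    return {lag: sum(v) / len(v) for lag, v in sorted(bins.items())}

# Toy monitoring series: readings drifting upward over time
gamma = empirical_variogram([0.0, 1.0, 2.0, 3.0], [5.0, 5.2, 5.1, 5.5])
```

In routine data the measurement times are irregular, which is why pairs are grouped into lag bins rather than indexed by visit number.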

Applied Health Research (Primary and Secondary)


Secondary Current

  • ROCkeTS (Refining Ovarian Cancer Test Accuracy Scores): a series of three systematic reviews of the accuracy of symptom combinations, biomarkers and test combinations for detecting ovarian cancer in pre- and post-menopausal women
  • CATCH-ME (Characterising Atrial fibrillation by Translating its Causes into Health Modifiers in the Elderly)


Methodology Research:

Current

  • CONSORT-AI Extension
  • Evaluation of diagnostic imaging test performance, including interobserver variability and time to diagnosis. An NIHR doctoral fellowship, started April 2020. The project involves methodological research through systematic reviews, case studies and simulation studies. If interested, please get in touch with Laura Quinn
  • Methods to Evaluate Screening
  • Opportunities and Challenges in Using Routine Data Sources to Evaluate Biomarker
  • SPIRIT-AI Extension
  • TEST (Test Evaluation Using Structured Tools). A UK MRC-funded project, launched 1 March 2020. If you evaluate diagnostic tests (by study, review or policy discussion) and are interested in contributing to the design of a new tool for deciding which studies will most efficiently evaluate a diagnostic test, get in touch at ferrantl@bham.ac.uk
  • "Ensuring test evaluation research is applicable in practice: investigating the effects of routine data on the validity of test accuracy meta-analyses", an MRC Clinician Scientist fellowship undertaken by Dr Brian Willis.


Conferences, Workshops and Seminars

The group organises or is affiliated with a number of research events, all of which promote understanding of test evaluation and medical statistics.

We run two CPD courses:

1. Systematic reviews and meta-analyses of diagnostic test accuracy: a 3-day CPD course, delivered in partnership with the University of Amsterdam, on how to conduct systematic reviews and meta-analyses of diagnostic test accuracy. The course has run annually since 2014, but will not run in 2020.

2. Evaluating medical tests (EMT): how do we tell if this biomarker or diagnostic test is any good? In 2019 we launched a new 3-day course providing training in how to conduct primary studies that evaluate biomarkers and diagnostic tests. Aimed at research teams who are developing these tests, the course covers the design, analysis and interpretation of primary studies, as well as the portfolio of studies test developers are likely to need at different stages of the translation pathway. The course first ran in May 2019. Due to COVID-19, the next course is currently projected to run in the first half of 2021.

For CPD enquiries, please contact: Natasha Maguire

Regular meetings and seminars:

  • TERG Sessions: every month our research group holds a business meeting that also provides methodological support to teams evaluating medical tests. Each meeting reserves a one-hour slot in which a team presents their project and quandary (20 minutes), followed by a round-table discussion of methodological solutions.

For enquiries, please contact the meeting coordinator: Lavinia Ferrante di Ruffano

  • Biomarker Club: in association with the NIHR BRC Birmingham, TERG launched a Biomarker Club for University of Birmingham researchers who are actively involved in developing and/or evaluating medical tests. The meetings provide an opportunity to learn test evaluation methods, share research challenges and network. Due to COVID-19, the next meeting has been pencilled in for early 2021. For enquiries please contact TERG@contacts.bham.ac.uk

Conferences:

  • MEMTAB: In 2008 we launched the world's first symposium focussed on methods for evaluating medical tests. This international symposium attracts researchers, healthcare workers, policy makers and manufacturers who are actively involved in the development, evaluation or regulation of tests, (bio)markers, models, tools, apps, devices or any other modality used for diagnosis, prognosis, risk stratification or (disease or therapy) monitoring. Hosted at the University of Birmingham in 2008, 2010 and 2013, the symposium is now called 'Methods for Evaluating Medical Prediction Models, Tests and Biomarkers (MEMTAB)', and is hosted in turn by several world-leading centres for medical test evaluation. MEMTAB 2020 is taking place at the University of Leuven's EPI-Centre, 2-3 July 2020.

Online resources: free materials for conducting reviews and meta-analyses of diagnostic test accuracy, including training materials and the DTA handbook, are available through Cochrane's Screening and Diagnostic Tests Methods Group.

Key publications

Freeman K, Dinnes J, Chuchu N, Takwoingi Y, Bayliss SE, Matin RN, Jain A, Walter FM, Williams HC, Deeks JJ. Algorithm-based smartphone 'apps' for assessment of the risk of skin cancer in adults: a systematic review of diagnostic accuracy studies. BMJ 2020;368:m127. https://doi.org/10.1136/bmj.m127

Wolff RF, Moons KG, Riley RD, Whiting PF, Westwood M, Collins GS, Reitsma JB, Kleijnen J, Mallett S: PROBAST: A tool to assess the risk of bias and applicability of prediction model studies. Annals of Internal Medicine 2019, 170(1):51-58.

Takwoingi Y, Whitworth H, Rees-Roberts M, Badhan A, Partlett C, Green N, Boakye A, Lambie H, Marongiu L, Jit M, White P, Deeks J, Kon O, Lalvani A on behalf of the IGRAs for Diagnostic Evaluation of Active TB (IDEA) Study Group. Interferon gamma release assays for Diagnostic Evaluation of Active tuberculosis (IDEA): test accuracy study and economic evaluation. Health Technol Assess 2019;23(23).

Kasivisvanathan V, Rannikko AS, Borghi M, Panebianco V, Mynderse LA,… Deeks J, Takwoingi Y, Emberton M, Moore CM; PRECISION Study Group Collaborators. MRI-Targeted or Standard Biopsy for Prostate-Cancer Diagnosis. N Engl J Med. 2018 May 10;378(19):1767-1777. doi: 10.1056/NEJMoa1801993

Whiting P, Leeflang M, de Salis I, Mustafa RA, Santesso N, Gopalakrishna G, Cooney G, Jesper E, Thomas J, Davenport C. How to write a plain language summary for a diagnostic test accuracy review. J Clin Epidemiol 2018;103:112-119.

Takwoingi Y, Leeflang MM, Deeks JJ. Empirical evidence of the importance of comparative studies of diagnostic test accuracy. Ann Intern Med. 2013;158(7):544-54.

Ferrante di Ruffano L, Hyde CJ, McCaffery KJ, Bossuyt PM, Deeks JJ. Assessing the value of diagnostic tests: a framework for designing and evaluating trials. BMJ 2012. http://dx.doi.org/10.1136/bmj.e686

Contact

Please send an email to TERG@contacts.bham.ac.uk for all enquiries, unless indicated in the relevant section above.