Use of test accuracy study design labels in NICE's diagnostic guidance.
Olsen, M; Zhelev, Z; Hunt, H; Peters, J L; Bossuyt, P; Hyde, C.
Affiliation
  • Olsen M; Amsterdam University Medical Centers, Department of Clinical Epidemiology, Biostatistics and Bioinformatics, Amsterdam Public Health Research Institute, Meibergdreef 9, 1105 AZ Amsterdam, The Netherlands.
  • Zhelev Z; Exeter Test Group, Institute of Health Research, University of Exeter Medical School, St Luke's Campus, Exeter, EX1 2LU, UK.
  • Hunt H; Exeter Test Group, Institute of Health Research, University of Exeter Medical School, St Luke's Campus, Exeter, EX1 2LU, UK.
  • Peters JL; Exeter Test Group, Institute of Health Research, University of Exeter Medical School, St Luke's Campus, Exeter, EX1 2LU, UK.
  • Bossuyt P; Amsterdam University Medical Centers, Department of Clinical Epidemiology, Biostatistics and Bioinformatics, Amsterdam Public Health Research Institute, Meibergdreef 9, 1105 AZ Amsterdam, The Netherlands.
  • Hyde C; Exeter Test Group, Institute of Health Research, University of Exeter Medical School, St Luke's Campus, Exeter, EX1 2LU, UK.
Diagn Progn Res; 3: 17, 2019.
Article in En | MEDLINE | ID: mdl-31517065
ABSTRACT

BACKGROUND:

A variety of study designs are available to evaluate the accuracy of tests, but the terms used to describe these designs seem to lack clarity and standardization. We investigated whether this was the case in the diagnostic guidance of the National Institute for Health and Care Excellence (NICE), an influential source of advice on the value of tests.

OBJECTIVES:

To describe the range of terms and labels used to distinguish study designs in NICE Diagnostic Guidance and the underlying evidence reports.

METHODS:

We examined all NICE Diagnostic Guidance developed from the programme's inception in 2011 until 2018, together with the corresponding diagnostic assessment reports that summarized the evidence, focusing on guidance where tests were considered for diagnosis. We abstracted the labels used to describe study designs and, in the relevant sections, investigated which labels were used when studies were weighted differently because of their design (in terms of validity of evidence). We performed a descriptive analysis to assess the range of labels and categorized them by design features.

RESULTS:

From a total of 36 pieces of guidance, 20 (56%) were eligible and 17 (47%) were included in our analysis. We identified 53 unique design labels, of which 19 (36%) were specific to diagnostic test accuracy designs. These referred to a total of 12 study design features. Labels were used in assigning different weights to studies in seven of the reports (41%) but never in the guidance documents.

CONCLUSION:

Our study confirms a lack of clarity and standardization in test accuracy study design terms. There seems to be scope to reduce and harmonize the number of terms while still capturing the design features that were deemed influential by those compiling the evidence reports. This should help decision makers quickly identify subgroups of included studies that should be weighted differently because their designs are more susceptible to bias.

Full text: 1 Collection: 01-internacional Database: MEDLINE Type of study: Diagnostic_studies / Guideline Language: En Journal: Diagn Progn Res Year: 2019 Document type: Article Affiliation country: Netherlands