A review of measurement practice in studies of clinical decision support systems 1998-2017.
Scott, Philip J; Brown, Angela W; Adedeji, Taiwo; Wyatt, Jeremy C; Georgiou, Andrew; Eisenstein, Eric L; Friedman, Charles P.
Affiliation
  • Scott PJ; Centre for Healthcare Modelling and Informatics, University of Portsmouth, Portsmouth, UK.
  • Brown AW; Centre for Healthcare Modelling and Informatics, University of Portsmouth, Portsmouth, UK.
  • Adedeji T; Centre for Healthcare Modelling and Informatics, University of Portsmouth, Portsmouth, UK.
  • Wyatt JC; Wessex Institute of Health Research, University of Southampton, Southampton, UK.
  • Georgiou A; Australian Institute of Health Innovation, Macquarie University, Sydney, Australia.
  • Eisenstein EL; Duke Clinical Research Institute, Duke University Medical Center, Durham, North Carolina, USA.
  • Friedman CP; Schools of Medicine, Information and Public Health, University of Michigan, Ann Arbor, Michigan, USA.
J Am Med Inform Assoc; 26(10): 1120-1128, 2019 Oct 01.
Article in English | MEDLINE | ID: mdl-30990522
ABSTRACT

OBJECTIVE:

To assess measurement practice in clinical decision support evaluation studies.

MATERIALS AND METHODS:

We identified empirical studies evaluating clinical decision support systems published from 1998 to 2017. We reviewed titles, abstracts, and full paper contents for evidence of attention to measurement validity, reliability, or reuse. We used Friedman and Wyatt's typology to categorize the studies.

RESULTS:

There were 391 studies that met the inclusion criteria. Study types in this cohort were primarily field user effect studies (n = 210) or problem impact studies (n = 150). Of the 391 included studies, 280 (72%) showed no evidence of attention to measurement methodology and 111 (28%) showed some evidence: 33 (8%) offered validity evidence, 45 (12%) offered reliability evidence, and 61 (16%) reported measurement artefact reuse.
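As a quick consistency check (not part of the original record), the reported percentages can be recomputed from the stated counts; a minimal sketch, assuming every proportion is taken over the 391 included studies:

```python
# Recompute the RESULTS percentages from the reported counts,
# assuming a denominator of 391 included studies throughout.
n_included = 391

counts = {
    "no measurement evidence": 280,    # reported as 72%
    "some measurement evidence": 111,  # reported as 28%
    "validity evidence": 33,           # reported as 8%
    "reliability evidence": 45,        # reported as 12%
    "measurement artefact reuse": 61,  # reported as 16%
}

for label, k in counts.items():
    print(f"{label}: {k}/{n_included} = {k / n_included:.0%}")
```

Each computed proportion rounds to the percentage reported in the abstract.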

DISCUSSION:

Only 5 studies assessed validity within the study itself. Validated measures were found predominantly in problem impact studies, and most were clinical or patient-reported outcome measures whose validity had been established elsewhere.

CONCLUSION:

Measurement methodology is frequently ignored in empirical studies of clinical decision support systems, particularly in field user effect studies. Authors may be attending to measurement considerations without reporting them, or they may be employing measures of unknown validity and reliability; in the latter case, reported study results may be biased and effect sizes misleading. We argue that the replication studies needed to strengthen the evidence base require greater attention to measurement practice in health informatics research.

Full text: 1 Databases: MEDLINE Main subject: Medical Informatics / Clinical Decision Support Systems / Evaluation Studies as Topic Study type: Diagnostic_studies / Evaluation_studies / Prognostic_studies Language: En Journal: J Am Med Inform Assoc Journal subject: MEDICAL INFORMATICS Year: 2019 Document type: Article Country of affiliation: United Kingdom