Does poor methodological quality of prediction modeling studies translate to poor model performance? An illustration in traumatic brain injury.
Helmrich, Isabel R A Retel; Mikolic, Ana; Kent, David M; Lingsma, Hester F; Wynants, Laure; Steyerberg, Ewout W; van Klaveren, David.
Affiliations
  • Helmrich IRAR; Department of Public Health, Center for Medical Decision Making, Erasmus MC-University Medical Center, Rotterdam, the Netherlands. i.retelhelmrich@erasmusmc.nl.
  • Mikolic A; Department of Public Health, Center for Medical Decision Making, Erasmus MC-University Medical Center, Rotterdam, the Netherlands.
  • Kent DM; Predictive Analytics and Comparative Effectiveness Center, Institute for Clinical Research and Health Policy Studies/Tufts Medical Center, Boston, USA.
  • Lingsma HF; Department of Public Health, Center for Medical Decision Making, Erasmus MC-University Medical Center, Rotterdam, the Netherlands.
  • Wynants L; Department of Epidemiology, School for Public Health and Primary Care, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands.
  • Steyerberg EW; Department of Public Health, Center for Medical Decision Making, Erasmus MC-University Medical Center, Rotterdam, the Netherlands.
  • van Klaveren D; Department of Biomedical Data Sciences, Leiden University Medical Center, Leiden, The Netherlands.
Diagn Progn Res; 6(1): 8, 2022 May 05.
Article in En | MEDLINE | ID: mdl-35509061
ABSTRACT

BACKGROUND:

Prediction modeling studies often have methodological limitations, which may compromise model performance in new patients and settings. We aimed to examine the relation between methodological quality of model development studies and their performance at external validation.

METHODS:

We systematically searched for externally validated multivariable prediction models that predict functional outcome following moderate or severe traumatic brain injury. Risk of bias and applicability of development studies were assessed with the Prediction model Risk Of Bias Assessment Tool (PROBAST). Each model was rated on whether it was presented in sufficient detail to be used in practice. Model performance was described in terms of discrimination (area under the receiver operating characteristic curve, AUC) and calibration. Delta AUC (dAUC) was calculated to quantify the percentage change in discrimination between development and validation for each model. Generalized estimating equations (GEE) were used to examine the relation between methodological quality and dAUC while accounting for the clustering of multiple validations within the same model.
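To make the analysis concrete, the sketch below shows one plausible way to compute dAUC and fit a GEE in Python with statsmodels. The data frame, column names, and the rescaling of the percentage change to the development AUC's improvement over chance (0.5) are assumptions for illustration; the paper's exact definitions and software may differ.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical records: one row per external validation of a model.
df = pd.DataFrame({
    "model_id": [1, 1, 2, 2, 3, 3],  # clustering unit: the prediction model
    "rob":      ["low", "low", "high", "high", "unclear", "unclear"],
    "auc_dev":  [0.80, 0.80, 0.85, 0.85, 0.78, 0.78],
    "auc_val":  [0.82, 0.79, 0.70, 0.66, 0.74, 0.75],
})

# dAUC: percentage change in discrimination between development and
# validation. Assumed here to be expressed relative to the development
# AUC's improvement over chance (AUC = 0.5).
df["dauc"] = 100 * (df["auc_val"] - df["auc_dev"]) / (df["auc_dev"] - 0.5)

# GEE with an exchangeable working correlation, so that repeated
# validations of the same model are not treated as independent.
gee_model = smf.gee(
    "dauc ~ C(rob, Treatment(reference='low'))",
    groups="model_id",
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = gee_model.fit()
print(result.summary())
```

The coefficients for the high and unclear RoB levels then estimate the average difference in dAUC relative to low RoB models, which is the comparison reported in the RESULTS section.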

RESULTS:

We included 54 publications, presenting ten development studies of 18 prediction models, and 52 external validation studies comprising 245 unique validations. Two development studies (four models) had low risk of bias (RoB). The other eight development studies (14 models) had high or unclear RoB. The median dAUC was positive for low RoB models (dAUC 8% [IQR -4% to 21%]) and negative for high RoB models (dAUC -18% [IQR -43% to 2%]). The GEE showed a larger average negative change in discrimination for high RoB models (-32%, 95% CI -48 to -15) and unclear RoB models (-13%, 95% CI -16 to -10) compared with low RoB models.

CONCLUSION:

Lower methodological quality at model development is associated with poorer model performance at external validation. Our findings emphasize the importance of adherence to methodological principles and reporting guidelines in prediction modeling studies.

Full text: 1 Collections: 01-international Database: MEDLINE Study type: Guideline / Prognostic_studies / Risk_factors_studies Language: En Journal: Diagn Progn Res Publication year: 2022 Document type: Article Country of affiliation: Netherlands