1.
Stat Med; 43(3): 514-533, 2024 Feb 10.
Article in English | MEDLINE | ID: mdl-38073512

ABSTRACT

Missing data are a common problem in medical research and are often addressed using multiple imputation. Although traditional imputation methods allow for valid statistical inference when data are missing at random (MAR), their implementation is problematic when the presence of missingness depends on unobserved variables, that is, when data are missing not at random (MNAR). Unfortunately, this MNAR situation is rather common in observational studies, registries, and other sources of real-world data. While several imputation methods have been proposed for addressing MNAR data in individual studies, their application and validity in large datasets with a multilevel structure remain unclear. We therefore explored the consequences of MNAR data in hierarchical data in depth, and proposed a novel multilevel imputation method for common missing patterns in clustered datasets. This method is based on the principles of Heckman selection models and adopts a two-stage meta-analysis approach to impute binary and continuous variables that may be outcomes or predictors and that are systematically or sporadically missing. After evaluating the proposed imputation model in simulated scenarios, we illustrate its use in a cross-sectional community survey to estimate the prevalence of malaria parasitemia in children aged 2-10 years in five regions of Uganda.
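A compact way to see the idea behind a Heckman-type selection model in clustered data is the pair of equations below (illustrative notation of ours, not the article's exact parameterization): a latent selection equation determines whether the outcome is observed, and its error is allowed to correlate with the error of the outcome equation, which is what encodes the MNAR mechanism.

```latex
% Heckman-type selection model for individual i in study/cluster j (illustrative notation)
\begin{aligned}
  \text{outcome:}\quad   & y_{ij} = \beta^{\top} x_{ij} + b_j + \varepsilon_{ij},\\
  \text{selection:}\quad & s_{ij}^{*} = \gamma^{\top} z_{ij} + u_j + \eta_{ij},
                           \qquad y_{ij}\ \text{observed iff}\ s_{ij}^{*} > 0,\\
  & (\varepsilon_{ij}, \eta_{ij}) \sim \mathcal{N}\!\left(0,
      \begin{pmatrix} \sigma^{2} & \rho\sigma \\ \rho\sigma & 1 \end{pmatrix}\right),
\end{aligned}
```

where a non-zero correlation rho corresponds to data missing not at random, and b_j and u_j are study-level random effects. Roughly, a two-stage approach of the kind described in the abstract would estimate such a model per study and combine the estimates by meta-analysis before drawing imputations.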


Subject(s)
Biomedical Research, Child, Humans, Cross-Sectional Studies, Uganda/epidemiology
2.
BMC Med Res Methodol; 24(1): 91, 2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38641771

ABSTRACT

Observational data provide invaluable real-world information in medicine, but certain methodological considerations are required to derive causal estimates. In this systematic review, we evaluated the methodology and reporting quality of individual-level patient data meta-analyses (IPD-MAs) with non-randomized exposures, published in 2009, 2014, and 2019, that sought to estimate a causal relationship in medicine. We screened over 16,000 titles and abstracts, reviewed 45 full-text articles out of the 167 deemed potentially eligible, and included 29 in the analysis. Unfortunately, we found that causal methodologies were rarely implemented and that reporting was generally poor across studies. Specifically, only three of the 29 articles used quasi-experimental methods, and no study used G-methods to adjust for time-varying confounding. To address these issues, we propose stronger collaborations between physicians and methodologists to ensure that causal methodologies are properly implemented in IPD-MAs. In addition, we put forward a suggested checklist of reporting guidelines for IPD-MAs that use causal methods. This checklist could improve reporting and thereby potentially enhance the quality and trustworthiness of IPD-MAs, which can be considered one of the most valuable sources of evidence for health policy.


Subject(s)
Causality, Meta-Analysis as Topic, Humans, Research Design/standards, Checklist/methods, Checklist/standards, Guidelines as Topic, Data Interpretation, Statistical
3.
Stat Med; 42(19): 3508-3528, 2023 Aug 30.
Article in English | MEDLINE | ID: mdl-37311563

ABSTRACT

External validation of the discriminative ability of prediction models is of key importance. However, the interpretation of such evaluations is challenging, as the ability to discriminate depends on both the sample characteristics (i.e., case-mix) and the generalizability of predictor coefficients, but most discrimination indices do not provide any insight into their respective contributions. To disentangle differences in discriminative ability across external validation samples due to a lack of model generalizability from differences in sample characteristics, we propose propensity-weighted measures of discrimination. These weighted metrics, which are derived from propensity scores for sample membership, are standardized for case-mix differences between the model development and validation samples, allowing for a fair comparison of discriminative ability in terms of model characteristics in a target population of interest. We illustrate our methods with the validation of eight prediction models for deep vein thrombosis in 12 external validation data sets and assess our methods in a simulation study. In the illustrative example, propensity score standardization reduced between-study heterogeneity of discrimination, indicating that between-study variability was partially attributable to case-mix. The simulation study showed that only flexible propensity score methods (allowing for non-linear effects) produced unbiased estimates of model discrimination in the target population, and only when the positivity assumption was met. Propensity score based standardization may facilitate the interpretation of (heterogeneity in) the discriminative ability of a prediction model observed across multiple studies, and may guide model updating strategies for a particular target population. Careful propensity score modeling with attention to non-linear relations is recommended.
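As a rough illustration of propensity score based standardization (not the authors' exact estimator; all function and variable names below are hypothetical), one can model membership in a target sample versus the validation sample from case-mix variables and then compute a weighted concordance statistic in the validation sample:

```python
# Hypothetical sketch: weight a validation sample toward a target case-mix
# before computing the c-statistic (generic illustration, not the paper's code).
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_weights(X_val, X_target):
    """Odds weights that re-standardize the validation sample to the target case-mix."""
    X = np.vstack([X_val, X_target])
    member = np.r_[np.zeros(len(X_val)), np.ones(len(X_target))]  # 1 = target sample
    ps = LogisticRegression(max_iter=1000).fit(X, member).predict_proba(X_val)[:, 1]
    return ps / (1 - ps)  # odds of belonging to the target population

def weighted_c_statistic(y, risk, w):
    """Concordance probability where each case-control pair is weighted."""
    pos, neg = y == 1, y == 0
    num = den = 0.0
    for i in np.where(pos)[0]:
        comp = risk[neg]
        wt = w[i] * w[neg]
        num += np.sum(wt * ((risk[i] > comp) + 0.5 * (risk[i] == comp)))
        den += np.sum(wt)
    return num / den
```

Calling weighted_c_statistic(y_val, predicted_risk, propensity_weights(X_val, X_target)) would then give a case-mix standardized estimate of discrimination in the spirit of the approach described above.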


Subject(s)
Benchmarking, Diagnosis-Related Groups, Humans, Computer Simulation
4.
Pharmacoepidemiol Drug Saf; 32(9): 1032-1048, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37068170

ABSTRACT

PURPOSE: Heterogeneous results from multi-database studies have been observed, for example, in the context of generating background incidence rates (IRs) for adverse events of special interest for SARS-CoV-2 vaccines. In this study, we aimed to explore different between-database sources of heterogeneity influencing the estimated background IR of venous thromboembolism (VTE). METHODS: Through forest plots and random-effects models, we performed a qualitative and quantitative assessment of the heterogeneity of background VTE IRs derived from 11 databases from 6 European countries, using age- and gender-stratified background IRs for the years 2017-2019 estimated in two studies. Sensitivity analyses were performed to assess the impact of selection criteria on the variability of the reported IRs. RESULTS: A total of 54 257 284 subjects were included in this study. Age-gender pooled VTE IRs varied from 5 to 421 per 100 000 person-years, and IRs increased with increasing age for both genders. Wide confidence intervals (CIs) demonstrated considerable within-data-source heterogeneity. Selecting databases with similar characteristics had only a minor impact on the variability, as shown in forest plots and by the magnitude of the I² statistic, which remained large. Solely including databases with both primary care and hospital data resulted in a noticeable decrease in heterogeneity. CONCLUSIONS: The large variability in IRs between data sources and within age and gender strata warrants stratification and limits the feasibility of a meaningful pooled estimate. More detailed knowledge of the data characteristics, the operationalisation of case definitions, and the cohort populations might support an informed choice of adequate databases to calculate reliable estimates.
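A generic random-effects (DerSimonian-Laird) pooling of log incidence rates, together with the I² statistic discussed above, could look like the sketch below; the event counts and person-years are invented placeholders and do not come from the study.

```python
# Hedged sketch: DerSimonian-Laird random-effects pooling of incidence rates on
# the log scale, plus the I2 heterogeneity statistic. Numbers are invented.
import numpy as np

events = np.array([120, 85, 300, 40])                   # VTE events per database
pyears = np.array([90_000, 40_000, 150_000, 35_000])    # person-years per database

y = np.log(events / pyears)      # log incidence rates
v = 1.0 / events                 # approximate variance of log(IR) under a Poisson model

w = 1.0 / v                                      # fixed-effect (inverse-variance) weights
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)               # Cochran's Q
k = len(y)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
I2 = max(0.0, (Q - (k - 1)) / Q) * 100           # % of variability due to between-database heterogeneity

w_star = 1.0 / (v + tau2)                        # random-effects weights
pooled_ir = np.exp(np.sum(w_star * y) / np.sum(w_star)) * 100_000  # per 100 000 person-years
```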


Subject(s)
COVID-19, Venous Thromboembolism, Humans, Male, Female, Venous Thromboembolism/epidemiology, Venous Thromboembolism/prevention & control, Incidence, COVID-19 Vaccines, COVID-19/epidemiology, SARS-CoV-2
5.
Stat Med; 40(15): 3533-3559, 2021 Jul 10.
Article in English | MEDLINE | ID: mdl-33948970

ABSTRACT

Prediction models often yield inaccurate predictions for new individuals. Large data sets from pooled studies or electronic healthcare records may alleviate this through increased sample size and variability in sample characteristics. However, existing strategies for prediction model development generally do not account for heterogeneity in predictor-outcome associations between different settings and populations. This limits the generalizability of developed models (even from large, combined, clustered data sets) and necessitates local revisions. We aim to develop methodology for producing prediction models that require less tailoring to different settings and populations. We adopt internal-external cross-validation to assess and reduce heterogeneity in a model's predictive performance during its development. We propose a predictor selection algorithm that optimizes the (weighted) average performance while minimizing its variability across the hold-out clusters (or studies). Predictors are added iteratively until the estimated generalizability is optimized. We illustrate this by developing a model for predicting the risk of atrial fibrillation and by updating an existing one for diagnosing deep vein thrombosis, using individual participant data from 20 cohorts (N = 10 873) and 11 diagnostic studies (N = 10 014), respectively. Meta-analysis of calibration and discrimination performance in each hold-out cluster shows that trade-offs occurred between average performance and its heterogeneity. Our methodology enables the assessment of heterogeneity of prediction model performance during model development in multiple or clustered data sets, thereby informing researchers on predictor selection to improve generalizability to different settings and populations and to reduce the need for model tailoring. Our methodology has been implemented in the R package metamisc.
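The internal-external cross-validation loop at the heart of this approach can be sketched as follows. The implementation named in the abstract is the R package metamisc; the Python code below is only an illustrative stand-in, and the data frame and column names are hypothetical.

```python
# Hedged sketch of internal-external cross-validation (IECV): each cluster is
# held out in turn, the model is refit on the remaining clusters, and the
# hold-out discrimination is recorded.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def iecv_auc(df: pd.DataFrame, predictors, outcome="y", cluster="study_id"):
    aucs = {}
    for held_out, test in df.groupby(cluster):
        train = df[df[cluster] != held_out]
        model = LogisticRegression(max_iter=1000).fit(train[predictors], train[outcome])
        aucs[held_out] = roc_auc_score(test[outcome], model.predict_proba(test[predictors])[:, 1])
    return pd.Series(aucs)

# A greedy selection step could then add, at each iteration, the candidate
# predictor that maximizes the mean hold-out AUC while keeping its
# between-cluster variability small.
```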


Subject(s)
Research Design, Calibration, Humans
6.
Stat Med; 38(9): 1601-1619, 2019 Apr 30.
Article in English | MEDLINE | ID: mdl-30614028

ABSTRACT

Multinomial logistic regression (MLR) has been advocated for developing clinical prediction models that distinguish between three or more unordered outcomes. We present a full-factorial simulation study to examine the predictive performance of MLR models in relation to the relative size of outcome categories, the number of predictors, and the number of events per variable. It is shown that MLR estimated by maximum likelihood yields overfitted prediction models in small to medium-sized data sets. In most cases, the calibration and overall predictive performance of the multinomial prediction model are improved by using penalized MLR. Our simulation study also highlights the importance of the number of events per variable in the multinomial context, as well as the total sample size. As expected, our study demonstrates the need for optimism correction of the predictive performance measures when developing a multinomial logistic prediction model. We recommend the use of penalized MLR when prediction models are developed in small data sets or in medium-sized data sets with a small total sample size (i.e., when the sizes of the outcome categories are balanced). Finally, we present a case study in which we illustrate the development and validation of penalized and unpenalized multinomial prediction models for predicting the malignancy of ovarian tumors.
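A minimal sketch of a penalized multinomial logistic model, here using a ridge (L2) penalty tuned by cross-validation in scikit-learn; the data are simulated placeholders and the specific penalty is our choice for illustration, not necessarily the one evaluated in the article.

```python
# Hedged sketch: ridge-penalized multinomial logistic regression with the
# penalty strength chosen by cross-validation. With the default lbfgs solver,
# recent scikit-learn fits a multinomial model for multi-class outcomes.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))        # 8 candidate predictors (toy data)
y = rng.integers(0, 3, size=300)     # 3 unordered outcome categories (toy data)

model = LogisticRegressionCV(Cs=10, cv=5, penalty="l2", max_iter=5000).fit(X, y)
probs = model.predict_proba(X)       # one predicted risk per outcome category
```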


Subject(s)
Likelihood Functions, Logistic Models, Sample Size, Computer Simulation, Humans
7.
Res Synth Methods; 14(2): 193-210, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36200133

ABSTRACT

A common problem in the analysis of multiple data sources, including individual participant data meta-analysis (IPD-MA), is the misclassification of binary variables. Misclassification may lead to biased estimators of model parameters, even when the misclassification is entirely random. We aimed to develop statistical methods that facilitate unbiased estimation of adjusted and unadjusted exposure-outcome associations and of between-study heterogeneity in IPD-MA, where the extent and nature of exposure misclassification may vary across studies. We present Bayesian methods that allow misclassification of binary exposure variables to depend on study- and participant-level characteristics. In an example of the differential diagnosis of dengue using two variables, where the gold standard measurement for the exposure variable was unavailable for some studies, which instead measured only a surrogate prone to misclassification, our methods yielded more accurate estimates than analyses that ignored misclassification or that relied on gold standard measurements alone. In a simulation study, the evaluated misclassification model yielded valid estimates of the exposure-outcome association and was more accurate than analyses restricted to gold standard measurements. Our proposed framework can appropriately account for the presence of binary exposure misclassification in IPD-MA. It requires that some studies supply IPD for both the surrogate and the gold standard exposure, and it allows misclassification to follow a random-effects distribution across studies, conditional on observed covariates (and the outcome). The proposed methods are most beneficial when few large studies that measured the gold standard are available and when misclassification is frequent.
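In generic notation (ours, not the article's), a misclassification model of this kind links the error-prone surrogate S to the true exposure X through study-specific sensitivity and specificity, which are themselves given a random-effects distribution across studies:

```latex
% Generic binary-exposure misclassification model for individual i in study j (illustrative notation)
\begin{aligned}
  \Pr(S_{ij}=1 \mid X_{ij}=1) &= \mathit{Se}_j, \qquad
  \Pr(S_{ij}=0 \mid X_{ij}=0)  = \mathit{Sp}_j,\\
  \operatorname{logit}(\mathit{Se}_j) &\sim \mathcal{N}(\mu_{Se}, \tau_{Se}^{2}), \qquad
  \operatorname{logit}(\mathit{Sp}_j) \sim \mathcal{N}(\mu_{Sp}, \tau_{Sp}^{2}),\\
  \operatorname{logit}\Pr(Y_{ij}=1 \mid X_{ij}) &= \alpha_j + \theta_j X_{ij},
\end{aligned}
```

Roughly, studies that measured the gold standard X inform Se_j and Sp_j directly, while studies with only the surrogate contribute through the implied model for S; the sensitivity and specificity could additionally be allowed to depend on covariates or the outcome, as the abstract describes.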


Subject(s)
Bayes Theorem, Humans, Computer Simulation
8.
J Clin Epidemiol; 145: 29-38, 2022 May.
Article in English | MEDLINE | ID: mdl-35045316

ABSTRACT

OBJECTIVES: Among infectious disease (ID) studies that seek to make causal inferences by pooling individual-level longitudinal data from multiple cohorts, we sought to assess which methods are being used, how those methods are being reported, and whether these factors have changed over time. STUDY DESIGN AND SETTING: Systematic review of longitudinal observational infectious disease studies pooling individual-level patient data from two or more studies, published in English in 2009, 2014, or 2019. This systematic review protocol is registered with PROSPERO (CRD42020204104). RESULTS: Our search yielded 1,462 unique articles. Of these, 16 were included in the final review. Our analysis showed a lack of causal inference methods and of clear reporting on the methods used and their required assumptions. CONCLUSION: There are many approaches to causal inference that may help facilitate accurate inference in the presence of unmeasured and time-varying confounding. In observational ID studies leveraging pooled, longitudinal IPD, the absence of these causal inference methods and the gaps in reporting of key methodological considerations suggest there is ample opportunity to enhance the rigor and reporting of research in this field. Interdisciplinary collaborations between substantive and methodological experts would strengthen future work.


Subject(s)
Communicable Diseases, Causality, Communicable Diseases/epidemiology, Humans, Longitudinal Studies
9.
BMJ; 378: e069881, 2022 Jul 12.
Article in English | MEDLINE | ID: mdl-35820692

ABSTRACT

OBJECTIVE: To externally validate various prognostic models and scoring rules for predicting short term mortality in patients admitted to hospital for covid-19. DESIGN: Two stage individual participant data meta-analysis. SETTING: Secondary and tertiary care. PARTICIPANTS: 46 914 patients across 18 countries, admitted to a hospital with polymerase chain reaction confirmed covid-19 from November 2019 to April 2021. DATA SOURCES: Multiple (clustered) cohorts in Brazil, Belgium, China, Czech Republic, Egypt, France, Iran, Israel, Italy, Mexico, Netherlands, Portugal, Russia, Saudi Arabia, Spain, Sweden, United Kingdom, and United States previously identified by a living systematic review of covid-19 prediction models published in The BMJ, and through PROSPERO, reference checking, and expert knowledge. MODEL SELECTION AND ELIGIBILITY CRITERIA: Prognostic models identified by the living systematic review and through contacting experts. Models were excluded a priori if they had a high risk of bias in the participant domain of PROBAST (prediction model risk of bias assessment tool) or if their applicability was deemed poor. METHODS: Eight prognostic models with diverse predictors were identified and validated. A two stage individual participant data meta-analysis was performed of the estimated model concordance (C) statistic, calibration slope, calibration-in-the-large, and observed to expected ratio (O:E) across the included clusters. MAIN OUTCOME MEASURES: 30 day mortality or in-hospital mortality. RESULTS: Datasets included 27 clusters from 18 different countries and contained data on 46 914 patients. The pooled estimates ranged from 0.67 to 0.80 (C statistic), 0.22 to 1.22 (calibration slope), and 0.18 to 2.59 (O:E ratio) and were prone to substantial between-study heterogeneity. The 4C Mortality Score by Knight et al (pooled C statistic 0.80, 95% confidence interval 0.75 to 0.84, 95% prediction interval 0.72 to 0.86) and the clinical model by Wang et al (0.77, 0.73 to 0.80, 0.63 to 0.87) had the highest discriminative ability. On average, 29% fewer deaths were observed than predicted by the 4C Mortality Score (pooled O:E 0.71, 95% confidence interval 0.45 to 1.11, 95% prediction interval 0.21 to 2.39), 35% fewer than predicted by the Wang clinical model (0.65, 0.52 to 0.82, 0.23 to 1.89), and 4% fewer than predicted by Xie et al's model (0.96, 0.59 to 1.55, 0.21 to 4.28). CONCLUSION: The prognostic value of the included models varied greatly between the data sources. Although the Knight 4C Mortality Score and the Wang clinical model appeared most promising, recalibration (intercept and slope updates) is needed before implementation in routine care.
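The per-cluster performance measures that feed the second (pooling) stage can be computed roughly as below; this is a hedged sketch, where p denotes the model's predicted risks and y the observed outcomes in a single cluster, and all names are illustrative rather than taken from the paper.

```python
# Hedged sketch: validation metrics computed within one cluster of an external
# validation IPD meta-analysis (C statistic, calibration slope,
# calibration-in-the-large, and the observed:expected ratio).
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def cluster_metrics(y, p):
    lp = np.log(p / (1 - p))                                   # linear predictor (logit of predicted risk)
    c_stat = roc_auc_score(y, p)                               # discrimination
    slope = sm.Logit(y, sm.add_constant(lp)).fit(disp=0).params[1]           # calibration slope
    citl = sm.Logit(y, np.ones_like(lp), offset=lp).fit(disp=0).params[0]    # calibration-in-the-large
    oe = y.mean() / p.mean()                                   # observed:expected ratio
    return c_stat, slope, citl, oe
```

In the second stage, these cluster-level estimates (on a suitable scale, for example the logit of the C statistic or the log O:E ratio) would be combined with a random-effects meta-analysis such as the one sketched under entry 4 above.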


Subject(s)
COVID-19, Models, Statistical, Data Analysis, Hospital Mortality, Humans, Prognosis
10.
Res Synth Methods; 12(6): 796-815, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34312994

ABSTRACT

Ideally, a meta-analysis will summarize data from several unbiased studies. Here we look into the less than ideal situation in which contributing studies may be compromised by non-differential measurement error in the exposure variable. Specifically, we consider a meta-analysis for the association between a continuous outcome variable and one or more continuous exposure variables, where the associations may be quantified as regression coefficients of a linear regression model. A flexible Bayesian framework is developed which allows one to obtain appropriate point and interval estimates with varying degrees of prior knowledge about the magnitude of the measurement error. We also demonstrate how, if individual-participant data (IPD) are available, the Bayesian meta-analysis model can adjust for multiple participant-level covariates, these being measured with or without measurement error.
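A standard result for classical, non-differential measurement error illustrates why prior knowledge about the error variance is enough to correct a naive regression coefficient (our notation; the paper's Bayesian model generalizes well beyond this single-covariate case):

```latex
% Classical measurement error for a single continuous exposure (illustrative notation)
\begin{aligned}
  X^{*} &= X + U, \qquad U \sim \mathcal{N}(0, \sigma_u^{2}),\ U \perp X,\\
  \mathbb{E}\!\left[\hat\beta^{*}\right] &\approx \lambda \beta, \qquad
  \lambda = \frac{\sigma_x^{2}}{\sigma_x^{2} + \sigma_u^{2}},\\
  \hat\beta_{\text{corrected}} &= \hat\beta^{*} / \lambda,
\end{aligned}
```

so a prior on the error variance sigma_u^2 (i.e., the degree of measurement error) translates directly into a posterior for the de-attenuated association, with the Bayesian framework propagating the remaining uncertainty.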


Subject(s)
Bayes Theorem, Humans, Linear Models
11.
PLoS One; 16(4): e0250778, 2021.
Article in English | MEDLINE | ID: mdl-33914795

ABSTRACT

INTRODUCTION: Pooling (combining) and analysing observational, longitudinal data at the individual level facilitates inference through increased sample sizes, allows joint estimation of study- and individual-level exposure variables, and better enables the assessment of rare exposures and diseases. Empirical studies leveraging such methods when randomization is unethical or impractical have grown in the health sciences in recent years. The adoption of so-called "causal" methods to account for measured and/or unmeasured confounders is an important addition to the methodological toolkit for understanding the distribution, progression, and consequences of infectious diseases (IDs) and of interventions on IDs. In the face of the COVID-19 pandemic and in the absence of systematic randomization of exposures or interventions, the value of these methods is even more apparent. Yet, to our knowledge, no studies have assessed how causal methods involving the pooling of individual-level, observational, longitudinal data are being applied in ID-related research. In this systematic review, we assess how these methods have been used and reported in ID-related research over the last 10 years. Findings will facilitate evaluation of trends in causal methods for ID research and lead to concrete recommendations for how to apply these methods where gaps in methodological rigor are identified. METHODS AND ANALYSIS: We will apply MeSH and text terms to identify relevant studies from EBSCO (Academic Search Complete, Business Source Premier, CINAHL, EconLit with Full Text, PsychINFO), EMBASE, PubMed, and Web of Science. Eligible studies are those that apply causal methods to account for confounding when assessing the effects of an intervention or exposure on an ID-related outcome using pooled, individual-level data from two or more longitudinal, observational studies. Titles, abstracts, and full-text articles will be independently screened by two reviewers using Covidence software. Discrepancies will be resolved by a third reviewer. This systematic review protocol has been registered with PROSPERO (CRD42020204104).


Subject(s)
Communicable Diseases, Humans, Causality, Communicable Diseases/epidemiology, COVID-19/epidemiology, Longitudinal Studies, Systematic Reviews as Topic
12.
BMJ Open; 11(11): e052969, 2021 Nov 12.
Article in English | MEDLINE | ID: mdl-34772754

ABSTRACT

INTRODUCTION: Causal methods have been adopted and adapted across health disciplines, particularly for the analysis of single studies. However, the sample sizes necessary to best inform decision-making are often not attainable with single studies, making pooled individual-level data analysis invaluable for public health efforts. Researchers commonly implement the causal methods prevailing in their home disciplines, and how these are selected, evaluated, implemented, and reported may vary widely. To our knowledge, no article has yet evaluated trends in the implementation and reporting of causal methods in studies leveraging individual-level data pooled from several studies. We undertake this review to uncover patterns in the implementation and reporting of causal methods used across disciplines in research focused on health outcomes. We will investigate variations in methods to infer causality across disciplines, time, and geography, and identify gaps in the reporting of methods to inform the development of reporting standards and the conversation required to effect change. METHODS AND ANALYSIS: We will search four databases (EBSCO, Embase, PubMed, Web of Science) using a search strategy developed with librarians from three universities (Heidelberg University, Harvard University, and University of California, San Francisco). The search strategy includes terms such as 'pool*', 'harmoniz*', 'cohort*', 'observational', and variations on 'individual-level data'. Four reviewers will independently screen articles using Covidence and extract data from included articles. The extracted data will be analysed descriptively, in tables and graphically, to reveal patterns in methods implementation and reporting. This protocol has been registered with PROSPERO (CRD42020143148). ETHICS AND DISSEMINATION: No ethical approval was required, as only publicly available data were used. The results will be submitted as a manuscript to a peer-reviewed journal, disseminated at conferences if relevant, and published as part of doctoral dissertations in Global Health at the Heidelberg University Hospital.


Subject(s)
Delivery of Health Care, Research Design, Causality, Humans, San Francisco, Systematic Reviews as Topic
13.
Res Synth Methods; 11(2): 148-168, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31759339

ABSTRACT

Many randomized trials evaluate an intervention effect on time-to-event outcomes. Individual participant data (IPD) from such trials can be obtained and combined in a so-called IPD meta-analysis (IPD-MA) to summarize the overall intervention effect. We performed a narrative literature review to provide an overview of methods for conducting an IPD-MA of randomized intervention studies with a time-to-event outcome. We focused on identifying good methodological practice for modeling frailty of trial participants across trials, modeling heterogeneity of intervention effects, choosing appropriate association measures, dealing with (trial differences in) censoring and follow-up times, and addressing time-varying intervention effects and effect modification (interactions). We discuss how to achieve this using parametric and semi-parametric methods, and describe how to implement these in a one-stage or two-stage IPD-MA framework. We recommend exploring heterogeneity of the effect(s) through interaction and non-linear effects. Random effects should be applied to account for residual heterogeneity of the intervention effect. We provide further recommendations, many of which are specific to IPD-MA of time-to-event data from randomized trials examining an intervention effect. We illustrate several key methods in a real IPD-MA, in which IPD from 1225 participants in 5 randomized clinical trials were combined to compare the effects of carbamazepine and valproate on the incidence of epileptic seizures.
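The first stage of a two-stage version of such an IPD-MA can be sketched as follows: a Cox model is fitted within each trial, yielding per-trial log hazard ratios that are then pooled in the second stage (for instance with a random-effects model such as the DerSimonian-Laird sketch under entry 4). The data frame and column names below are hypothetical.

```python
# Hedged sketch: stage 1 of a two-stage IPD meta-analysis of a time-to-event
# outcome. Each trial contributes a log hazard ratio and its standard error.
import pandas as pd
from lifelines import CoxPHFitter

def per_trial_loghr(ipd: pd.DataFrame) -> pd.DataFrame:
    """ipd has columns: trial, time, event, treatment (hypothetical names)."""
    rows = []
    for trial, df in ipd.groupby("trial"):
        cph = CoxPHFitter().fit(df[["time", "event", "treatment"]],
                                duration_col="time", event_col="event")
        rows.append({"trial": trial,
                     "log_hr": cph.params_["treatment"],
                     "se": cph.standard_errors_["treatment"]})
    return pd.DataFrame(rows)

# Stage 2 would pool log_hr across trials, typically with a random-effects model.
```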


Subject(s)
Meta-Analysis as Topic, Randomized Controlled Trials as Topic, Research Design, Bayes Theorem, Carbamazepine/therapeutic use, Data Interpretation, Statistical, Humans, Proportional Hazards Models, Seizures/drug therapy, Software, Time Factors, Valproic Acid/therapeutic use
14.
Clin Breast Cancer; 20(6): e723-e748, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32665191

ABSTRACT

Pathologic nipple discharge (PND) is one of the most common breast-related complaints leading to referral because of its supposed association with breast cancer. The aim of this network meta-analysis (NMA) was to compare the diagnostic efficacy of ultrasound, mammogram, cytology, magnetic resonance imaging (MRI), and ductoscopy in patients with PND, as well as to determine the best diagnostic strategy to assess the risk of malignancy as the cause of PND. The Cochrane Library, PubMed, and Embase were searched to collect relevant literature from the inception of each of the diagnostic methods until January 27, 2020. The search yielded 1472 original citations, of which 36 studies with 3764 patients were finally included for analysis. Direct and indirect comparisons were performed using an NMA approach to evaluate the combined odds ratios and to determine the surface under the cumulative ranking curve (SUCRA) for the diagnostic value of the different imaging methods for the detection of breast cancer in patients with PND. Additionally, a subgroup meta-analysis comparing ductoscopy with MRI when conventional imaging was negative was performed. According to this NMA, sensitivity for the detection of malignancy in patients with PND was highest for MRI (83%), followed by ductoscopy (58%), ultrasound (50%), cytology (38%), and mammogram (22%). Specificity was highest for mammogram (93%), followed by ductoscopy (92%), cytology (90%), MRI (76%), and ultrasound (69%). Diagnostic accuracy was highest for ductoscopy (88%), followed by cytology (82%), MRI (77%), mammogram (76%), and ultrasound (65%). The subgroup meta-analysis (comparing ductoscopy with MRI when ultrasound and mammogram were negative) showed no significant difference in sensitivity, but ductoscopy was statistically significantly better with regard to specificity and diagnostic accuracy. The results from this NMA indicate that although ultrasound and mammogram may remain useful, low-cost first choices for the detection of malignancy in patients with PND, ductoscopy outperforms most imaging techniques (especially MRI) and cytology.
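For readers unfamiliar with SUCRA values, the sketch below shows how they are computed from a matrix of rank probabilities; the numbers are invented placeholders, not the rankings reported in the paper.

```python
# Hedged sketch: SUCRA from rank probabilities (rows = diagnostic tests,
# columns = ranks 1..K, rank 1 = best). Values are made up for illustration.
import numpy as np

rank_probs = np.array([          # e.g. rows could stand for three of the tests
    [0.55, 0.30, 0.15],
    [0.35, 0.40, 0.25],
    [0.10, 0.30, 0.60],
])
K = rank_probs.shape[1]
# SUCRA = average of the cumulative rank probabilities over ranks 1..K-1
sucra = rank_probs.cumsum(axis=1)[:, :-1].sum(axis=1) / (K - 1)
```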


Subject(s)
Breast Neoplasms/diagnosis, Nipple Discharge, Nipples/diagnostic imaging, Breast Neoplasms/economics, Breast Neoplasms/pathology, Diagnosis, Differential, Endoscopy/economics, Endoscopy/statistics & numerical data, Female, Humans, Magnetic Resonance Imaging/economics, Magnetic Resonance Imaging/statistics & numerical data, Mammography/economics, Mammography/statistics & numerical data, Network Meta-Analysis, Nipples/pathology, Sensitivity and Specificity, Ultrasonography, Mammary/economics, Ultrasonography, Mammary/statistics & numerical data
15.
BMJ; 369: m1328, 2020 Apr 7.
Article in English | MEDLINE | ID: mdl-32265220

ABSTRACT

OBJECTIVE: To review and appraise the validity and usefulness of published and preprint reports of prediction models for diagnosing coronavirus disease 2019 (covid-19) in patients with suspected infection, for prognosis of patients with covid-19, and for detecting people in the general population at increased risk of covid-19 infection or being admitted to hospital with the disease. DESIGN: Living systematic review and critical appraisal by the COVID-PRECISE (Precise Risk Estimation to optimise covid-19 Care for Infected or Suspected patients in diverse sEttings) group. DATA SOURCES: PubMed and Embase through Ovid, up to 1 July 2020, supplemented with arXiv, medRxiv, and bioRxiv up to 5 May 2020. STUDY SELECTION: Studies that developed or validated a multivariable covid-19 related prediction model. DATA EXTRACTION: At least two authors independently extracted data using the CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist; risk of bias was assessed using PROBAST (prediction model risk of bias assessment tool). RESULTS: 37 421 titles were screened, and 169 studies describing 232 prediction models were included. The review identified seven models for identifying people at risk in the general population; 118 diagnostic models for detecting covid-19 (75 were based on medical imaging, 10 to diagnose disease severity); and 107 prognostic models for predicting mortality risk, progression to severe disease, intensive care unit admission, ventilation, intubation, or length of hospital stay. The most frequent types of predictors included in the covid-19 prediction models are vital signs, age, comorbidities, and image features. Flu-like symptoms are frequently predictive in diagnostic models, while sex, C reactive protein, and lymphocyte counts are frequent prognostic factors. Reported C index estimates from the strongest form of validation available per model ranged from 0.71 to 0.99 in prediction models for the general population, from 0.65 to more than 0.99 in diagnostic models, and from 0.54 to 0.99 in prognostic models. All models were rated at high or unclear risk of bias, mostly because of non-representative selection of control patients, exclusion of patients who had not experienced the event of interest by the end of the study, high risk of model overfitting, and unclear reporting. Many models did not include a description of the target population (n=27, 12%) or care setting (n=75, 32%), and only 11 (5%) were externally validated by a calibration plot. The Jehi diagnostic model and the 4C mortality score were identified as promising models. CONCLUSION: Prediction models for covid-19 are quickly entering the academic literature to support medical decision making at a time when they are urgently needed. This review indicates that almost all published prediction models are poorly reported and at high risk of bias, such that their reported predictive performance is probably optimistic. However, we have identified two (one diagnostic and one prognostic) promising models that should soon be validated in multiple cohorts, preferably through collaborative efforts and data sharing to also allow an investigation of the stability and heterogeneity in their performance across populations and settings. Details on all reviewed models are publicly available at https://www.covprecise.org/. Methodological guidance as provided in this paper should be followed because unreliable predictions could cause more harm than benefit in guiding clinical decisions. Finally, prediction model authors should adhere to the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guideline. SYSTEMATIC REVIEW REGISTRATION: Protocol https://osf.io/ehc47/, registration https://osf.io/wy245. READERS' NOTE: This article is a living systematic review that will be updated to reflect emerging evidence. Updates may occur for up to two years from the date of original publication. This version is update 3 of the original article published on 7 April 2020 (BMJ 2020;369:m1328). Previous updates can be found as data supplements (https://www.bmj.com/content/369/bmj.m1328/related#datasupp). When citing this paper please consider adding the update number and date of access for clarity.


Subject(s)
Coronavirus Infections/diagnosis, Models, Theoretical, Pneumonia, Viral/diagnosis, COVID-19, Coronavirus, Disease Progression, Hospitalization/statistics & numerical data, Humans, Multivariate Analysis, Pandemics, Prognosis
16.
Diagn Progn Res; 3: 13, 2019.
Article in English | MEDLINE | ID: mdl-31338426

ABSTRACT

Over the past few years, evidence synthesis has become essential to investigate and improve the generalizability of medical research findings. This strategy often involves a meta-analysis to formally summarize quantities of interest, such as relative treatment effect estimates. The use of meta-analysis methods is, however, less straightforward in prognosis research because substantial variation exists in research objectives, analysis methods, and the level of reported evidence. We present a gentle overview of statistical methods that can be used to summarize data from prognostic factor and prognostic model studies. We discuss how aggregate data, individual participant data, or a combination thereof can be combined through meta-analysis methods. Recent examples are provided throughout to illustrate the various methods.
