3.
J Clin Epidemiol; 170: 111364, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38631529

ABSTRACT

OBJECTIVES: To develop a framework to identify and evaluate spin practices and their facilitators in clinical prediction model studies, regardless of the modeling technique. STUDY DESIGN AND SETTING: We followed a three-phase consensus process: (1) a premeeting literature review to generate items for inclusion; (2) a series of structured meetings in which a panel of experienced researchers commented on, discussed, and exchanged viewpoints about the items to be included; and (3) a postmeeting review of the final list of items and examples. Through this iterative consensus process, a framework was derived once all panel researchers agreed. RESULTS: The consensus process involved a panel of eight researchers and resulted in SPIN-Prediction Models, which consists of two categories of spin (misleading interpretation and misleading transportability) and, within these categories, two forms of spin (spin practices and facilitators of spin). We provide criteria and examples for each. CONCLUSION: We propose this guidance to facilitate not only accurate reporting but also accurate interpretation and extrapolation of clinical prediction models, which should improve the reporting quality of subsequent research and reduce research waste.


Subjects
Consensus, Humans, Research Design/standards, Statistical Models
4.
Bone Joint J; 106-B(4): 387-393, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38555933

ABSTRACT

Aims: There is a lack of published evidence on the rate of nonunion in occult scaphoid fractures diagnosed only after MRI. This study reports the rate of delayed union and nonunion in a cohort of patients with MRI-detected acute scaphoid fractures. Methods: This multicentre cohort study at eight centres in the UK included all patients with an acute scaphoid fracture diagnosed on MRI after presenting acutely following wrist trauma with normal radiographs. Data were gathered retrospectively for a minimum of 12 months at each centre. The primary outcome measures were the rates of acute surgery, delayed union, and nonunion. Results: A total of 1,989 patients underwent acute MRI for a suspected scaphoid fracture during the study period, of whom 256 (12.9%) were diagnosed with a previously occult scaphoid fracture. Of the patients with scaphoid fractures, six (2.3%) underwent early surgical fixation, and there were 16 cases (6.3%) of delayed union or nonunion among the remaining 250 patients treated with cast immobilization. Of the nine nonunions (3.5%), seven (2.7%) underwent surgery, one opted for non-surgical treatment, and one failed to attend follow-up. Of the seven delayed unions (2.7%), one (0.4%) was treated with surgery at two months, one (0.4%) did not attend further follow-up, and the remaining five fractures (1.9%) healed after further cast immobilization. All fractures treated with surgery had united at follow-up. There was one complication of surgery (a prominent screw requiring removal). Conclusion: MRI-detected scaphoid fractures are not universally benign: despite appropriate initial immobilization, delayed union or nonunion occurred in over 6% of scaphoid fractures diagnosed only after MRI, and most patients with nonunion required surgery to achieve union. This study adds weight to the evidence base supporting the use of early MRI for these patients.


Subjects
Bone Fractures, Closed Fractures, Ununited Fractures, Hand Injuries, Scaphoid Bone, Wrist Injuries, Humans, Bone Fractures/surgery, Retrospective Studies, Cohort Studies, Scaphoid Bone/injuries, Wrist Injuries/diagnostic imaging, Wrist Injuries/surgery, Internal Fracture Fixation/adverse effects, Closed Fractures/diagnostic imaging, Closed Fractures/etiology, Magnetic Resonance Imaging, Hand Injuries/complications, Ununited Fractures/diagnostic imaging, Ununited Fractures/surgery, Ununited Fractures/complications
5.
J Clin Epidemiol; 169: 111287, 2024 May.
Article in English | MEDLINE | ID: mdl-38387617

ABSTRACT

BACKGROUND AND OBJECTIVE: Protocols are invaluable documents for any research study, especially for prediction model studies. However, the mere existence of a protocol is insufficient if key details are omitted. We reviewed the reporting content and details of the proposed design and methods reported in published protocols for prediction model research. METHODS: We searched MEDLINE, Embase, and the Web of Science Core Collection for protocols of studies developing or validating a diagnostic or prognostic model using any modeling approach in any clinical area. We screened protocols published between January 1, 2022 and June 30, 2022. We used the abstract, introduction, methods, and discussion sections of the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement to inform data extraction. RESULTS: We identified 30 protocols, of which 28 described plans for model development and six for model validation (four planned both). All protocols were open access, including one available as a preprint. Fifteen protocols reported prospectively collecting data. Twenty-one protocols planned to use clustered data, of which one-third planned methods to account for the clustering. A planned sample size was reported for 93% of development and 67% of validation analyses. Sixteen protocols reported details of study registration, and all protocols included a statement on ethics approval. Plans for data sharing were reported in 13 protocols. CONCLUSION: Protocols for prediction model studies are uncommon, and few are made publicly available. Those that are available were reasonably well reported and often described methods that follow current prediction model research recommendations, likely leading to better reporting and methods in the actual study.


Subjects
Guideline Adherence, Humans, Guideline Adherence/statistics & numerical data, Research Design/standards, Statistical Models
6.
BMJ Med; 3(1): e000817, 2024.
Article in English | MEDLINE | ID: mdl-38375077

ABSTRACT

Objectives: To conduct a systematic review of studies externally validating the ADNEX (Assessment of Different Neoplasias in the adnexa) model for diagnosis of ovarian cancer and to present a meta-analysis of its performance. Design: Systematic review and meta-analysis of external validation studies. Data sources: Medline, Embase, Web of Science, Scopus, and Europe PMC, from 15 October 2014 to 15 May 2023. Eligibility criteria for selecting studies: All external validation studies of the performance of ADNEX, with any study design and any study population of patients with an adnexal mass. Two independent reviewers extracted the data. Disagreements were resolved by discussion. Reporting quality of the studies was scored with the TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis) reporting guideline, and methodological conduct and risk of bias with PROBAST (Prediction model Risk Of Bias Assessment Tool). Random effects meta-analyses were performed for the area under the receiver operating characteristic curve (AUC) and for sensitivity, specificity, net benefit, and relative utility at the 10% risk of malignancy threshold. Results: 47 studies (17 007 tumours) were included, with a median study sample size of 261 (range 24-4905). On average, 61% of TRIPOD items were reported. Handling of missing data, justification of sample size, and model calibration were rarely described. 91% of validations were at high risk of bias, mainly because of the unexplained exclusion of incomplete cases, small sample size, or no assessment of calibration. The summary AUC to distinguish benign from malignant tumours in patients who underwent surgery was 0.93 (95% confidence interval 0.92 to 0.94, 95% prediction interval 0.85 to 0.98) for ADNEX with the serum biomarker cancer antigen 125 (CA125) as a predictor (9202 tumours, 43 centres, 18 countries, and 21 studies) and 0.93 (95% confidence interval 0.91 to 0.94, 95% prediction interval 0.85 to 0.98) for ADNEX without CA125 (6309 tumours, 31 centres, 13 countries, and 12 studies). The estimated probability that the model is clinically useful in a new centre was 95% (with CA125) and 91% (without CA125). When the analysis was restricted to studies with a low risk of bias, summary AUC values were 0.93 (with CA125) and 0.91 (without CA125), and the estimated probabilities that the model is clinically useful were 89% (with CA125) and 87% (without CA125). Conclusions: The results of the meta-analysis indicated that ADNEX performed well in distinguishing between benign and malignant tumours in populations from different countries and settings, regardless of whether the serum biomarker CA125 was used as a predictor. A key limitation was that calibration was rarely assessed. Systematic review registration: PROSPERO CRD42022373182.
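As a rough, self-contained illustration of the pooling idea behind the summary AUCs and prediction intervals above (not the review's actual analysis, which used more elaborate multilevel methods), the sketch below pools logit-transformed AUCs with the classic DerSimonian-Laird random-effects estimator; all input numbers are invented.

```python
import numpy as np

def pool_logit_auc(aucs, se_logit):
    """DerSimonian-Laird random-effects pooling of logit-transformed AUCs."""
    theta = np.log(np.asarray(aucs) / (1 - np.asarray(aucs)))  # logit(AUC)
    v = np.asarray(se_logit) ** 2
    w = 1 / v                                      # fixed-effect weights
    theta_fe = np.sum(w * theta) / np.sum(w)
    q = np.sum(w * (theta - theta_fe) ** 2)        # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(theta) - 1)) / c)    # between-study variance
    w_re = 1 / (v + tau2)
    pooled = np.sum(w_re * theta) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    expit = lambda t: 1 / (1 + np.exp(-t))         # back-transform to AUC scale
    ci = (expit(pooled - 1.96 * se), expit(pooled + 1.96 * se))
    # approximate 95% prediction interval for the AUC in a new centre
    half = 1.96 * np.sqrt(tau2 + se ** 2)
    pi = (expit(pooled - half), expit(pooled + half))
    return expit(pooled), ci, pi

# illustrative numbers only, not data from the review
auc, ci, pi = pool_logit_auc([0.91, 0.94, 0.93, 0.95], [0.10, 0.08, 0.12, 0.09])
print(f"pooled AUC {auc:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, 95% PI {pi[0]:.2f}-{pi[1]:.2f}")
```

The prediction interval is wider than the confidence interval because it adds the between-study variance, which is why it better reflects what a new centre should expect.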

9.
J Clin Epidemiol; 165: 111199, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37898461

ABSTRACT

OBJECTIVE: To describe the frequency of open science practices in a contemporary sample of studies developing prognostic models using machine learning methods in the field of oncology. STUDY DESIGN AND SETTING: We conducted a systematic review, searching the MEDLINE database between December 1, 2022, and December 31, 2022, for studies developing a multivariable prognostic model using machine learning methods (as defined by the authors) in oncology. Two authors independently screened records and extracted open science practices. RESULTS: We identified 46 publications describing the development of a multivariable prognostic model. The adoption of open science principles was poor. Only one study reported availability of a study protocol, and only one study was registered. Funding statements and conflict of interest statements were common. Thirty-five studies (76%) provided data sharing statements, with 21 (46%) indicating data were available on request to the authors and seven declaring that data sharing was not applicable. Two studies (4%) shared data. Only 12 studies (26%) provided code sharing statements, including 2 (4%) that indicated the code was available on request to the authors. Only 11 studies (24%) provided sufficient information to allow their model to be used in practice. The use of reporting guidelines was rare: eight studies (18%) mentioned using a reporting guideline, with 4 (10%) using the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis Or Diagnosis statement, 1 (2%) using Minimum Information About Clinical Artificial Intelligence Modeling together with Consolidated Standards Of Reporting Trials-Artificial Intelligence, 1 (2%) using Strengthening The Reporting Of Observational Studies In Epidemiology, 1 (2%) using Standards for Reporting Diagnostic Accuracy Studies, and 1 (2%) using Transparent Reporting of Evaluations with Nonrandomized Designs. CONCLUSION: The adoption of open science principles in oncology studies developing prognostic models using machine learning methods is poor. Guidance and an increased awareness of the benefits and best practices of open science are needed for prediction research in oncology.


Subjects
Artificial Intelligence, Machine Learning, Humans, Prognosis
10.
J Clin Epidemiol; 165: 111206, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37925059

ABSTRACT

OBJECTIVES: Risk of bias assessments are important in meta-analyses of both aggregate and individual participant data (IPD). There is limited evidence on whether and how the risk of bias of included studies or datasets in IPD meta-analyses (IPDMAs) is assessed. We review how risk of bias is currently assessed, reported, and incorporated in IPDMAs of test accuracy and clinical prediction model studies, and provide recommendations for improvement. STUDY DESIGN AND SETTING: We searched PubMed (January 2018-May 2020) to identify IPDMAs of test accuracy and prediction models, then elicited whether each IPDMA assessed the risk of bias of its included studies and, if so, how the assessments were reported and subsequently incorporated into the IPDMA. RESULTS: Forty-nine IPDMAs were included. Nineteen of 27 (70%) test accuracy IPDMAs assessed risk of bias, compared to 5 of 22 (23%) prediction model IPDMAs. Seventeen of 19 (89%) test accuracy IPDMAs used the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool, but no tool was used consistently among prediction model IPDMAs. Of the IPDMAs assessing risk of bias, 7 (37%) test accuracy IPDMAs and 1 (20%) prediction model IPDMA provided details on the information sources (e.g., the original manuscript, IPD, primary investigators) used to inform judgments, and 4 (21%) test accuracy IPDMAs and 1 (20%) prediction model IPDMA reported whether assessments were done before or after obtaining the IPD of the included studies or datasets. Of all included IPDMAs, only seven test accuracy IPDMAs (26%) and one prediction model IPDMA (5%) incorporated risk of bias assessments into their meta-analyses. For future IPDMA projects, we provide guidance on how to adapt tools such as the Prediction model Risk Of Bias ASsessment Tool (for prediction models) and QUADAS-2 (for test accuracy) to assess the risk of bias of included primary studies and their IPD. CONCLUSION: Risk of bias assessments and their reporting need to be improved in IPDMAs of test accuracy and, especially, prediction model studies. Using recommended tools, both before and after IPD are obtained, will address this.


Subjects
Data Accuracy, Statistical Models, Humans, Prognosis, Bias
11.
BMC Med; 21(1): 502, 2023 Dec 18.
Article in English | MEDLINE | ID: mdl-38110939

ABSTRACT

BACKGROUND: Each year, thousands of clinical prediction models are developed to make predictions (e.g. estimated risk) to inform individual diagnosis and prognosis in healthcare. However, most are not reliable for use in clinical practice. MAIN BODY: We discuss how the creation of a prediction model (e.g. using regression or machine learning methods) depends on the sample and size of data used to develop it: were a different sample of the same size taken from the same overarching population, the developed model could be very different, even when the same model development methods are used. In other words, for each model created, there exists a multiverse of other potential models for that sample size and, crucially, an individual's predicted value (e.g. estimated risk) may vary greatly across this multiverse. The more an individual's prediction varies across the multiverse, the greater the instability. We show how small development datasets lead to more divergent models in the multiverse, often with vastly unstable individual predictions, and explain how this can be exposed by using bootstrapping and presenting instability plots. We recommend that healthcare researchers use large model development datasets to reduce instability concerns. This is especially important to ensure reliability across subgroups and improve model fairness in practice. CONCLUSIONS: Instability is concerning because an individual's predicted value is used to guide their counselling, resource prioritisation, and clinical decision making. If different samples lead to different models with very different predictions for the same individual, this should cast doubt on using a particular model for that individual. Therefore, visualising, quantifying, and reporting the instability in individual-level predictions is essential when proposing a new model.
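A minimal sketch of the bootstrapping approach described above, using a synthetic dataset and scikit-learn's logistic regression (all names and settings are illustrative). Each bootstrap resample plays the role of "a different sample of the same size from the same population", and the spread of an individual's predictions across the refitted models exposes instability.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Model developed on the original sample
original = LogisticRegression(max_iter=1000).fit(X, y)
p_orig = original.predict_proba(X)[:, 1]

# Refit the same modelling strategy on B bootstrap resamples and store each
# individual's predicted risk from every bootstrap model
B = 200
boot_preds = np.empty((B, len(y)))
for b in range(B):
    idx = rng.integers(0, len(y), len(y))
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    boot_preds[b] = m.predict_proba(X)[:, 1]

# Instability: spread of bootstrap predictions per individual
# (an instability plot scatters boot_preds against p_orig)
instability = boot_preds.std(axis=0)
print(f"median bootstrap SD of predicted risk: {np.median(instability):.3f}")
print(f"example individual: original {p_orig[0]:.2f}, "
      f"bootstrap range {boot_preds[:, 0].min():.2f}-{boot_preds[:, 0].max():.2f}")
```

Rerunning with a smaller n_samples widens the per-individual bootstrap ranges, which is exactly the small-dataset instability the paper warns about.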


Subjects
Statistical Models, Humans, Prognosis, Reproducibility of Results
12.
J Orthop Sports Phys Ther; 53(12): 1-13, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37860866

ABSTRACT

OBJECTIVE: To investigate open science practices in research published in the top 5 sports medicine journals between May 1, 2022, and October 1, 2022. DESIGN: A meta-research systematic review. LITERATURE SEARCH: Eligible studies were identified through a MEDLINE search. STUDY SELECTION CRITERIA: We included original scientific research published in 2022 in one of the top 5 sports medicine journals as ranked by Clarivate: (1) British Journal of Sports Medicine, (2) Journal of Sport and Health Science, (3) American Journal of Sports Medicine, (4) Medicine and Science in Sports and Exercise, and (5) Sports Medicine-Open. Studies were excluded if they were systematic reviews, qualitative research, gray literature, or animal or cadaver models. DATA SYNTHESIS: Open science practices were extracted in accordance with the Transparency and Openness Promotion guidelines, along with patient and public involvement. RESULTS: Two hundred forty-three studies were included. The median number of open science practices per study was 2, out of a maximum of 12 (range: 0-8; interquartile range: 2). Two hundred thirty-four studies (96%, 95% confidence interval [CI]: 94%-99%) provided an author conflict-of-interest statement and 163 (67%, 95% CI: 62%-73%) reported funding. Twenty-one studies (9%, 95% CI: 5%-12%) provided open-access data. Fifty-four studies (22%, 95% CI: 17%-27%) included a data availability statement and 3 (1%, 95% CI: 0%-3%) made code available. Seventy-six studies (32%, 95% CI: 25%-37%) had transparent materials and 30 (12%, 95% CI: 8%-16%) used a reporting guideline. Twenty-eight studies (12%, 95% CI: 8%-16%) were preregistered. Six studies (3%, 95% CI: 1%-4%) published a protocol. Four studies (2%, 95% CI: 0%-3%) reported an a priori analysis plan. Seven studies (3%, 95% CI: 1%-5%) reported patient and public involvement. CONCLUSION: Open science practices in the sports medicine field are extremely limited. The least followed practices were sharing code, data, and analysis plans. J Orthop Sports Phys Ther 2023;53(12):1-13. Epub 20 October 2023. doi:10.2519/jospt.2023.12016.


Subjects
Exercise, Sports Medicine, Humans, Confidentiality
14.
BMC Med Res Methodol; 23(1): 188, 2023 Aug 19.
Article in English | MEDLINE | ID: mdl-37598153

ABSTRACT

BACKGROUND: Having an appropriate sample size is important when developing a clinical prediction model. We aimed to review how sample size is considered in studies developing a prediction model for a binary outcome. METHODS: We searched PubMed for studies published between 01/07/2020 and 30/07/2020 and reviewed the sample size calculations used to develop the prediction models. Using the available information, we calculated the minimum sample size that would be needed to estimate overall risk and minimise overfitting in each study, and summarised the difference between the calculated and the used sample size. RESULTS: A total of 119 studies were included, of which nine (8%) provided a sample size justification. The recommended minimum sample size could be calculated for 94 studies: 73% (95% CI: 63-82%) used sample sizes lower than required to estimate overall risk and minimise overfitting, including 26% of studies that used sample sizes lower than required to estimate overall risk alone. A similar proportion of studies did not meet the ≥ 10 events per variable (EPV) criterion (75%, 95% CI: 66-84%). The median deficit in the number of events used to develop a model was 75 (IQR: 234 lower to 7 higher), which reduced to 63 if the total available data (before any data splitting) was used (IQR: 225 lower to 7 higher). Studies that met the minimum required sample size had a median c-statistic of 0.84 (IQR: 0.80 to 0.90), and studies where the minimum sample size was not met had a median c-statistic of 0.83 (IQR: 0.75 to 0.90). Studies that met the ≥ 10 EPV criterion had a median c-statistic of 0.80 (IQR: 0.73 to 0.84). CONCLUSIONS: Prediction models are often developed with no sample size calculation, and as a consequence many are too small to precisely estimate the overall risk. We encourage researchers to justify, perform, and report sample size calculations when developing a prediction model.
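For context, the "minimum sample size to estimate overall risk and minimise overfitting" referred to above follows criteria of the kind proposed by Riley et al. (implemented more fully in the pmsampsize packages for R and Stata). The sketch below is a simplified version of two of those criteria, with invented inputs; function and argument names are illustrative only.

```python
import numpy as np

def min_dev_sample_size(n_params, prevalence, r2_cs, shrinkage=0.9, margin=0.05):
    """Simplified sketch of two minimum sample size criteria for developing
    a binary-outcome prediction model (after Riley et al.)."""
    # Criterion A: limit expected overfitting by targeting a global
    # shrinkage factor S (e.g. 0.9), given the anticipated Cox-Snell R^2
    n_overfit = n_params / ((shrinkage - 1) * np.log(1 - r2_cs / shrinkage))
    # Criterion B: estimate the overall outcome risk to within +/- margin
    # (width of a 95% confidence interval for a proportion)
    n_risk = (1.96 / margin) ** 2 * prevalence * (1 - prevalence)
    return int(np.ceil(max(n_overfit, n_risk)))

# e.g. 10 candidate parameters, 10% outcome prevalence, anticipated R^2_CS of 0.1
n = min_dev_sample_size(n_params=10, prevalence=0.1, r2_cs=0.1)
print(n, "participants,", round(n * 0.1), "events,", round(n * 0.1 / 10, 1), "EPV")
```

Note that the resulting EPV can sit below or above 10; this is why the review treats the formal calculation and the blanket ≥ 10 EPV rule of thumb as related but distinct criteria.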


Subjects
Statistical Models, Researchers, Humans, Prognosis, PubMed
15.
J Clin Epidemiol; 161: 140-151, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37536504

ABSTRACT

BACKGROUND AND OBJECTIVES: When developing a clinical prediction model, assuming a linear relationship between the continuous predictors and the outcome is not recommended. Incorrect specification of the functional form of continuous predictors can reduce predictive accuracy. We examined how continuous predictors are handled in studies developing a clinical prediction model. METHODS: We searched PubMed for clinical prediction model studies developing a logistic regression model for a binary outcome, published between July 01, 2020, and July 30, 2020. RESULTS: In total, 118 studies were included in the review; 18 studies (15%) assessed the linearity assumption or used methods to handle nonlinearity, and 100 studies (85%) did not. Transformations and splines were the most common approaches to handling nonlinearity, used in 7 (n = 7/18, 39%) and 6 (n = 6/18, 33%) studies, respectively. Categorization was the most commonly used method to handle continuous predictors (n = 67/118, 57%), and most of these studies used dichotomization (n = 40/67, 60%). Only ten models included nonlinear terms in the final model (n = 10/18, 56%). CONCLUSION: Although it is widely recommended not to categorize continuous predictors or assume a linear relationship between the outcome and continuous predictors, most studies categorize continuous predictors, few assess the linearity assumption, and even fewer use methods to account for nonlinearity. Methodological guidance is provided to help researchers handle continuous predictors when developing a clinical prediction model.
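To make the contrast concrete, here is a small sketch (synthetic data; variable names invented) comparing the discouraged dichotomization of a continuous predictor with a spline-based functional form, using statsmodels with patsy's bs() cubic spline basis:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({"age": rng.uniform(20, 90, n)})
# Simulate a nonlinear (U-shaped) relationship between age and outcome risk
logit = 0.002 * (df["age"] - 55) ** 2 - 1.5
df["y"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Discouraged: dichotomising the predictor discards information and
# cannot capture the U-shape
dichot = smf.logit("y ~ I(age > 55)", data=df).fit(disp=0)

# Preferred: model the functional form, e.g. with a cubic spline basis
spline = smf.logit("y ~ bs(age, df=4)", data=df).fit(disp=0)

print(f"AIC dichotomised: {dichot.aic:.1f}   AIC spline: {spline.aic:.1f}")
```

With a genuinely nonlinear predictor-outcome relationship, the spline model fits markedly better (lower AIC), illustrating why the review recommends assessing the linearity assumption rather than defaulting to categorization.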


Subjects
Statistical Models, Humans, Logistic Models, Prognosis
16.
BJR Open; 5(1): 20220033, 2023.
Article in English | MEDLINE | ID: mdl-37389003

ABSTRACT

Objective: This study aimed to describe the methodologies used to develop and evaluate models that use artificial intelligence (AI) to analyse lung images in order to detect, segment (outline the borders of), or classify pulmonary nodules as benign or malignant. Methods: In October 2019, we systematically searched the literature for original studies published between 2018 and 2019 that described prediction models using AI to evaluate human pulmonary nodules on diagnostic chest images. Two evaluators independently extracted information from the studies, such as study aims, sample size, AI type, patient characteristics, and performance. We summarised the data descriptively. Results: The review included 153 studies: 136 (89%) development-only studies, 12 (8%) development and validation studies, and 5 (3%) validation-only studies. CT scans were the most common image type used (83%), often acquired from public databases (58%). Eight studies (5%) compared model outputs with biopsy results. Only 41 studies (27%) reported patient characteristics. The models were based on different units of analysis, such as patients, images, nodules, or image slices or patches. Conclusion: The methods used to develop and evaluate prediction models using AI to detect, segment, or classify pulmonary nodules in medical imaging vary, are poorly reported, and are therefore difficult to evaluate. Transparent and complete reporting of methods, results, and code would fill the gaps in information we observed in the study publications. Advances in knowledge: We reviewed the methodology of AI models for detecting nodules on lung images and found that the models were poorly reported, often lacked a description of patient characteristics, and rarely compared model outputs with biopsy results. When lung biopsy is not available, Lung-RADS could help standardise comparisons between the human radiologist and the machine. The field of radiology should not abandon principles from diagnostic accuracy studies, such as choosing a correct ground truth, just because AI is used. Clear and complete reporting of the reference standard used would help radiologists trust the performance that AI models claim to have. This review presents clear recommendations on the essential methodological aspects of diagnostic models that should be incorporated in studies using AI to help detect or segment lung nodules. It also reinforces the need for more complete and transparent reporting, which can be aided by using the recommended reporting guidelines.

17.
JAMA Netw Open; 6(6): e2317651, 2023 Jun 01.
Article in English | MEDLINE | ID: mdl-37294569

ABSTRACT

Importance: Numerous studies have shown that adherence to reporting guidelines is suboptimal. Objective: To evaluate whether asking peer reviewers to check if specific reporting guideline items were adequately reported would improve adherence to reporting guidelines in published articles. Design, Setting, and Participants: Two parallel-group, superiority randomized trials were performed using manuscripts submitted to 7 biomedical journals (5 from the BMJ Publishing Group and 2 from the Public Library of Science) as the unit of randomization, with peer reviewers allocated to the intervention or control group. Interventions: The first trial (CONSORT-PR) focused on manuscripts that presented randomized clinical trial (RCT) results and reported following the Consolidated Standards of Reporting Trials (CONSORT) guideline, and the second trial (SPIRIT-PR) focused on manuscripts that presented RCT protocols and reported following the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) guideline. The CONSORT-PR trial included manuscripts that described RCT primary results (submitted July 2019 to July 2021). The SPIRIT-PR trial included manuscripts that contained RCT protocols (submitted June 2020 to May 2021). Manuscripts in both trials were randomized (1:1) to the intervention or control group; the control group received usual journal practice. In the intervention group of both trials, peer reviewers received an email from the journal asking them to check whether the 10 most important and most poorly reported CONSORT (for CONSORT-PR) or SPIRIT (for SPIRIT-PR) items were adequately reported in the manuscript. Peer reviewers and authors were not informed of the purpose of the study, and outcome assessors were blinded. Main Outcomes and Measures: The difference between the intervention and control groups in the mean proportion of the 10 CONSORT or SPIRIT items adequately reported in published articles. Results: In the CONSORT-PR trial, 510 manuscripts were randomized, of which 243 were published (122 in the intervention group and 121 in the control group). A mean proportion of 69.3% (95% CI, 66.0%-72.7%) of the 10 CONSORT items were adequately reported in the intervention group and 66.6% (95% CI, 62.5%-70.7%) in the control group (mean difference, 2.7%; 95% CI, -2.6% to 8.0%). In the SPIRIT-PR trial, of the 244 randomized manuscripts, 178 were published (90 in the intervention group and 88 in the control group). A mean proportion of 46.1% (95% CI, 41.8%-50.4%) of the 10 SPIRIT items were adequately reported in the intervention group and 45.6% (95% CI, 41.7%-49.4%) in the control group (mean difference, 0.5%; 95% CI, -5.2% to 6.3%). Conclusions and Relevance: These 2 randomized trials found that the tested intervention did not meaningfully improve reporting completeness in published articles. Other interventions should be assessed and considered in the future. Trial Registration: ClinicalTrials.gov Identifiers: NCT05820971 (CONSORT-PR) and NCT05820984 (SPIRIT-PR).


Subjects
Publications, Humans, Randomized Controlled Trials as Topic, Reference Standards, Control Groups
18.
Sports Med; 53(10): 1841-1849, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37160562

ABSTRACT

Clinical prediction models in sports medicine that use regression or machine learning techniques have become more widely published, used, and disseminated. However, before implementation in practice, models require thorough evaluation. Strong, replicable methods and transparent reporting allow practitioners and researchers to make independent judgments about a model's validity, performance, clinical usefulness, and confidence that it will do no harm. This is not reflected in the sports medicine literature: as shown in a recent systematic review of models for predicting sports injury, most models are characterized by poor methodology, incomplete reporting, and inadequate performance evaluation, leading to unreliable predictions and weak clinical utility within their intended sport population. Because of the constraints imposed by data from individual teams, the development of accurate, reliable, and useful models relies heavily on external validation. However, a barrier to collaboration is the desire to maintain a competitive advantage: a team's proprietary information is often perceived as high value, and so these 'trade secrets' are frequently guarded. The same applies to commercially available models, as developers are unwilling to share proprietary (and potentially profitable) development and validation information. In this Current Opinion, we: (1) argue that open science is essential for improving sport prediction models and (2) critically examine sport prediction models for open science practices.


Subjects
Athletic Injuries, Sports Medicine, Sports, Humans, Taboo, Sports Medicine/methods
19.
J Clin Epidemiol; 159: 10-30, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37156342

ABSTRACT

BACKGROUND: Blood transfusion can be a lifesaving intervention after perioperative blood loss. Many prediction models have been developed to identify patients most likely to require blood transfusion during elective surgery, but it is unclear whether any are suitable for clinical practice. STUDY DESIGN AND SETTING: We conducted a systematic review, searching the MEDLINE, Embase, PubMed, The Cochrane Library, Transfusion Evidence Library, Scopus, and Web of Science databases for studies reporting the development or validation of a blood transfusion prediction model in elective surgery patients between January 1, 2000 and June 30, 2021. We extracted study characteristics and the discrimination performance (c-statistics) of final models, and assessed risk of bias using the Prediction model Risk Of Bias ASsessment Tool (PROBAST). RESULTS: We reviewed 66 studies (72 developed and 48 externally validated models). Pooled c-statistics of externally validated models ranged from 0.67 to 0.78. Most developed and validated models were at high risk of bias owing to the handling of predictors, the validation methods used, and small sample sizes. CONCLUSION: Most blood transfusion prediction models are at high risk of bias and suffer from poor reporting and methodological quality, which must be addressed before they can safely be used in clinical practice.


Subjects
Blood Transfusion, Statistical Models, Humans, Prognosis, Blood Transfusion/methods, Hemorrhage