Results 1 - 2 of 2
1.
Clin Transl Radiat Oncol; 43: 100677, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37822705

ABSTRACT

Background and purpose: Head and neck cancer (HNC) patients treated with radiotherapy often suffer from radiation-induced toxicities. Normal Tissue Complication Probability (NTCP) modeling can be used to estimate the probability of developing these toxicities based on patient, tumor, treatment, and dose characteristics. Since the NTCP models currently in use are developed with supervised methods that discard unlabeled patient data, we assessed whether adding unlabeled patient data through semi-supervised modeling would improve predictive performance.

Materials and methods: The semi-supervised method of self-training was compared with supervised regression methods, with and without prior multiple imputation by chained equations (MICE). The models were developed for the most common toxicity outcomes in HNC patients, xerostomia (dry mouth) and dysphagia (difficulty swallowing), measured six months after treatment, in a development cohort of 750 HNC patients. The models were externally validated in a validation cohort of 395 HNC patients. Model performance was assessed by discrimination and calibration.

Results: At external validation, neither MICE nor self-training improved discrimination or calibration compared with the current regression models. The relative performance of the different models also did not change when the amount of (labeled) data available for model development was decreased. Models using ridge regression outperformed the logistic models for the dysphagia outcome.

Conclusion: Since adding unlabeled patient data through the semi-supervised method of self-training, or applying MICE, yielded no apparent gain, supervised regression models remain preferable for current NTCP modeling in HNC patients.
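
To make the comparison concrete, here is a minimal sketch (Python with scikit-learn, on synthetic data; not the authors' code or cohorts) of the two modeling routes the paper contrasts: a supervised L2-penalised ("ridge") logistic regression fit on labeled patients only, versus self-training that also consumes unlabeled patients, with missing values filled beforehand by a chained-equations (MICE-style) imputer. The cohort sizes (750 development, 395 validation), missingness rate, and unlabeled fraction are illustrative assumptions.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for a clinical dataset: 750 development + 395 validation patients.
X, y = make_classification(n_samples=1145, n_features=10, random_state=0)

# Hide 10% of values to mimic incomplete records.
X[rng.random(X.shape) < 0.10] = np.nan

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=395, random_state=0)

# Fill missing values with a chained-equations (MICE-style) imputer fit on the
# development cohort only, to avoid leaking validation information.
imputer = IterativeImputer(random_state=0).fit(X_dev)
X_dev, X_val = imputer.transform(X_dev), imputer.transform(X_val)

# Pretend a third of the development cohort lacks a toxicity label (-1 = unlabeled).
y_semi = y_dev.copy()
y_semi[rng.random(len(y_semi)) < 0.33] = -1
labeled = y_semi != -1

# Supervised baseline: L2-penalised ("ridge") logistic regression, labeled rows only.
supervised = LogisticRegression(penalty="l2", max_iter=1000)
supervised.fit(X_dev[labeled], y_dev[labeled])

# Semi-supervised alternative: self-training pseudo-labels confident unlabeled patients.
semi = SelfTrainingClassifier(LogisticRegression(penalty="l2", max_iter=1000))
semi.fit(X_dev, y_semi)

print("supervised AUC:   ", roc_auc_score(y_val, supervised.predict_proba(X_val)[:, 1]))
print("self-training AUC:", roc_auc_score(y_val, semi.predict_proba(X_val)[:, 1]))

Whether self-training helps depends on whether the pseudo-labels add real information; on its external validation cohort, the paper found no gain over the supervised baseline.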

2.
J Clin Epidemiol; 142: 218-229, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34798287

ABSTRACT

OBJECTIVES: Missing data are a common problem during the development, evaluation, and implementation of prediction models. Although machine learning (ML) methods are often said to be capable of circumventing missing data, it is unclear how these methods are used in medical research. We aimed to find out whether, and how well, prediction model studies using machine learning report on their handling of missing data.

STUDY DESIGN AND SETTING: We systematically searched the literature for papers published between 2018 and 2019 reporting primary studies that developed and/or validated clinical prediction models using any supervised ML methodology, across medical fields. From the retrieved studies we extracted information about the amount and nature of the missing data (e.g., missing completely at random, potential reasons for missingness) and the way they were handled.

RESULTS: We identified 152 machine learning-based clinical prediction model studies. A substantial share of these 152 papers (n = 56/152) reported nothing on missing data. A majority (n = 96/152) reported details on the handling of missing data (e.g., methods used), though many of these (n = 46/96) did not report the amount of missingness in the data. In these 96 papers, the authors only sometimes reported possible reasons for missingness (n = 7/96) and information about missing data mechanisms (n = 8/96). The most common approach for handling missing data was deletion (n = 65/96), mostly via complete-case analysis (CCA) (n = 43/96). Very few studies used multiple imputation (n = 8/96) or built-in mechanisms such as surrogate splits (n = 7/96) that directly address missing data during the development, validation, or implementation of the prediction model.

CONCLUSION: Though missing values are highly common in any type of medical research, and certainly in research based on routine healthcare data, a majority of prediction model studies using machine learning do not report sufficient information on the presence and handling of missing data. Strategies that simply omit patient data are unfortunately the most frequently used, even though deletion is generally advised against and is well known to cause bias and loss of analytical power, both in prediction model development and in estimates of predictive accuracy. Prediction model researchers should be much more aware of alternative methodologies for addressing missing data.
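
For illustration, the sketch below (Python/scikit-learn, on synthetic data; an assumption-laden example, not drawn from any reviewed study) contrasts the three strategies the review tallies: complete-case analysis (deletion), multiple imputation with pooled predictions, and a learner with built-in missing-value handling (HistGradientBoostingClassifier, which routes NaNs natively during tree splits, similar in spirit to the surrogate splits the review mentions). All models are scored on the complete test rows so the AUCs are comparable.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=2000, n_features=8, random_state=1)
X[rng.random(X.shape) < 0.15] = np.nan  # inject 15% missingness at random

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
complete_te = ~np.isnan(X_te).any(axis=1)  # evaluate on complete test rows only

# 1) Complete-case analysis: discard every training row with any missing value.
cc = ~np.isnan(X_tr).any(axis=1)
cca = LogisticRegression(max_iter=1000).fit(X_tr[cc], y_tr[cc])

# 2) Multiple imputation: draw several stochastic imputations, pool the predictions.
probs = []
for i in range(5):
    imp = IterativeImputer(sample_posterior=True, random_state=i).fit(X_tr)
    model = LogisticRegression(max_iter=1000).fit(imp.transform(X_tr), y_tr)
    probs.append(model.predict_proba(X_te[complete_te])[:, 1])
mi_prob = np.mean(probs, axis=0)

# 3) Built-in handling: gradient boosting that accepts NaNs in fit and predict.
gb = HistGradientBoostingClassifier(random_state=1).fit(X_tr, y_tr)

print("complete-case AUC:", roc_auc_score(y_te[complete_te],
                                          cca.predict_proba(X_te[complete_te])[:, 1]))
print("multiple-imp AUC: ", roc_auc_score(y_te[complete_te], mi_prob))
print("built-in AUC:     ", roc_auc_score(y_te[complete_te],
                                          gb.predict_proba(X_te[complete_te])[:, 1]))

Complete-case analysis shrinks the training set (here roughly a quarter of the rows survive) and, when missingness is not completely at random, biases the fit, which is why the review advises against it.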


Subjects
Machine Learning; Statistical Models; Bias; Data Interpretation, Statistical; Humans; Prognosis