1.
Osteoarthr Cartil Open; 5(4): 100406, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37649530

ABSTRACT

Objectives: To efficiently assess the disease-modifying potential of new osteoarthritis treatments, clinical trials need progression-enriched patient populations. To assess whether the application of machine learning results in patient selection enrichment, we developed a machine learning recruitment strategy targeting progressive patients and validated it in the IMI-APPROACH knee osteoarthritis prospective study. Design: We designed a two-stage recruitment process supported by machine learning models trained to rank candidates by the likelihood of progression. First-stage models used data from pre-existing cohorts to select patients for a screening visit. The second-stage model used screening data to inform the final inclusion. The effectiveness of this process was evaluated using the actual 24-month progression. Results: From 3500 candidate patients, 433 with knee osteoarthritis were screened, 297 were enrolled, and 247 completed the 2-year follow-up visit. We observed progression related to pain (P, 30%), structure (S, 13%), and combined pain and structure (P + S, 5%), and a proportion of non-progressors (N, 52%) that was ∼15% lower than in an unenriched population. Our model predicted these outcomes with an AUC of 0.86 [95% CI, 0.81-0.90] for pain-related progression and an AUC of 0.61 [95% CI, 0.52-0.70] for structure-related progression. Progressors were ranked higher than non-progressors for P + S (median rank 65 vs 143, AUC = 0.75), P (median rank 77 vs 143, AUC = 0.71), and S patients (median rank 107 vs 143, AUC = 0.57). Conclusions: The machine learning-supported recruitment resulted in an enriched selection of progressive patients. Further research is needed to improve structural progression prediction and to assess this strategy in an interventional trial.
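
The two-stage ranking idea generalises beyond this study. Below is a minimal sketch of such a process in Python with scikit-learn; the models, feature counts, and data are illustrative assumptions, not the IMI-APPROACH pipeline.

# Minimal sketch of a two-stage, ML-supported recruitment process.
# Models, feature counts, and data are synthetic stand-ins; the actual
# IMI-APPROACH models and features are not reproduced here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Stage 1: a model trained on pre-existing cohort data ranks candidates
# by predicted likelihood of progression; the top-ranked patients are
# invited to a screening visit.
X_hist, y_hist = rng.normal(size=(5000, 20)), rng.integers(0, 2, 5000)
stage1 = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_hist, y_hist)

X_candidates = rng.normal(size=(3500, 20))
risk = stage1.predict_proba(X_candidates)[:, 1]
screened = np.argsort(risk)[::-1][:433]          # invite the 433 highest-ranked

# Stage 2: a model trained on richer screening-visit measurements informs
# the final inclusion decision for the screened patients.
X_screen_hist, y_screen_hist = rng.normal(size=(1000, 35)), rng.integers(0, 2, 1000)
stage2 = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_screen_hist, y_screen_hist)

X_screen = rng.normal(size=(433, 35))
enrolled = np.argsort(stage2.predict_proba(X_screen)[:, 1])[::-1][:297]

# After follow-up, enrichment is evaluated against the observed 24-month
# progression (simulated here) via the AUC of the inclusion scores.
y_observed = rng.integers(0, 2, 433)
print(roc_auc_score(y_observed, stage2.predict_proba(X_screen)[:, 1]))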

2.
Emerg Themes Epidemiol; 20(1): 1, 2023 Feb 16.
Article in English | MEDLINE | ID: mdl-36797732

ABSTRACT

Low- and middle-income countries continue to use verbal autopsies (VAs), a World Health Organisation-recommended method, to ascertain causes of death in settings where coverage of vital registration systems is not yet comprehensive. Whilst the adoption of VA has resulted in major improvements in estimating cause-specific mortality in many settings, well-documented limitations have been identified relating to the standardisation of the processes involved. The WHO has invested significant resources into addressing concerns in some of these areas; there remain, however, enduring challenges, particularly in operationalising VA surveys for deaths amongst women and children, challenges which have measurable impacts on the quality of data collected and on the accuracy of determining the final cause of death. In this paper we describe some of our key experiences and recommendations in conducting VAs from over two decades of evaluating seminal trials of maternal and child health interventions in rural Ghana. We focus on challenges along the entire VA pathway that can affect the success rates of ascertaining the final cause of death, and on the lessons we have learned to optimise the procedures. We highlight our experiences of the value of open history narratives in VAs and the training and skills required to optimise the quality of the information collected. We describe key issues in methods for ascertaining cause of death and argue that both automated and physician-based methods can be valid depending on the setting. We further summarise how increasingly popular information technology methods may be used to facilitate the processes described. Verbal autopsy is a vital means of increasing the coverage of accurate mortality statistics in low- and middle-income settings; operationalisation, however, remains problematic. The lessons we share here in conducting VAs within a long-term surveillance system in Ghana will be applicable to researchers and policymakers in many similar settings.

3.
Front Big Data; 4: 613047, 2021.
Article in English | MEDLINE | ID: mdl-34124650

ABSTRACT

Alzheimer's disease (AD) has its onset many decades before dementia develops, and work is ongoing to characterise individuals at risk of decline on the basis of early detection through biomarker and cognitive testing, as well as the presence or absence of identified risk factors. Risk prediction models for AD based on various computational approaches, including machine learning, are being developed with promising results. However, these approaches have been criticised because they are unable to generalise, owing to over-reliance on one data source, poor internal and external validation, and a poor understanding of the prediction models themselves, thereby limiting their clinical utility. We propose a framework that employs a transfer-learning paradigm with ensemble learning algorithms to develop explainable, personalised risk prediction models for dementia. Our prediction models, known as source models, are initially trained and tested using a publicly available dataset (n = 84,856, mean age = 69 years) with 14 years of follow-up samples to predict the individual risk of developing dementia. The decision boundaries of the best source model are then updated using an alternative dataset from a different and much younger population (n = 473, mean age = 52 years) to obtain an additional prediction model, known as the target model. We further apply the SHapley Additive exPlanations (SHAP) algorithm to visualise the risk factors responsible for the prediction at both the population and individual levels. The best source model achieves a geometric accuracy of 87%, a specificity of 99%, and a sensitivity of 76%. In comparison to a baseline model, our target model achieves better performance across several metrics, with increases of 16.9% in geometric accuracy, 2.7% in specificity, 19.1% in sensitivity, and 11% in area under the receiver operating characteristic curve (AUROC), and a transfer-learning efficacy rate of 20.6%. The strength of our approach lies in the large sample size used to train the source model, the transfer of the resulting "knowledge" to another dataset from a different, undiagnosed population for the early detection and prediction of dementia risk, and the ability to visualise the interaction of the risk factors that drive the prediction. This approach has direct clinical utility.
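
As a rough illustration of this source-to-target transfer and the SHAP step, the sketch below uses gradient boosting with warm-started refitting as the boundary-update mechanism; this is an assumption for illustration, not the authors' ensemble algorithm, and all data are synthetic.

# Illustrative transfer-learning sketch: a "source" model is trained on a
# large dataset, then its decision boundary is updated on a smaller target
# dataset by warm-starting additional boosting rounds. SHAP then attributes
# each prediction to individual risk factors. All data are synthetic.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)

# Source population (stand-in for the large cohort, n = 84,856).
X_src = rng.normal(size=(5000, 10))
y_src = (X_src[:, 0] + 0.5 * X_src[:, 1] + rng.normal(size=5000) > 0).astype(int)

model = GradientBoostingClassifier(n_estimators=100, warm_start=True, random_state=0)
model.fit(X_src, y_src)  # source model

# Target population (stand-in for the younger cohort, n = 473): with
# warm_start=True, raising n_estimators and refitting adds trees fitted
# to the new data, shifting the decision boundary toward the target.
X_tgt = rng.normal(loc=0.3, size=(473, 10))
y_tgt = (X_tgt[:, 0] + 0.5 * X_tgt[:, 1] + rng.normal(size=473) > 0).astype(int)
model.set_params(n_estimators=150)
model.fit(X_tgt, y_tgt)  # target model

# SHAP values give per-individual, per-feature contributions to the
# predicted risk, supporting explanation at the individual level.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_tgt[:5])
print(np.asarray(shap_values).shape)  # (5, 10): five patients, ten features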

5.
Alzheimers Dement (N Y); 5: 563-569, 2019.
Article in English | MEDLINE | ID: mdl-31646170

ABSTRACT

INTRODUCTION: Numerous dementia risk prediction models have been developed in the past decade. However, methodological limitations of the analytical tools used may hamper their ability to generate reliable dementia risk scores. We aimed to review the methodologies used. METHODS: We systematically reviewed the literature from March 2014 to September 2018 for publications presenting a dementia risk prediction model, and we critically discuss the analytical techniques used. RESULTS: In total, 137 publications were included in the qualitative synthesis. Three techniques were identified as the most commonly used methodologies: machine learning, logistic regression, and Cox regression. DISCUSSION: We identified three major methodological weaknesses: (1) over-reliance on one data source, (2) poor verification of the statistical assumptions of Cox and logistic regression, and (3) lack of validation. The use of larger and more diverse datasets is recommended. Assumptions should be tested thoroughly, and action taken if deviations are detected.
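
Weakness (2) is directly checkable in code. A minimal sketch using the lifelines library is shown below; the dataset and covariates are a stock example, not dementia data, and the significance threshold is an assumption.

# Minimal sketch: test the proportional-hazards assumption of a fitted Cox
# model before trusting its risk scores, using lifelines' built-in
# diagnostics. The Rossi recidivism dataset is a stock example only.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()
cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")

# Scaled Schoenfeld residual tests: flags covariates whose effect appears
# to vary over time (a proportional-hazards violation) and prints suggested
# remedies such as stratification or time-varying terms.
cph.check_assumptions(df, p_value_threshold=0.05, show_plots=False)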
