Results 1 - 5 of 5
1.
JMIR Med Inform; 11: e43847, 2023 Mar 21.
Article in English | MEDLINE | ID: mdl-36943344

ABSTRACT

BACKGROUND: Increasing digitalization in the medical domain gives rise to large amounts of health care data, which have the potential to expand clinical knowledge and transform patient care if leveraged through artificial intelligence (AI). Yet big data and AI often cannot unlock their full potential at scale, owing to nonstandardized data formats, a lack of technical and semantic data interoperability, and limited cooperation between stakeholders in the health care system. Despite the existence of standardized data formats for the medical domain, such as Fast Healthcare Interoperability Resources (FHIR), their prevalence and usability for AI remain limited. OBJECTIVE: In this paper, we developed a data harmonization pipeline (DHP) for clinical data sets that relies on the common FHIR data standard. METHODS: We validated the performance and usability of our FHIR-DHP with data from the Medical Information Mart for Intensive Care IV database. RESULTS: We present the FHIR-DHP workflow with respect to the transformation of "raw" hospital records into a harmonized, AI-friendly data representation. The pipeline consists of the following 5 key preprocessing steps: querying of data from the hospital database, FHIR mapping, syntactic validation, transfer of the harmonized data into the patient-model database, and export of the data in an AI-friendly format for further medical applications. A detailed example of FHIR-DHP execution is presented for clinical diagnosis records. CONCLUSIONS: Our approach enables scalable and needs-driven data modeling of large and heterogeneous clinical data sets. The FHIR-DHP is a pivotal step toward increasing cooperation, interoperability, and quality of patient care in the clinical routine and in medical research.
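Below is a minimal, illustrative sketch (not the authors' pipeline code) of the FHIR mapping and syntactic validation steps named above: a hypothetical raw diagnosis record is mapped to a FHIR Condition resource and checked for required elements before being loaded into a patient-model store. Field names such as `patient_id` and `icd10_code` are assumptions for illustration.

```python
# Illustrative sketch of two FHIR-DHP steps: FHIR mapping and syntactic validation.
from datetime import date

def map_diagnosis_to_fhir(raw: dict) -> dict:
    """Map one raw diagnosis row (hypothetical column names) to a FHIR Condition resource."""
    return {
        "resourceType": "Condition",
        "subject": {"reference": f"Patient/{raw['patient_id']}"},
        "code": {
            "coding": [{
                "system": "http://hl7.org/fhir/sid/icd-10",
                "code": raw["icd10_code"],
                "display": raw.get("diagnosis_text", ""),
            }]
        },
        "recordedDate": raw.get("recorded_date", date.today().isoformat()),
    }

def validate_condition(resource: dict) -> bool:
    """Very small syntactic check: correct resource type and required elements present."""
    required = {"resourceType", "subject", "code"}
    return resource.get("resourceType") == "Condition" and required <= resource.keys()

# Example usage with a made-up hospital record
row = {"patient_id": "123", "icd10_code": "I10", "diagnosis_text": "Essential hypertension"}
condition = map_diagnosis_to_fhir(row)
assert validate_condition(condition)
```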

2.
Neuroimage Clin; 37: 103320, 2023.
Article in English | MEDLINE | ID: mdl-36623349

ABSTRACT

INTRODUCTION: Dementia syndromes can be difficult to diagnose. We aimed to build a classifier for multiple dementia syndromes using magnetic resonance imaging (MRI). METHODS: Atlas-based volumetry was performed on T1-weighted MRI data of 426 patients and 51 controls from the multi-centric German Research Consortium of Frontotemporal Lobar Degeneration, including patients with behavioral variant frontotemporal dementia, Alzheimer's disease, the three subtypes of primary progressive aphasia (the semantic, logopenic, and nonfluent-agrammatic variants), and the atypical parkinsonian syndromes progressive supranuclear palsy and corticobasal syndrome. Support vector machine classification was used to classify each patient group against controls (binary classification) and all seven diagnostic groups against each other in a multi-syndrome classifier (multiclass classification). RESULTS: The binary classification models reached high prediction accuracies between 71% and 95%, with a chance level of 50%. Feature importance reflected disease-specific atrophy patterns. The multi-syndrome model reached accuracies more than three times higher than chance level but remained far from 100%. Multi-syndrome performance was not homogeneous across dementia syndromes, with better performance for syndromes characterized by regionally specific atrophy patterns. Whereas diseases could generally be classified against controls more accurately with increasing severity and duration, differentiation between diseases was optimal in disease-specific windows of severity and duration. DISCUSSION: The results suggest that automated methods applied to MR imaging data can support physicians in the diagnosis of dementia syndromes. This is particularly relevant for orphan diseases beyond frequent syndromes such as Alzheimer's disease.
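As a rough illustration of the analysis idea (not the study's actual code), the sketch below trains a linear support vector machine on simulated atlas-based regional volumes and evaluates it with cross-validation; the feature matrix, the number of regions, and the seven-group label vector are placeholder assumptions.

```python
# Sketch: linear SVM on atlas-based regional volumes (simulated data).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 90))       # 200 subjects x 90 atlas regions (placeholder volumes)
y = rng.integers(0, 7, size=200)     # 7 diagnostic groups (multi-syndrome case)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```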


Subjects
Alzheimer Disease, Frontotemporal Dementia, Frontotemporal Lobar Degeneration, Humans, Alzheimer Disease/pathology, Brain/diagnostic imaging, Brain/pathology, Magnetic Resonance Imaging/methods, Frontotemporal Lobar Degeneration/pathology, Frontotemporal Dementia/diagnostic imaging, Frontotemporal Dementia/pathology, Syndrome, Atrophy/diagnostic imaging, Atrophy/pathology
3.
Alzheimers Res Ther; 14(1): 62, 2022 May 03.
Article in English | MEDLINE | ID: mdl-35505442

ABSTRACT

IMPORTANCE: The entry of artificial intelligence into medicine is imminent. Several methods have been used for prediction from structured neuroimaging data, yet they have not been compared in this context. OBJECTIVE: Multi-class prediction is key for building computational aid systems for differential diagnosis. We compared support vector machines, random forests, gradient boosting, and deep feed-forward neural networks for the classification of different neurodegenerative syndromes based on structural magnetic resonance imaging. DESIGN, SETTING, AND PARTICIPANTS: Atlas-based volumetry was performed on multi-centric T1-weighted MRI data from 940 subjects, i.e., 124 healthy controls and 816 patients with ten different neurodegenerative diseases, leading to a multi-diagnostic multi-class classification task with eleven different classes. INTERVENTIONS: N.A. MAIN OUTCOMES AND MEASURES: Cohen's kappa, accuracy, and F1-score were used to assess model performance. RESULTS: Overall, the neural network produced both the best performance measures and the most robust results. The smaller classes, however, were better classified by either the ensemble learning methods or the support vector machine, while performance measures for small classes were comparatively low, as expected. Diseases with regionally specific and pronounced atrophy patterns were generally better classified than diseases with widespread and rather weak atrophy. CONCLUSIONS AND RELEVANCE: Our study underlines the need for larger data sets and calls for careful consideration of which machine learning method best handles the given type of data and classification task.
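The following sketch outlines, on simulated data, how the four model families could be compared with Cohen's kappa, accuracy, and macro F1-score using scikit-learn; the feature matrix, class labels, and hyperparameters are illustrative assumptions, not the study's configuration.

```python
# Sketch: comparing four classifier families on a simulated 11-class volumetry data set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score, accuracy_score, f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(940, 90))        # 940 subjects x 90 volumetric features (placeholders)
y = rng.integers(0, 11, size=940)     # 11 diagnostic classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "SVM": SVC(kernel="linear"),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Gradient boosting": GradientBoostingClassifier(random_state=0),
    "Neural network": MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name,
          f"kappa={cohen_kappa_score(y_te, pred):.2f}",
          f"acc={accuracy_score(y_te, pred):.2f}",
          f"macro-F1={f1_score(y_te, pred, average='macro'):.2f}")
```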


Subjects
Artificial Intelligence, Machine Learning, Algorithms, Atrophy, Humans, Syndrome
4.
PLoS One; 16(11): e0256585, 2021.
Article in English | MEDLINE | ID: mdl-34780493

ABSTRACT

Risk stratification and treatment decisions for leukemia patients are regularly based on clinical markers determined at diagnosis, while measurements of system dynamics are often neglected. However, there is increasing evidence that linking quantitative time-course information to disease outcomes can improve predictions of patient-specific treatment responses. We designed a synthetic experiment simulating the response kinetics of 5,000 patients to compare different computational methods with respect to their ability to accurately predict relapse for chronic and acute myeloid leukemia treatment. Technically, we used clinical reference data to first fit a model and then generate de novo model simulations of individual patients' time courses, for which we could systematically tune data quality (i.e., measurement error) and quantity (i.e., number of measurements). On this basis, we compared the prediction accuracy of three different computational methods fitted to the reference data: mechanistic models, generalized linear models, and deep neural networks. With prediction accuracies ranging from 60% to close to 100%, our results indicate that data quality has a higher impact on prediction accuracy than the specific choice of method. We further show that adapted treatment and measurement schemes can considerably improve prediction accuracy, by 10 to 20%. Our proof-of-principle study highlights how computational methods and optimized data acquisition strategies can improve risk assessment and treatment of leukemia patients.
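A toy sketch of the synthetic-experiment idea is given below: noisy response time courses are simulated with a tunable measurement error and sampling scheme, simple time-course features are extracted, and relapse is predicted with a generalized linear model. The kinetic model, the relapse rule, and all parameter values are assumptions for illustration only, not the paper's fitted models.

```python
# Toy sketch: simulate noisy leukemia response time courses and predict relapse with a GLM.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_patients = 500
months = np.arange(0, 36, 3)                      # quarterly measurements (tunable quantity)
noise_sd = 0.3                                    # measurement error (tunable quality)

decay = rng.uniform(0.05, 0.4, n_patients)        # individual response speed (toy kinetics)
relapse = (decay < 0.12).astype(int)              # slow responders relapse (toy rule)

# log-scale disease burden over time, plus measurement noise
courses = -decay[:, None] * months[None, :] + rng.normal(0, noise_sd, (n_patients, len(months)))

# simple time-course features: fitted slope and last observed value
slopes = np.polyfit(months, courses.T, 1)[0]
X = np.column_stack([slopes, courses[:, -1]])

acc = cross_val_score(LogisticRegression(), X, relapse, cv=5).mean()
print(f"relapse prediction accuracy: {acc:.2f}")
```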


Subjects
Computer Simulation, BCR-ABL-Positive Chronic Myelogenous Leukemia/diagnosis, Acute Myeloid Leukemia/diagnosis, Neural Networks (Computer), Humans, Recurrence
5.
Sci Rep; 10(1): 10712, 2020 Jul 01.
Article in English | MEDLINE | ID: mdl-32612129

ABSTRACT

Machine learning has considerably improved medical image analysis in recent years. Although data-driven approaches are intrinsically adaptive and thus generic, they often do not perform equally well on data from different imaging modalities. In particular, computed tomography (CT) data pose many challenges for medical image segmentation based on convolutional neural networks (CNNs), mostly due to the broad dynamic range of intensities and the varying number of recorded slices in CT volumes. In this paper, we address these issues with a framework that adds domain-specific data preprocessing and augmentation to state-of-the-art CNN architectures. Our major focus is to stabilise prediction performance across samples as a mandatory requirement for use in automated and semi-automated workflows in the clinical environment. To validate the architecture-independent effects of our approach, we compare a neural architecture based on dilated convolutions for parallel multi-scale processing (a modified Mixed-Scale Dense Network: MS-D Net) with one based on traditional scaling operations (a modified U-Net). Finally, we show that an ensemble model combines the strengths of the individual methods. Our framework is simple to integrate into existing deep learning pipelines for CT analysis. It performs well on a range of tasks such as liver and kidney segmentation, without significant differences in prediction performance across strongly differing volume sizes and varying slice thicknesses. Our framework is thus an essential step towards robust segmentation of unknown real-world samples.
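The sketch below illustrates the kind of CT-specific preprocessing described above: Hounsfield units are clipped to a fixed window to tame the broad dynamic range, rescaled to [0, 1], and the varying slice count is harmonized by center-cropping or zero-padding. The window bounds and target slice count are illustrative choices, not the paper's settings.

```python
# Sketch: CT-specific preprocessing (intensity windowing + slice-count harmonization).
import numpy as np

def preprocess_ct(volume: np.ndarray, hu_min=-100, hu_max=400, target_slices=64) -> np.ndarray:
    """volume: (slices, H, W) array of Hounsfield units."""
    vol = np.clip(volume, hu_min, hu_max)
    vol = (vol - hu_min) / (hu_max - hu_min)          # rescale the broad HU range to [0, 1]

    # harmonize the varying number of recorded slices by center-cropping or zero-padding
    n = vol.shape[0]
    if n >= target_slices:
        start = (n - target_slices) // 2
        vol = vol[start:start + target_slices]
    else:
        pad = target_slices - n
        vol = np.pad(vol, ((pad // 2, pad - pad // 2), (0, 0), (0, 0)))
    return vol.astype(np.float32)

# Example: a fake 47-slice CT volume becomes a fixed-size (64, 128, 128) input
ct = np.random.randint(-1024, 2000, size=(47, 128, 128)).astype(np.float32)
print(preprocess_ct(ct).shape)
```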
