Results 1 - 7 of 7
1.
Med Biol Eng Comput; 62(8): 2389-2407, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38589723

ABSTRACT

Robust, adaptable methods for diagnosing lung pneumonia and assessing its severity from chest X-rays (CXR) require access to well-curated, extensive datasets. Many current severity quantification approaches require resource-intensive training to achieve optimal results. Healthcare practitioners need efficient computational tools to swiftly identify COVID-19 cases and predict the severity of the condition. In this research, we introduce a novel image augmentation scheme as well as a neural network model based on Vision Transformers (ViT) with a small number of trainable parameters for quantifying COVID-19 severity and other lung diseases. Our method, named Vision Transformer Regressor Infection Prediction (ViTReg-IP), combines a ViT backbone with a regression head. To assess the model's adaptability, we evaluate its performance on diverse chest radiograph datasets from various open sources and conduct a comparative analysis against several competing deep learning methods. When trained on CXRs from the RALO dataset, our model achieved a minimum Mean Absolute Error (MAE) of 0.569 and 0.512 and a maximum Pearson Correlation Coefficient (PC) of 0.923 and 0.855 for the geographic extent score and the lung opacity score, respectively. The experimental results show that our model delivers exceptional performance in severity quantification while maintaining robust generalizability, all with relatively modest computational requirements. The source code is publicly available at https://github.com/bouthainas/ViTReg-IP.
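The core idea of a ViT regressor, a transformer backbone whose pooled patch features feed a linear head that outputs a scalar severity score, can be sketched in a few lines. This is a hypothetical toy illustration, not the ViTReg-IP implementation (the real code is at the repository above); all shapes and weights here are invented.

```python
def global_average_pool(tokens):
    """Average per-patch feature vectors into a single feature vector."""
    dim = len(tokens[0])
    return [sum(t[i] for t in tokens) / len(tokens) for i in range(dim)]

def regression_head(features, weights, bias):
    """Linear head mapping pooled features to a scalar severity score."""
    return sum(f * w for f, w in zip(features, weights)) + bias

# Toy example: 3 patch tokens with 2-dimensional features (invented values).
tokens = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
pooled = global_average_pool(tokens)               # [3.0, 4.0]
score = regression_head(pooled, [0.5, 0.25], 0.1)  # 0.5*3.0 + 0.25*4.0 + 0.1
```

In the actual model the tokens would come from a trained ViT encoder; training fits the head (and backbone) to radiologist-assigned geographic extent or opacity scores.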


Subject(s)
COVID-19; Lung; Pneumonia; Severity of Illness Index; Humans; COVID-19/diagnostic imaging; Pneumonia/diagnostic imaging; Lung/diagnostic imaging; Radiography, Thoracic/methods; Deep Learning; SARS-CoV-2; Neural Networks, Computer
2.
Front Cardiovasc Med; 9: 754609, 2022.
Article in English | MEDLINE | ID: mdl-35369326

ABSTRACT

This study proposes machine learning-based models that automatically evaluate the severity of myocardial infarction (MI) from physiological, clinical, and paraclinical features. Two types of machine learning models are investigated: classification models that detect the presence of the infarct and of persistent microvascular obstruction (PMO), and regression models that quantify the Percentage of Infarcted Myocardium (PIM) in patients suspected of having an acute MI on admission to the emergency department. The ground-truth labels for these supervised models are derived from the corresponding Delayed Enhancement MRI (DE-MRI) exams and manual annotations of the myocardium and scar tissues. Experiments were conducted on 150 cases and evaluated with cross-validation. For the MI (PMO inclusive) and the PMO (infarct exclusive), the best models obtained a mean error of 0.056 and 0.012, respectively, for the quantification, and classification accuracies of 88.67% and 77.33% for the state of the myocardium. A study of feature importance also revealed that, among the 12 selected features, the troponin value correlated most strongly with the severity of the MI. From a translational perspective, in cardiac emergencies qualitative and quantitative analyses can thus be obtained before MRI, relying only on conventional tests and patient features, providing an objective reference for further treatment by physicians.
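The two model families described above can be illustrated with minimal stand-ins: a logistic classifier for infarct/PMO presence and a linear regressor for PIM. This is a hedged sketch only; the feature encoding, weights, and thresholds below are invented, not the study's fitted models.

```python
import math

def classify(features, weights, bias):
    """Logistic classifier, e.g. for infarct or PMO presence (toy weights)."""
    z = sum(f * w for f, w in zip(features, weights)) + bias
    p = 1.0 / (1.0 + math.exp(-z))
    return p >= 0.5, p

def predict_pim(features, weights, bias):
    """Linear regressor for the Percentage of Infarcted Myocardium, clipped to [0, 1]."""
    pim = sum(f * w for f, w in zip(features, weights)) + bias
    return min(1.0, max(0.0, pim))

# Toy patient vector: [normalized troponin, age / 100] -- hypothetical features.
x = [0.8, 0.6]
has_mi, prob = classify(x, [3.0, 0.5], -1.5)  # z = 2.4 + 0.3 - 1.5 = 1.2
pim = predict_pim(x, [0.3, 0.05], 0.0)        # 0.24 + 0.03 = 0.27
```

In the study, such models were trained against DE-MRI-derived labels and evaluated with cross-validation; the toy weights here merely show the input/output shape of each task.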

3.
Ann Transl Med; 9(1): 43, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33553336

ABSTRACT

BACKGROUND: This study aimed to predict treatment outcomes in patients with diabetic macular edema (DME) after three monthly anti-vascular endothelial growth factor (VEGF) injections using machine learning (ML) based on pretreatment optical coherence tomography (OCT) images and clinical variables. METHODS: An ensemble ML system consisting of four deep learning (DL) models and five classical machine learning (CML) models was developed to predict the posttreatment central foveal thickness (CFT) and best-corrected visual acuity (BCVA). A total of 363 OCT images and 7,587 clinical data records from 363 eyes were split into a training set (304 eyes) and an external validation set (59 eyes). The DL models were trained on the OCT images, and the CML models were trained on OCT image features and clinical variables. The predicted posttreatment CFT and BCVA values were compared with the true outcomes obtained from the medical records. RESULTS: For CFT prediction, the mean absolute error (MAE), root mean square error (RMSE), and R2 of the best-performing model in the training set were 66.59, 93.73, and 0.71, respectively, with an area under the receiver operating characteristic curve (AUC) of 0.90 for distinguishing eyes with a good anatomical response. In the external validation set, the MAE, RMSE, and R2 were 68.08, 97.63, and 0.74, respectively, with an AUC of 0.94. For BCVA prediction, the MAE, RMSE, and R2 of the best-performing model in the training set were 0.19, 0.29, and 0.60, respectively, with an AUC of 0.80 for distinguishing eyes with a good functional response. The external validation achieved an MAE, RMSE, and R2 of 0.13, 0.20, and 0.68, respectively, with an AUC of 0.81. CONCLUSIONS: Our ensemble ML system accurately predicted posttreatment CFT and BCVA after anti-VEGF injections in DME patients and can be used to prospectively assess the efficacy of anti-VEGF therapy.
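One simple way to combine the outputs of several DL and CML models, as an ensemble system like the one above must do, is weighted averaging of per-model predictions. The abstract does not specify the combination rule, so this is only an assumed minimal sketch; the model outputs below are invented numbers.

```python
def ensemble_predict(predictions, weights=None):
    """Combine per-model predictions by (weighted) averaging.

    With no weights given, all models contribute equally.
    """
    if weights is None:
        weights = [1.0 / len(predictions)] * len(predictions)
    return sum(w * p for w, p in zip(weights, predictions))

# Four hypothetical model outputs for posttreatment CFT (micrometres).
cft_preds = [300.0, 320.0, 310.0, 330.0]
cft_hat = ensemble_predict(cft_preds)  # simple mean of the four outputs
```

In practice the weights could be fitted on the training set (e.g. by validation error), which is one common way such an ensemble outperforms its individual members.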

4.
Med Image Anal; 67: 101848, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33091740

ABSTRACT

We performed a systematic review of studies on the automatic prediction of the progression of mild cognitive impairment to Alzheimer's disease (AD) dementia, together with a quantitative analysis of the methodological choices affecting performance. The review included 172 articles, from which 234 experiments were extracted. For each experiment, we recorded the data set used, the feature types, the algorithm type, the performance, and potential methodological issues. The impact of these characteristics on performance was evaluated using multivariate mixed-effects linear regressions. We found that using cognitive, fluorodeoxyglucose positron emission tomography, or potentially electroencephalography and magnetoencephalography variables significantly improved predictive performance compared to not including them, whereas other modalities, in particular T1 magnetic resonance imaging, showed no significant effect. The good performance of cognitive assessments calls into question the wide use of imaging for predicting progression to AD and argues for exploring finer domain-specific cognitive assessments. We also identified several methodological issues, including the absence of a test set, or its use for feature selection or parameter tuning, in nearly a fourth of the papers. Other issues, found in 15% of the studies, cast doubt on the relevance of the method to clinical practice. We further highlight that short-term predictions are unlikely to be better than simply predicting that subjects stay stable over time. These issues underline the importance of adhering to good practices when using machine learning as a decision support system in clinical practice.


Subject(s)
Alzheimer Disease; Cognitive Dysfunction; Cognitive Dysfunction/diagnostic imaging; Disease Progression; Humans; Machine Learning; Magnetic Resonance Imaging; Positron-Emission Tomography
5.
J Korean Med Sci; 35(47): e399, 2020 Dec 07.
Article in English | MEDLINE | ID: mdl-33289367

ABSTRACT

BACKGROUND: This paper proposes a novel method for automatically identifying sleep apnea (SA) severity using deep learning on a short-term normal electrocardiography (ECG) signal. METHODS: A convolutional neural network (CNN) was used as the identification model, implemented with one-dimensional convolutional, pooling, and fully connected layers. An optimized architecture was incorporated into the CNN model for the precise identification of SA severity. A total of 144 subjects were studied. Nocturnal single-lead ECG signals were collected, and the short-term normal ECG was extracted from them. The short-term normal ECG was segmented into 30-second windows and divided into training and evaluation datasets. The training set consists of 82,952 segments (66,360 for training, 16,592 for validation) from 117 subjects, while the test set has 20,738 segments from 27 subjects. RESULTS: An F1-score of 98.0% was obtained on the test set. Mild and moderate SA could be identified with an accuracy of 99.0%. CONCLUSION: The results demonstrate the feasibility of automatically identifying SA severity from a short-term normal ECG signal.
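The building blocks named in the methods, one-dimensional convolution followed by pooling over an ECG segment, can be shown in miniature. This is a generic illustration of those layer types, not the paper's architecture; the signal, kernel, and pool size below are invented.

```python
def conv1d(signal, kernel, bias=0.0):
    """Valid 1-D convolution (no padding), as in a first CNN layer."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k)) + bias
            for i in range(len(signal) - k + 1)]

def max_pool1d(signal, size):
    """Non-overlapping max pooling, downsampling the feature map."""
    return [max(signal[i:i + size])
            for i in range(0, len(signal) - size + 1, size)]

# Toy 8-sample "ECG" window; a first-difference kernel highlights slopes.
ecg = [0.0, 1.0, 0.0, -1.0, 0.0, 2.0, 0.0, -2.0]
feat = conv1d(ecg, [1.0, -1.0])   # local differences of adjacent samples
pooled = max_pool1d(feat, 2)      # keep the strongest response per pair
```

A real model would stack several such layers with learned kernels and finish with a fully connected layer that maps the pooled features to SA-severity classes.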


Subject(s)
Deep Learning; Electrocardiography; Sleep Apnea Syndromes/pathology; Adult; Aged; Female; Humans; Male; Middle Aged; Severity of Illness Index; Signal Processing, Computer-Assisted; Sleep Apnea Syndromes/diagnosis
6.
Biol Psychol; 139: 178-185, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30414994

ABSTRACT

To maintain real-time interaction with a dynamically changing visual object, the brain is thought to automatically predict the next state of the object based on the pattern of its preceding changes. A behavioral phenomenon known as representational momentum (RM: forward displacement of the remembered final state of an object along its preceding change pattern) and an electrophysiological phenomenon known as visual mismatch negativity (VMMN: an event-related brain potential component elicited when an object suddenly deviates from its preceding change pattern) have each indicated the existence of such automatic predictive processes. However, there has been no direct investigation of whether these phenomena reflect the same predictive processes. To address this issue, the present study examined the correlation between RM and VMMN using a hybrid paradigm in which both phenomena can be measured for the rotation of a bar. The results showed that the magnitudes of RM and VMMN were positively correlated: participants who exhibited greater RM along the regular rotation of a bar tended to show greater VMMN in response to a sudden reversal embedded in that regular rotation. This result provides empirical support for the hypothesis that RM and VMMN may be involved in the same automatic predictive processes. Given the methodological limitations of a correlation analysis, this hypothesis should be tested further in future studies that examine the relationship between RM and VMMN from multiple perspectives.
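The analysis above rests on the sample Pearson correlation between per-participant RM magnitudes and VMMN amplitudes. A self-contained version of that statistic (with invented illustrative data, not the study's measurements):

```python
def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-participant values: RM displacement and VMMN amplitude.
rm = [1.2, 0.8, 1.5, 1.1, 0.9]
vmmn = [2.4, 1.7, 3.1, 2.2, 1.9]
r = pearson(rm, vmmn)  # close to +1 for these strongly co-varying toy data
```

A positive r, as reported in the study, indicates that participants with larger RM tend to show larger VMMN; as the abstract notes, correlation alone cannot establish a shared mechanism.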


Subject(s)
Anticipation, Psychological/physiology; Cerebral Cortex/physiology; Evoked Potentials/physiology; Visual Perception/physiology; Adult; Electroencephalography; Female; Humans; Male; Young Adult
7.
Speech Prosody; 2014: 130-134, 2014.
Article in English | MEDLINE | ID: mdl-33855126

ABSTRACT

In this paper we study the relationship between acted, perceptually unambiguous emotion and prosody. Unlike most contemporary approaches, which base the analysis of emotion in voice solely on continuous features extracted automatically from the acoustic signal, we analyze the predictive power of discrete characterizations of intonation in the ToBI framework. The goal of our work is to test whether particular discrete prosodic events provide significant discriminative power for emotion recognition. Our experiments provide strong evidence that patterns in breaks, boundary tones, and pitch accent type are highly informative of the emotional content of speech. We also present results from automatic prediction of emotion based on ToBI-derived features and compare their predictive power with state-of-the-art bag-of-frames acoustic features. Our results indicate similar performance on sentence-dependent emotion prediction tasks, while acoustic features are more robust for sentence-independent tasks. Finally, combining ToBI features with acoustic features yields modest further improvements in sentence-independent emotion prediction, particularly in differentiating fear and neutral from the other emotions.
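Feeding discrete ToBI events into a classifier typically means encoding each categorical label as a one-hot vector and concatenating it with the continuous acoustic features. The vocabularies below are simplified and hypothetical (real ToBI inventories are richer), and the acoustic values are invented; this only illustrates the feature-construction step, not the paper's exact feature set.

```python
# Hypothetical, simplified ToBI vocabularies.
BREAK_INDICES = ["0", "1", "2", "3", "4"]
PITCH_ACCENTS = ["H*", "L*", "L+H*", "L*+H", "none"]
BOUNDARY_TONES = ["L-L%", "L-H%", "H-H%", "none"]

def one_hot(value, vocabulary):
    """Encode a categorical label as a one-hot vector over its vocabulary."""
    return [1.0 if value == v else 0.0 for v in vocabulary]

def tobi_vector(break_index, pitch_accent, boundary_tone, acoustic):
    """Concatenate one-hot ToBI events with continuous acoustic features."""
    return (one_hot(break_index, BREAK_INDICES)
            + one_hot(pitch_accent, PITCH_ACCENTS)
            + one_hot(boundary_tone, BOUNDARY_TONES)
            + acoustic)

# Toy utterance-final word: strong break, rising accent, continuation rise,
# plus two invented acoustic features (e.g. energy, mean F0 in Hz).
vec = tobi_vector("4", "L+H*", "L-H%", [0.32, 180.0])
```

Any standard classifier can then be trained on such vectors, which is how discrete prosodic events and bag-of-frames acoustic features can be combined in one model.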
