Results 1 - 20 of 24
1.
Sensors (Basel) ; 24(14)2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39066018

ABSTRACT

Radar sensors, leveraging the Doppler effect, enable the nonintrusive capture of kinetic and physiological motions while preserving privacy. Deep learning (DL) facilitates radar sensing for healthcare applications such as gait recognition and vital-sign measurement. However, band-dependent patterns, indicating variations in patterns and power scales associated with frequencies in time-frequency representation (TFR), challenge radar sensing applications using DL. Frequency-dependent characteristics and features with lower power scales may be overlooked during representation learning. This paper proposes an Enhanced Band-Dependent Learning framework (E-BDL) comprising an adaptive sub-band filtering module, a representation learning module, and a sub-view contrastive module to fully detect band-dependent features in sub-frequency bands and leverage them for classification. Experimental validation is conducted on two radar datasets, including gait abnormality recognition for Alzheimer's disease (AD) and AD-related dementia (ADRD) risk evaluation and vital-sign monitoring for hemodynamics scenario classification. For hemodynamics scenario classification, E-BDL-ResNet achieves competitive performance in overall accuracy and class-wise evaluations compared to recent methods. For ADRD risk evaluation, the results demonstrate E-BDL-ResNet's superior performance across all candidate models, highlighting its potential as a clinical tool. E-BDL effectively detects salient sub-bands in TFRs, enhancing representation learning and improving the performance and interpretability of DL-based models.


Subject(s)
Deep Learning, Radar, Humans, Alzheimer Disease/diagnosis, Gait/physiology, Algorithms, Hemodynamics/physiology, Vital Signs/physiology
2.
IEEE Sens J ; 23(10): 10998-11006, 2023 May 15.
Article in English | MEDLINE | ID: mdl-37547101

ABSTRACT

Abnormal gait is a significant non-cognitive biomarker for Alzheimer's disease (AD) and AD-related dementia (ADRD). Micro-Doppler radar, a non-wearable technology, can capture human gait movements for potential early ADRD risk assessment. In this research, we propose STRIDE, a system integrating micro-Doppler radar sensors with advanced artificial intelligence (AI) technologies. STRIDE embeds a new deep learning (DL) classification framework. As a proof of concept, we develop a "digital twin" of STRIDE, consisting of a human walking simulation model and a micro-Doppler radar simulation model, to generate a gait signature dataset. Taking established human walking parameters, the walking model simulates individuals with ADRD under various conditions. The radar model, based on electromagnetic scattering and the Doppler frequency shift model, is employed to generate micro-Doppler signatures from different moving body parts (e.g., foot, limb, joint, torso, and shoulder). A band-dependent DL framework is developed to predict ADRD risk. The experimental results demonstrate the effectiveness and feasibility of STRIDE for evaluating ADRD risk.

4.
Respir Res ; 23(1): 105, 2022 Apr 29.
Article in English | MEDLINE | ID: mdl-35488261

ABSTRACT

BACKGROUND: Quantitative computed tomography (QCT) analysis may serve as a tool for assessing the severity of coronavirus disease 2019 (COVID-19) and for monitoring its progress. The present study aimed to assess the association between steroid therapy and quantitative CT parameters in a longitudinal cohort with COVID-19. METHODS: Between February 7 and February 17, 2020, 72 patients with severe COVID-19 were retrospectively enrolled. All 300 chest CT scans from these patients were collected and classified into five stages according to the interval between hospital admission and follow-up CT scans: Stage 1 (at admission); Stage 2 (3-7 days); Stage 3 (8-14 days); Stage 4 (15-21 days); and Stage 5 (22-31 days). QCT was performed using a threshold-based quantitative analysis to segment the lung according to different Hounsfield unit (HU) intervals. The primary outcomes were changes in the percentage of compromised lung volume (%CL, -500 to 100 HU) at different stages. Multivariate generalized estimating equation models were fitted after adjusting for potential confounders. RESULTS: Of 72 patients, 31 (43.1%) received steroid therapy. Steroid therapy was associated with a decrease in %CL (-3.27% [95% CI, -5.86 to -0.68]; P = 0.01) after adjusting for duration and baseline %CL. Associations between steroid therapy and changes in %CL varied between different stages or baseline %CL (all interactions, P < 0.01). Steroid therapy was associated with a decrease in %CL after stage 3 (all P < 0.05), but not at stage 2. Similarly, steroid therapy was associated with a more significant decrease in %CL in the high-CL group (P < 0.05), but not in the low-CL group. CONCLUSIONS: Steroid administration was independently associated with a decrease in %CL, with interaction by duration or disease severity in a longitudinal cohort.
The quantitative CT parameters, particularly compromised lung volume, may provide a useful tool to monitor COVID-19 progression during the treatment process. Trial registration Clinicaltrials.gov, NCT04953247. Registered July 7, 2021, https://clinicaltrials.gov/ct2/show/NCT04953247.
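The threshold-based QCT step described above amounts to counting lung voxels whose attenuation falls inside an HU interval. A minimal sketch, assuming the input values are already restricted to a segmented lung mask (the function name and sample voxel values are illustrative, not the study's pipeline):

```python
def compromised_lung_percentage(lung_hu, lo=-500, hi=100):
    """%CL: share of lung-mask voxels with attenuation in [lo, hi] HU.

    `lung_hu` is assumed to already be restricted to a lung mask; the
    default interval follows the abstract's definition of %CL.
    """
    if not lung_hu:
        raise ValueError("empty lung mask")
    compromised = sum(1 for v in lung_hu if lo <= v <= hi)
    return 100.0 * compromised / len(lung_hu)

# 4 of these 8 hypothetical voxels fall in [-500, 100] HU
voxels = [-900, -800, -600, -450, -200, 50, 90, 200]
print(compromised_lung_percentage(voxels))  # → 50.0
```

A real pipeline would first segment the lung (the study's Dice-scored step) and operate on full 3D volumes; this only shows the counting rule.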


Subject(s)
COVID-19 Drug Treatment, Humans, Lung/diagnostic imaging, Lung Volume Measurements/methods, Retrospective Studies, Steroids/therapeutic use
5.
Eur Radiol ; 32(4): 2235-2245, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34988656

ABSTRACT

BACKGROUND: Main challenges for COVID-19 include the lack of a rapid diagnostic test, a suitable tool to monitor and predict a patient's clinical course, and an efficient way to share data among multiple centers. We thus developed a novel artificial intelligence system based on deep learning (DL) and federated learning (FL) for the diagnosis, monitoring, and prediction of a patient's clinical course. METHODS: CT imaging derived from 6 different multicenter cohorts was used for a stepwise diagnostic algorithm to diagnose COVID-19, with or without clinical data. Patients with more than 3 consecutive CT images were used to train the monitoring algorithm. FL was applied for decentralized refinement of independently built DL models. RESULTS: A total of 1,552,988 CT slices from 4804 patients were used. The model can diagnose COVID-19 based on CT alone, with an AUC of 0.98 (95% CI 0.97-0.99), and outperforms the radiologist's assessment. We have also successfully tested the incorporation of the DL diagnostic model with the FL framework. Its auto-segmentation analyses correlated well with those by radiologists and achieved a high Dice coefficient of 0.77. It can produce a predictive curve of a patient's clinical course if serial CT assessments are available. INTERPRETATION: The system has high consistency in diagnosing COVID-19 based on CT, with or without clinical data. Alternatively, it can be implemented on an FL platform, which would potentially encourage data sharing in the future. It can also produce an objective predictive curve of a patient's clinical course for visualization. KEY POINTS: • CoviDet could diagnose COVID-19 based on chest CT with high consistency, outperforming the radiologist's assessment. Its auto-segmentation analyses correlated well with those by radiologists and could potentially monitor and predict a patient's clinical course if serial CT assessments are available. It can be integrated into the federated learning framework.
• CoviDet can be used as an adjunct to aid clinicians with the CT diagnosis of COVID-19 and can potentially be used for disease monitoring; federated learning can potentially open opportunities for global collaboration.


Subject(s)
Artificial Intelligence, COVID-19, Algorithms, Humans, Radiologists, Tomography, X-Ray Computed/methods
6.
J Xray Sci Technol ; 29(1): 1-17, 2021.
Article in English | MEDLINE | ID: mdl-33164982

ABSTRACT

BACKGROUND: Accurate and rapid diagnosis of coronavirus disease (COVID-19) is crucial for timely quarantine and treatment. PURPOSE: In this study, a deep learning algorithm-based AI model using the ResUNet network was developed to evaluate the performance of radiologists, with and without AI assistance, in distinguishing COVID-19-infected pneumonia patients from other pulmonary infections on CT scans. METHODS: For model development and validation, a total of 694 cases with 111,066 CT slices were retrospectively collected as training data and independent test data. Among them, 118 are confirmed COVID-19-infected pneumonia cases and 576 are other pulmonary infection cases (e.g., tuberculosis, common pneumonia, and non-COVID-19 viral pneumonia). The cases were divided into training and testing datasets. The independent test evaluated and compared the performance of three radiologists with different years of practice experience in distinguishing COVID-19-infected pneumonia cases with and without AI assistance. RESULTS: Our final model achieved an overall test accuracy of 0.914 with an area under the receiver operating characteristic (ROC) curve (AUC) of 0.903, with sensitivity and specificity of 0.918 and 0.909, respectively. With AI assistance, the radiologists improved their average accuracy from 0.941 to 0.951 and their sensitivity from 0.895 to 0.942 in distinguishing COVID-19 from other pulmonary infections, compared to reading without AI assistance. CONCLUSION: The deep learning algorithm-based AI model developed in this study successfully improved radiologists' performance in distinguishing COVID-19 from other pulmonary infections using chest CT images.
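The accuracy, sensitivity, specificity, and AUC figures reported above follow directly from confusion-matrix counts and the rank interpretation of the ROC curve. A small self-contained sketch of both computations (the labels and scores below are illustrative, not the study's model outputs):

```python
def sensitivity_specificity(y_true, y_score, threshold=0.5):
    """Sensitivity = TP/(TP+FN) and specificity = TN/(TN+FP) at a score cutoff."""
    tp = sum(t == 1 and s >= threshold for t, s in zip(y_true, y_score))
    fn = sum(t == 1 and s < threshold for t, s in zip(y_true, y_score))
    tn = sum(t == 0 and s < threshold for t, s in zip(y_true, y_score))
    fp = sum(t == 0 and s >= threshold for t, s in zip(y_true, y_score))
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, y_score):
    """AUC as P(score_pos > score_neg), ties counted as 1/2 (Mann-Whitney form)."""
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]
print(sensitivity_specificity(labels, scores))  # → (0.666..., 0.666...)
print(auc(labels, scores))                      # → 0.888...
```

In practice one sweeps the threshold to trace the full ROC curve; libraries such as scikit-learn implement the same quantities.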


Subject(s)
Artificial Intelligence, COVID-19/diagnostic imaging, Radiologists, Tomography, X-Ray Computed/methods, Adult, Aged, Algorithms, Clinical Competence/statistics & numerical data, Deep Learning, Diagnosis, Differential, Female, Humans, Lung/diagnostic imaging, Lung/pathology, Male, Middle Aged, Radiologists/statistics & numerical data, Respiratory Tract Infections/diagnostic imaging, SARS-CoV-2, Sensitivity and Specificity, Young Adult
7.
J Xray Sci Technol ; 29(5): 741-762, 2021.
Article in English | MEDLINE | ID: mdl-34397444

ABSTRACT

BACKGROUND AND OBJECTIVE: Monitoring the recovery process of coronavirus disease 2019 (COVID-19) patients released from hospital is crucial for exploring the residual effects of COVID-19 and beneficial for clinical care. In this study, a comprehensive analysis was carried out to clarify the residual effects of COVID-19 on hospital-discharged patients. METHODS: Two hundred sixty-eight cases with laboratory data from hospital discharge records and five follow-up visits were retrospectively collected for comprehensive statistical analysis using multiple methods (e.g., chi-square tests, t-tests, and regression). RESULTS: The study found that 13 of 21 hematologic parameters in the laboratory dataset, as well as the volume ratio of right-lung lesions on CT images, were highly associated with COVID-19. Moderate patients had statistically significantly lower neutrophils than mild and severe patients after hospital discharge, probably because more clinical attention was devoted to severe patients while moderate patients were slightly neglected. COVID-19 has residual effects on the neutrophil-to-lymphocyte ratio (NLR) of patients with hypertension or chronic obstructive pulmonary disease (COPD). After release from hospital, females showed better recovery in T-lymphocyte subset cells, especially T-helper lymphocytes (16% higher than males). Given this sex-based differentiation of COVID-19, males should be recommended to undergo clinical testing more frequently to monitor recovery of the immune system. Patients over 60 years old showed an unstable recovery of immune cells (e.g., CD45+ lymphocytes) within 75 days after discharge, requiring longer clinical care. Additionally, the right lung was more vulnerable to COVID-19 and required more time to recover than the left lung. CONCLUSIONS: Criteria for hospital discharge and strategies for clinical care should be flexible across cases due to the residual effects of COVID-19, which depend on several impact factors.
Revealing the remaining effects of COVID-19 may also help alleviate mental health concerns caused by COVID-19 infection.


Subject(s)
COVID-19/diagnosis, Patient Discharge/statistics & numerical data, Adolescent, Adult, Aged, Aged, 80 and over, Biomarkers/blood, China, Female, Humans, Longitudinal Studies, Lung/diagnostic imaging, Male, Middle Aged, Retrospective Studies, SARS-CoV-2, Tomography, X-Ray Computed, Young Adult
8.
J Xray Sci Technol ; 28(3): 391-404, 2020.
Article in English | MEDLINE | ID: mdl-32538893

ABSTRACT

Recently, COVID-19 has spread in more than 100 countries and regions around the world, raising grave global concerns. COVID-19 transmits mainly through respiratory droplets and close contacts, causing cluster infections. The symptoms are dominantly fever, fatigue, and dry cough, and can be complicated by tiredness, sore throat, and headache. A few patients have symptoms such as stuffy nose, runny nose, and diarrhea. Severe disease can progress rapidly into acute respiratory distress syndrome (ARDS). Reverse transcription polymerase chain reaction (RT-PCR) and next-generation sequencing (NGS) are the gold standard for diagnosing COVID-19. Chest imaging is used for cross-validation. Chest CT is highly recommended as the preferred imaging diagnosis method for COVID-19 due to its high density resolution and high spatial resolution. The common CT manifestation of COVID-19 includes multiple segmental ground glass opacities (GGOs) distributed dominantly in extrapulmonary/subpleural zones and along bronchovascular bundles, with crazy paving sign, interlobular septal thickening, and consolidation. Pleural effusion or mediastinal lymphadenopathy is rarely seen. In CT imaging, COVID-19 manifests differently in its various stages, including the early stage, the progression (consolidation) stage, and the absorption stage. In its early stage, it manifests as scattered flaky GGOs of various sizes, dominated by peripheral pulmonary zone/subpleural distributions. In the progression stage, GGOs increase in number and/or size, and lung consolidations may become visible. The main manifestation in the absorption stage is interstitial change of both lungs, such as fibrous cords and reticular opacities. Differentiation between COVID-19 pneumonia and other viral pneumonias is also analyzed. Thus, CT examination can help reduce false negatives of nucleic acid tests.


Subject(s)
Betacoronavirus/pathogenicity, Coronavirus Infections/diagnosis, Coronavirus Infections/pathology, Lung/diagnostic imaging, Lung/pathology, Pneumonia, Viral/diagnosis, Pneumonia, Viral/pathology, Tomography, X-Ray Computed/methods, COVID-19, Coronavirus Infections/complications, Diagnosis, Differential, Disease Progression, Humans, Pandemics, Pleural Effusion/etiology, Pleural Effusion/pathology, Pneumonia, Viral/complications, Real-Time Polymerase Chain Reaction, SARS-CoV-2
9.
J Xray Sci Technol ; 28(5): 939-951, 2020.
Article in English | MEDLINE | ID: mdl-32651351

ABSTRACT

OBJECTIVE: Diagnosis of tuberculosis (TB) in multi-slice spiral computed tomography (CT) images is a difficult task in many TB-prevalent locations where experienced radiologists are lacking. To address this difficulty, we develop an automated detection system based on artificial intelligence (AI) in this study to simplify the diagnostic process of active tuberculosis (ATB) and improve diagnostic accuracy using CT images. DATA: A CT image dataset of 846 patients is retrospectively collected from a large teaching hospital. The gold standard for ATB patients is sputum smear, and the gold standard for normal and pneumonia patients is the CT report result. The dataset is divided into independent training and testing data subsets. The training data contains 337 ATB, 110 pneumonia, and 120 normal cases, while the testing data contains 139 ATB, 40 pneumonia, and 100 normal cases. METHODS: A U-Net deep learning algorithm was applied for automatic detection and segmentation of ATB lesions. Image processing methods are then applied to CT layers diagnosed as ATB lesions by U-Net, which can detect potentially misdiagnosed layers and turn 2D ATB lesions into 3D lesions based on consecutive U-Net annotations. Finally, independent test data is used to evaluate the performance of the developed AI tool. RESULTS: On the independent test, the AI tool yields an AUC value of 0.980. Accuracy, sensitivity, specificity, positive predictive value, and negative predictive value are 0.968, 0.964, 0.971, 0.971, and 0.964, respectively, which shows that the AI tool performs well for detection of ATB and differential diagnosis of non-ATB (i.e., pneumonia and normal cases). CONCLUSION: An AI tool for automatic detection of ATB in chest CT is successfully developed in this study.
The AI tool can accurately detect ATB patients and distinguish between ATB and non-ATB cases, which simplifies the diagnostic process and lays a solid foundation for the next step of AI in CT diagnosis of ATB in clinical application.
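Segmentation quality of the kind produced by the U-Net step above is typically scored with the Sørensen-Dice coefficient, the overlap metric reported by several studies in this list. A minimal sketch over binary masks represented as sets of voxel indices (illustrative only):

```python
def dice_coefficient(pred_voxels, truth_voxels):
    """Sørensen-Dice overlap: 2|A∩B| / (|A| + |B|) for two binary masks."""
    a, b = set(pred_voxels), set(truth_voxels)
    if not a and not b:
        return 1.0  # two empty masks agree perfectly
    return 2 * len(a & b) / (len(a) + len(b))

# Predicted lesion covers voxels 1-4, ground truth covers 3-6: overlap of 2
print(dice_coefficient({1, 2, 3, 4}, {3, 4, 5, 6}))  # → 0.5
```

Real pipelines compute the same quantity over dense 3D arrays, but the set form makes the definition explicit.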


Subject(s)
Deep Learning, Radiographic Image Interpretation, Computer-Assisted/methods, Tomography, X-Ray Computed/methods, Tuberculosis, Pulmonary/diagnostic imaging, Adolescent, Adult, Aged, Aged, 80 and over, Algorithms, Child, Child, Preschool, Female, Humans, Lung/diagnostic imaging, Male, Middle Aged, Young Adult
10.
J Xray Sci Technol ; 28(5): 885-892, 2020.
Article in English | MEDLINE | ID: mdl-32675436

ABSTRACT

In this article, we analyze and report the cases of three patients who were admitted to Renmin Hospital, Wuhan University, China, for treatment of COVID-19 pneumonia in February 2020 and were unresponsive to initial steroid treatment. They then received titrated steroid treatment based on the assessment of computed tomography (CT) images augmented and analyzed with an artificial intelligence (AI) tool. All three patients eventually recovered and were discharged. The results indicate that sufficient steroids may be effective in treating COVID-19 patients when dosing is frequently evaluated and promptly adjusted according to disease severity, as assessed by quantitative analysis of serial CT scans.


Subject(s)
Coronavirus Infections/diagnostic imaging, Coronavirus Infections/drug therapy, Glucocorticoids/therapeutic use, Pneumonia, Viral/diagnostic imaging, Pneumonia, Viral/drug therapy, Tomography, X-Ray Computed/methods, Aged, Artificial Intelligence, Betacoronavirus, COVID-19, China, Coronavirus Infections/pathology, Coronavirus Infections/physiopathology, Dose-Response Relationship, Drug, Female, Humans, Lung/diagnostic imaging, Lung/drug effects, Lung/pathology, Lung/physiopathology, Male, Middle Aged, Pandemics, Pneumonia, Viral/pathology, Pneumonia, Viral/physiopathology, Retrospective Studies, SARS-CoV-2
11.
J Endod ; 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39097163

ABSTRACT

INTRODUCTION: Cone-beam computed tomography (CBCT) is widely used to detect jaw lesions, although CBCT interpretation is time-consuming and challenging. Artificial intelligence for CBCT segmentation may improve lesion detection accuracy. However, consistent automated lesion detection remains difficult, especially with limited training data. This study aimed to assess the applicability of pretrained transformer-based architectures for semantic segmentation of CBCT volumes when applied to periapical lesion detection. METHODS: CBCT volumes (n = 138) were collected and annotated by expert clinicians using 5 labels: "lesion," "restorative material," "bone," "tooth structure," and "background." U-Net (convolutional neural network-based) and Swin-UNETR (transformer-based) models, the latter both pretrained (Swin-UNETR-PRETRAIN) and trained from scratch (Swin-UNETR-SCRATCH), were trained with subsets of the annotated CBCTs. These models were then evaluated for semantic segmentation performance using the Sørensen-Dice coefficient (DICE), lesion detection performance using sensitivity and specificity, and training sample size requirements by comparing models trained with 20, 40, 60, or 103 samples. RESULTS: Trained with 103 samples, Swin-UNETR-PRETRAIN achieved a DICE of 0.8512 for "lesion," 0.8282 for "restorative materials," 0.9178 for "bone," 0.9029 for "tooth structure," and 0.9901 for "background." "Lesion" DICE was statistically similar between Swin-UNETR-PRETRAIN trained with 103 and 60 images (P > .05), with the latter achieving 1.00 sensitivity and 0.94 specificity in lesion detection. With small training sets, Swin-UNETR-PRETRAIN outperformed Swin-UNETR-SCRATCH in DICE over all labels (P < .001 [n = 20], P < .001 [n = 40]), and U-Net in lesion detection specificity (P = .006 [n = 20], P = .031 [n = 40]). CONCLUSIONS: Transformer-based Swin-UNETR architectures allowed for excellent semantic segmentation and periapical lesion detection.
When pretrained, Swin-UNETR may provide an alternative to classic U-Net architectures when only smaller training datasets are available.

12.
J Endod ; 50(2): 220-228, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37979653

ABSTRACT

INTRODUCTION: Training of artificial intelligence (AI) for biomedical image analysis depends on large annotated datasets. This study assessed the efficacy of Active Learning (AL) strategies for training AI models for accurate multilabel segmentation and detection of periapical lesions in cone-beam CTs (CBCTs) using a limited dataset. METHODS: Limited field-of-view CBCT volumes (n = 20) were segmented by clinicians (clinician segmentation [CS]) and by Bayesian U-Net-based AL strategies. Two AL acquisition functions, Bayesian Active Learning by Disagreement (BALD) and Max_Entropy (ME), were used for multilabel segmentation ("Lesion"-"Tooth Structure"-"Bone"-"Restorative Materials"-"Background") and compared to a non-AL benchmark Bayesian U-Net. The training-to-testing set ratio was 4:1. Comparisons between the AL and Bayesian U-Net functions versus CS were made by evaluating segmentation accuracy with Dice indices and lesion detection accuracy. The Kruskal-Wallis test was used to assess statistically significant differences. RESULTS: The final training set contained 26 images. After 8 AL iterations, lesion detection sensitivity was 84.0% for BALD, 76.0% for ME, and 32.0% for Bayesian U-Net, which was significantly different (P < .0001; H = 16.989). The mean Dice index for all labels was 0.680 ± 0.155 for Bayesian U-Net and 0.703 ± 0.166 for ME after eight AL iterations, compared to 0.601 ± 0.267 for Bayesian U-Net over the mean of all iterations. The Dice index for "Lesion" was 0.504 for BALD and 0.501 for ME after 8 AL iterations, and at a maximum 0.288 for Bayesian U-Net. CONCLUSIONS: Both AL strategies based on uncertainty quantification from the Bayesian U-Net, BALD and ME, provided improved segmentation and lesion detection accuracy for CBCTs. AL may contribute to reducing the extensive labeling needs for training AI algorithms for biomedical image analysis in dentistry.
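The Max_Entropy acquisition described above can be sketched as ranking unlabeled samples by predictive entropy and querying the most uncertain ones for annotation. This is a simplified stand-in for the paper's Bayesian U-Net uncertainty estimates; the function names and pool values below are illustrative:

```python
import math

def predictive_entropy(probs):
    """Shannon entropy of a predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def max_entropy_query(pool_probs, k=1):
    """Indices of the k pool samples whose predictions are most uncertain."""
    ranked = sorted(range(len(pool_probs)),
                    key=lambda i: predictive_entropy(pool_probs[i]),
                    reverse=True)
    return ranked[:k]

# A near-uniform prediction is queried before confident ones
pool = [[0.98, 0.02], [0.50, 0.50], [0.75, 0.25]]
print(max_entropy_query(pool, k=2))  # → [1, 2]
```

BALD differs in that it scores the mutual information between predictions and model parameters (e.g., across Monte Carlo dropout passes) rather than raw entropy, so it prefers samples the model *disagrees* with itself about.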


Subject(s)
Algorithms, Artificial Intelligence, Bayes Theorem, Uncertainty, Cone-Beam Computed Tomography, Dental Materials, Image Processing, Computer-Assisted
13.
Article in English | MEDLINE | ID: mdl-37022061

ABSTRACT

Indoor fall monitoring is challenging for community-dwelling older adults due to the need for high accuracy and privacy concerns. Doppler radar is promising, given its low cost and contactless sensing mechanism. However, the line-of-sight restriction limits the application of radar sensing in practice, as the Doppler signature will vary when the sensing angle changes, and signal strength will be substantially degraded at large aspect angles. Additionally, the similarity of the Doppler signatures among different fall types makes classification extremely challenging. To address these problems, in this paper we first present a comprehensive experimental study to obtain Doppler radar signals under large and arbitrary aspect angles for diverse types of simulated falls and daily living activities. We then develop a novel, explainable, multi-stream, feature-resonated neural network (eMSFRNet) that achieves fall detection and a pioneering study of classifying seven fall types. eMSFRNet is robust to both radar sensing angles and subjects. It is also the first method that can resonate and enhance feature information from noisy/weak Doppler signatures. The multiple feature extractors, including partial pre-trained layers from ResNet, DenseNet, and VGGNet, extract diverse feature information with various spatial abstractions from a pair of Doppler signals. The feature-resonated-fusion design translates the multi-stream features into a single salient feature that is critical to fall detection and classification. eMSFRNet achieved 99.3% accuracy detecting falls and 76.8% accuracy classifying seven fall types. Our work is the first effective multistatic robust sensing system that overcomes the challenges associated with Doppler signatures under large and arbitrary aspect angles, via our comprehensible feature-resonated deep neural network. Our work also demonstrates the potential to accommodate different radar monitoring tasks that demand precise and robust sensing.

14.
medRxiv ; 2023 Aug 25.
Article in English | MEDLINE | ID: mdl-37662267

ABSTRACT

Early detection of Alzheimer's Disease (AD) is crucial to ensure timely interventions and optimize treatment outcomes for patients. While integrating multi-modal neuroimages, such as MRI and PET, has shown great promise, limited research has been done to effectively handle incomplete multi-modal image datasets in the integration. To this end, we propose a deep learning-based framework that employs Mutual Knowledge Distillation (MKD) to jointly model different sub-cohorts based on their respective available image modalities. In MKD, the model with more modalities (e.g., MRI and PET) is considered a teacher while the model with fewer modalities (e.g., only MRI) is considered a student. Our proposed MKD framework includes three key components: First, we design a teacher model that is student-oriented, namely the Student-oriented Multi-modal Teacher (SMT), through multi-modal information disentanglement. Second, we train the student model by not only minimizing its classification errors but also learning from the SMT teacher. Third, we update the teacher model by transfer learning from the student's feature extractor because the student model is trained with more samples. Evaluations on Alzheimer's Disease Neuroimaging Initiative (ADNI) datasets highlight the effectiveness of our method. Our work demonstrates the potential of using AI for addressing the challenges of incomplete multi-modal neuroimage datasets, opening new avenues for advancing early AD detection and treatment strategies.
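The teacher-student transfer described above is commonly implemented with a temperature-softened distillation loss. The following is a generic sketch of that standard recipe, not the paper's exact mutual-distillation objective; all names and values are illustrative:

```python
import math

def softmax(logits, temp=1.0):
    """Softmax over a list of logits at a given temperature."""
    exps = [math.exp(z / temp) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, label, temp=2.0, alpha=0.5):
    """alpha * CE(student, hard label) + (1 - alpha) * T^2 * KL(teacher || student),
    with both distributions softened by temperature T."""
    p_student = softmax(student_logits, temp)
    p_teacher = softmax(teacher_logits, temp)
    kl = sum(pt * math.log(pt / ps)
             for pt, ps in zip(p_teacher, p_student) if pt > 0)
    ce = -math.log(softmax(student_logits)[label])
    return alpha * ce + (1 - alpha) * temp * temp * kl

# When student and teacher agree exactly, only the hard-label term remains
print(distillation_loss([1.0, 1.0], [1.0, 1.0], label=0))  # → 0.5 * ln 2 ≈ 0.3466
```

In the paper's mutual form, the "teacher" (multi-modal model) and "student" (single-modal model) are trained jointly, with the teacher also updated from the student's feature extractor; the loss above only illustrates the distillation term itself.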

15.
medRxiv ; 2023 Apr 26.
Article in English | MEDLINE | ID: mdl-37162842

ABSTRACT

Early diagnosis of Alzheimer's disease (AD) is an important task that facilitates the development of treatment and prevention strategies and may potentially improve patient outcomes. Neuroimaging has shown great promise, including the amyloid-PET which measures the accumulation of amyloid plaques in the brain - a hallmark of AD. It is desirable to train end-to-end deep learning models to predict the progression of AD for individuals at early stages based on 3D amyloid-PET. However, commonly used models are trained in a fully supervised learning manner and they are inevitably biased toward the given label information. To this end, we propose a self-supervised contrastive learning method to predict AD progression with 3D amyloid-PET. It uses unlabeled data to capture general representations underlying the images. As the downstream task is given as classification, unlike the general self-supervised learning problem that aims to generate task-agnostic representations, we also propose a loss function to utilize the label information in the pre-training. To demonstrate the performance of our method, we conducted experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. The results confirmed that the proposed method is capable of providing appropriate data representations, resulting in accurate classification.

16.
Bioengineering (Basel) ; 10(10)2023 Sep 28.
Article in English | MEDLINE | ID: mdl-37892871

ABSTRACT

Early diagnosis of Alzheimer's disease (AD) is an important task that facilitates the development of treatment and prevention strategies, and may potentially improve patient outcomes. Neuroimaging has shown great promise, including the amyloid-PET, which measures the accumulation of amyloid plaques in the brain, a hallmark of AD. It is desirable to train end-to-end deep learning models to predict the progression of AD for individuals at early stages based on 3D amyloid-PET. However, commonly used models are trained in a fully supervised learning manner, and they are inevitably biased toward the given label information. To this end, we propose a self-supervised contrastive learning method to accurately predict the conversion to AD for individuals with mild cognitive impairment (MCI) with 3D amyloid-PET. The proposed method, SMoCo, uses both labeled and unlabeled data to capture general semantic representations underlying the images. As the downstream task is given as classification of converters vs. non-converters, unlike the general self-supervised learning problem that aims to generate task-agnostic representations, SMoCo additionally utilizes the label information in the pre-training. To demonstrate the performance of our method, we conducted experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. The results confirmed that the proposed method is capable of providing appropriate data representations, resulting in accurate classification. SMoCo showed the best classification performance over the existing methods, with AUROC = 85.17%, accuracy = 81.09%, sensitivity = 77.39%, and specificity = 82.17%. While SSL has demonstrated great success in other application domains of computer vision, this study provided the initial investigation of using a proposed self-supervised contrastive learning model, SMoCo, to effectively predict MCI conversion to AD based on 3D amyloid-PET.
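The label-aware pre-training objective described above resembles a supervised contrastive loss, which pulls embeddings sharing a label together and pushes the rest apart. A simplified sketch on unit-norm embeddings (illustrative only; not SMoCo's exact momentum-queue formulation):

```python
import math

def supervised_contrastive_loss(embeddings, labels, temp=0.1):
    """Mean over anchor-positive pairs of
    -log( exp(z_i·z_p / T) / sum_{j != i} exp(z_i·z_j / T) )."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    n = len(embeddings)
    total, terms = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        denom = sum(math.exp(dot(embeddings[i], embeddings[j]) / temp)
                    for j in range(n) if j != i)
        for j in positives:
            total -= math.log(math.exp(dot(embeddings[i], embeddings[j]) / temp) / denom)
            terms += 1
    return total / terms

separated = [[1, 0], [1, 0], [0, 1], [0, 1]]  # classes occupy distinct directions
collapsed = [[1, 0], [1, 0], [1, 0], [1, 0]]  # all embeddings identical
labels = [0, 0, 1, 1]
# Well-separated classes yield a lower loss than collapsed embeddings
print(supervised_contrastive_loss(separated, labels) <
      supervised_contrastive_loss(collapsed, labels))  # → True
```

In practice the embeddings come from an encoder over augmented 3D amyloid-PET views, and the loss drives that encoder before the classification head is fine-tuned.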

17.
Quant Imaging Med Surg ; 12(4): 2344-2355, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35371946

ABSTRACT

Background: It is critical to have a deep learning-based system validated on an external dataset before it is used to assist clinical prognoses. The aim of this study was to assess the performance of an artificial intelligence (AI) system for detecting tuberculosis (TB) in a large-scale external dataset. Methods: A deep convolutional neural network (DCNN) was developed to differentiate TB from other common abnormalities of the lung on large-scale chest X-ray radiographs. An internal dataset with 7,025 images from five sources in the U.S. and China was used to develop the AI system, after which a 6-year dynamic cohort accumulation dataset with 358,169 images was used to conduct an independent external validation of the trained AI system. Results: The developed AI system provided a delineation of the boundaries of the lung region with a Dice coefficient of 0.958. It achieved an AUC of 0.99 and an accuracy of 0.948 on the internal dataset, and an AUC of 0.95 and an accuracy of 0.931 on the external dataset, when used to detect TB from normal images. The AI system achieved an AUC of more than 0.9 on the internal dataset, and an AUC of over 0.8 on the external dataset, when applied to classify TB, non-TB abnormal, and normal images. Conclusions: We conducted a real-world independent validation, which showed that the trained system can be used as a TB screening tool to flag possible cases for rapid radiologic review and to guide further examinations by radiologists.

18.
IISE Trans ; 53(9): 1010-1022, 2021.
Article in English | MEDLINE | ID: mdl-37397785

ABSTRACT

Multimodality datasets are becoming increasingly common in various domains to provide complementary information for predictive analytics. One significant challenge in fusing multimodality data is that the multiple modalities are not universally available for all samples due to cost and accessibility constraints. This results in a unique data structure called an Incomplete Multimodality Dataset (IMD). We propose a novel Incomplete-Multimodality Transfer Learning (IMTL) model that builds a predictive model for each sub-cohort of samples sharing the same missing-modality pattern, while coupling the model estimation processes across sub-cohorts to enable transfer learning. We develop an Expectation-Maximization (EM) algorithm to estimate the parameters of IMTL and further extend it to a collaborative learning paradigm that is specifically valuable for patient privacy preservation in health care applications. We prove two advantageous properties of IMTL: the ability for out-of-sample prediction and a theoretical guarantee of larger Fisher information compared with models without transfer learning. IMTL is applied to diagnosis and prognosis of Alzheimer's disease (AD) at an early stage called mild cognitive impairment (MCI) using incomplete multimodality imaging data. IMTL achieves higher accuracy than competing methods without transfer learning. Supplementary materials are available for this article on the publisher's website.
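IMTL fits one model per sub-cohort of samples sharing a missing-modality pattern. The EM coupling itself is beyond a short sketch, but the first step, partitioning an incomplete dataset by its observed-modality patterns, can be illustrated as follows (NaN marks a missing modality; the function name and representation are assumptions for illustration):

```python
import numpy as np
from collections import defaultdict

def split_by_missing_pattern(X):
    """Group rows of a feature matrix (NaN = missing modality) by their
    pattern of observed columns. Returns {pattern tuple: list of row indices},
    where True in the pattern means the column is observed."""
    groups = defaultdict(list)
    for i, row in enumerate(np.asarray(X, float)):
        pattern = tuple(bool(v) for v in ~np.isnan(row))
        groups[pattern].append(i)
    return dict(groups)
```

Each resulting sub-cohort would get its own predictive model, with parameters tied across sub-cohorts during estimation.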

19.
J Med Imaging (Bellingham) ; 8(Suppl 1): 014501, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33415179

ABSTRACT

Purpose: Given the recent COVID-19 pandemic and its stress on global medical resources, presented here is the development of a machine intelligence method for thoracic computed tomography (CT) to inform the management of patients on steroid treatment. Approach: Transfer learning has demonstrated strong performance when applied to medical imaging, particularly when only limited data are available. A cascaded transfer learning approach extracted quantitative features from thoracic CT sections using a fine-tuned VGG19 network. The extracted slice features were axially pooled to provide a CT-scan-level representation of thoracic characteristics, and a support vector machine was trained to distinguish between patients who required steroid administration and those who did not, with performance evaluated through receiver operating characteristic (ROC) curve analysis. Least-squares fitting was used to assess temporal trends using the transfer learning approach, providing a preliminary method for monitoring disease progression. Results: In the task of identifying patients who should receive steroid treatment, this approach yielded an area under the ROC curve of 0.85 ± 0.10 and demonstrated significant separation between patients who received steroids and those who did not. Furthermore, temporal trend analysis of the prediction score matched expected progression during hospitalization for both groups, with separation at early timepoints prior to convergence near the end of the hospitalization period. Conclusions: The proposed cascade deep learning method has strong clinical potential for informing clinical decision-making and monitoring patient treatment.
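The cascade pools per-slice VGG19 features axially into a single scan-level vector before the SVM. The exact pooling operator is not specified in this abstract, so the mean-plus-max concatenation below is an assumption for illustration, not the authors' implementation:

```python
import numpy as np

def pool_slice_features(slice_feats):
    """Axially pool per-slice feature vectors (n_slices x d) into one
    scan-level vector by concatenating the mean and max over slices."""
    f = np.asarray(slice_feats, float)
    return np.concatenate([f.mean(axis=0), f.max(axis=0)])
```

A scan-level classifier (e.g., a support vector machine) would then be trained on these pooled vectors, one per CT scan.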

20.
Transl Res ; 194: 56-67, 2018 04.
Article in English | MEDLINE | ID: mdl-29352978

ABSTRACT

Alzheimer's disease (AD) is a major neurodegenerative disease and the most common cause of dementia. Currently, no treatment exists to slow down or stop the progression of AD. There is converging belief that disease-modifying treatments should focus on early stages of the disease, that is, the mild cognitive impairment (MCI) and preclinical stages. Making a diagnosis of AD and offering a prognosis (likelihood of converting to AD) at these early stages are challenging tasks, but they are possible with the help of multimodality imaging, such as magnetic resonance imaging (MRI), fluorodeoxyglucose (FDG)-positron emission tomography (PET), amyloid-PET, and the recently introduced tau-PET, each providing different but complementary information. This article is a focused review of research in the recent decade that used statistical machine learning and artificial intelligence methods to perform quantitative analysis of multimodality image data for diagnosis and prognosis of AD at the MCI or preclinical stages. We review the existing work in 3 subareas: diagnosis, prognosis, and methods for handling modality-wise missing data, a commonly encountered problem when using multimodality imaging for prediction or classification. Factors contributing to missing data include lack of imaging equipment, cost, difficulty of obtaining patient consent, and patient dropout (in longitudinal studies). Finally, we summarize our major findings and provide some recommendations for potential future research directions.


Subject(s)
Alzheimer Disease/diagnostic imaging , Artificial Intelligence , Multimodal Imaging/methods , Cognitive Dysfunction/diagnostic imaging , Fluorodeoxyglucose F18 , Humans , Machine Learning , Magnetic Resonance Imaging , Positron-Emission Tomography , Prognosis