Results 1 - 16 of 16
1.
BMC Med Imaging; 22(1): 52, 2022 Mar 22.
Article in English | MEDLINE | ID: mdl-35317725

ABSTRACT

BACKGROUND: Enteral nutrition through feeding tubes serves as the primary method of nutritional supplementation for patients unable to feed themselves. Plain radiographs are routinely used to confirm the position of nasoenteric feeding tubes following insertion and before the commencement of tube feeds. Convolutional neural networks (CNNs) have shown encouraging results in assisting tube positioning assessment. However, robust CNNs are often trained on large amounts of manually annotated data, which makes applying CNNs to enteral feeding tube positioning assessment challenging. METHOD: We built a CNN model for feeding tube positioning assessment by pre-training the model in a weakly supervised fashion on large quantities of radiographs. Since most of the model was pre-trained, only a small amount of labeled data was needed when fine-tuning the model for tube positioning assessment. We demonstrate the proposed method using a small dataset of 175 radiographs. RESULT: The experimental results show that the proposed model improves the area under the receiver operating characteristic curve (AUC) by up to 35.71%, from 0.56 to 0.76, and accuracy by 14.49%, from 0.69 to 0.79, compared with the same model without pre-training. The proposed method also has up to 40% less error when estimating its prediction confidence. CONCLUSION: Our evaluation results show that the proposed model achieves higher prediction accuracy and a more accurately estimated prediction confidence than the non-pre-trained model and other baseline models. The proposed method can potentially be used for assessing enteral tube positioning, and it provides a strong baseline for future studies.
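As a rough illustration of the pre-train-then-fine-tune recipe described above, the PyTorch sketch below freezes a (hypothetical) weakly pre-trained backbone and trains only a small classification head, so that a labeled set on the order of 175 radiographs suffices; the backbone architecture, feature dimension, and two-class head are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a backbone pre-trained on unlabeled radiographs;
# the real model's architecture is not given in the abstract.
backbone = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

class TubeClassifier(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int):
        super().__init__()
        self.backbone = backbone            # weakly pre-trained encoder
        self.head = nn.Linear(feat_dim, 2)  # correctly placed vs. misplaced

    def forward(self, x):
        return self.head(self.backbone(x))

model = TubeClassifier(backbone, feat_dim=16)
# Freeze the pre-trained part so only the head needs the small labeled set.
for p in model.backbone.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)
```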


Subject(s)
Enteral Nutrition; Neural Networks, Computer; Humans; ROC Curve
2.
Article in English | MEDLINE | ID: mdl-38829757

ABSTRACT

Clinical studies have shown that both structural magnetic resonance imaging (sMRI) and functional magnetic resonance imaging (fMRI) are implicitly associated with neuropsychiatric disorders (NDs), and integrating the two modalities for binary classification of NDs has been thoroughly explored. However, accurately classifying multiple classes of NDs remains a challenge due to the complexity of disease subclasses. In our study, we develop a heterogeneous neural network (H-Net) that integrates sMRI and fMRI modalities for classifying multi-class NDs. To account for the differences between the two modalities, H-Net adopts a heterogeneous strategy to extract information from each one. Specifically, H-Net includes a multi-layer perceptron-based (MLP-based) encoder, a graph attention network-based (GAT-based) encoder, and a cross-modality transformer block. The MLP-based and GAT-based encoders extract semantic features from sMRI and features from fMRI, respectively, while the cross-modality transformer block models the attention between the two types of features. In H-Net, the proposed MLP-mixer block and cross-modality alignment are powerful tools for improving multi-class classification performance on NDs. H-Net is validated on a public dataset (CNP), where it achieves 90% classification accuracy in diagnosing multi-class NDs. Furthermore, we demonstrate the complementarity of the two MRI modalities in improving the identification of multi-class NDs. Both visual and statistical analyses show the differences between ND subclasses.
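The abstract does not spell out H-Net's internals, so here is a minimal sketch of what a cross-modality transformer block could look like, with sMRI tokens attending to fMRI tokens; the token counts, embedding size, and residual-plus-norm layout are assumptions.

```python
import torch
import torch.nn as nn

class CrossModalityBlock(nn.Module):
    """sMRI tokens (queries) attend to fMRI tokens (keys/values)."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, smri_tokens, fmri_tokens):
        fused, _ = self.attn(smri_tokens, fmri_tokens, fmri_tokens)
        return self.norm(smri_tokens + fused)  # residual + norm (assumed)

smri = torch.randn(8, 16, 64)  # batch x tokens x dim, from the MLP encoder
fmri = torch.randn(8, 32, 64)  # batch x tokens x dim, from the GAT encoder
print(CrossModalityBlock(64)(smri, fmri).shape)  # torch.Size([8, 16, 64])
```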

3.
IEEE J Biomed Health Inform; 27(7): 3372-3383, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37104101

ABSTRACT

Segmenting stroke lesions and assessing the thrombolysis in cerebral infarction (TICI) grade are two important but challenging prerequisites for an auxiliary diagnosis of stroke. However, most previous studies have focused on only one of the two tasks, without considering the relation between them. In our study, we propose a simulated quantum mechanics-based joint learning network (SQMLP-net) that simultaneously segments a stroke lesion and assesses the TICI grade. The correlation and heterogeneity between the two tasks are tackled with a single-input double-output hybrid network. SQMLP-net has a segmentation branch and a classification branch. The two branches share an encoder, which extracts and shares spatial and global semantic information for the segmentation and classification tasks. Both tasks are optimized by a novel joint loss function that learns the intra- and inter-task weights between them. Finally, we evaluate SQMLP-net on a public stroke dataset (ATLAS R2.0). SQMLP-net obtains state-of-the-art metrics (Dice: 70.98% and accuracy: 86.78%) and outperforms single-task and existing advanced methods. Our analysis found a negative correlation between the severity of the TICI grade and the accuracy of stroke lesion segmentation.
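The abstract does not give the form of the joint loss, so the sketch below shows one standard way to learn task weights — homoscedastic uncertainty weighting in the style of Kendall et al. — as a stand-in for SQMLP-net's actual formulation.

```python
import torch
import torch.nn as nn

class JointLoss(nn.Module):
    """Homoscedastic uncertainty weighting of the two task losses."""
    def __init__(self):
        super().__init__()
        self.log_var_seg = nn.Parameter(torch.zeros(()))  # learnable task weight
        self.log_var_cls = nn.Parameter(torch.zeros(()))

    def forward(self, seg_loss, cls_loss):
        return (torch.exp(-self.log_var_seg) * seg_loss + self.log_var_seg
                + torch.exp(-self.log_var_cls) * cls_loss + self.log_var_cls)

loss = JointLoss()(torch.tensor(0.7), torch.tensor(0.4))  # toy task losses
```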


Subject(s)
Cerebral Infarction; Stroke; Humans; Cerebral Infarction/diagnostic imaging; Stroke/diagnostic imaging; Benchmarking; Semantics; Image Processing, Computer-Assisted
4.
Bioengineering (Basel); 10(8), 2023 Jul 29.
Article in English | MEDLINE | ID: mdl-37627786

ABSTRACT

The COVID-19 pandemic has underscored the urgent need for rapid and accurate diagnosis facilitated by artificial intelligence (AI), particularly in computer-aided diagnosis using medical imaging. However, this context presents two notable challenges: the demand for high diagnostic accuracy and the limited availability of medical data for training AI models. To address these issues, we proposed a Masked AutoEncoder (MAE), an innovative self-supervised learning approach, for classifying 2D chest X-ray images. Our approach performs image reconstruction using a Vision Transformer (ViT) model as the feature encoder, paired with a custom-defined decoder. We then fine-tuned the pre-trained ViT encoder, serving as the backbone, on a labeled medical dataset. To evaluate our approach, we conducted a comparative analysis of three distinct training methods: training from scratch, transfer learning, and MAE-based training, all using COVID-19 chest X-ray images. The results demonstrate that MAE-based training produces superior performance, achieving an accuracy of 0.985 and an AUC of 0.9957. We explored the influence of the mask ratio on MAE and found that a ratio of 0.4 gives the best performance. Furthermore, we illustrate that MAE is remarkably label-efficient, delivering comparable performance while using only 30% of the original labeled training dataset. Overall, our findings highlight the significant performance enhancement achieved by MAE, particularly when working with limited datasets. This approach holds profound implications for future disease diagnosis, especially in scenarios where imaging data are scarce.
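A small sketch of the random patch masking at the heart of MAE pre-training, using the best-performing mask ratio of 0.4 reported above; the ViT-Base-style patch grid (196 tokens of width 768) is an illustrative assumption.

```python
import torch

def random_mask(patches: torch.Tensor, ratio: float = 0.4):
    """Keep a random subset of patch tokens; `patches` is (batch, tokens, dim)."""
    b, n, d = patches.shape
    keep = int(n * (1 - ratio))                      # tokens left visible
    idx = torch.rand(b, n).argsort(dim=1)[:, :keep]  # random token indices
    visible = torch.gather(patches, 1, idx.unsqueeze(-1).expand(-1, -1, d))
    return visible, idx        # idx lets the decoder restore token positions

vis, _ = random_mask(torch.randn(2, 196, 768))
print(vis.shape)  # torch.Size([2, 117, 768]) -- 60% of 196 patches kept
```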

5.
Electronics (Basel); 12(2), 2023 Jan 02.
Article in English | MEDLINE | ID: mdl-36778519

ABSTRACT

Three-dimensional convolutional neural networks (3D CNNs) have been widely applied to analyze Alzheimer's disease (AD) brain images, both to better understand disease progression and to predict conversion from cognitively unimpaired (CU) or mild cognitive impairment status. It is well known that training a 3D CNN is computationally expensive and prone to overfitting given the small sample sizes available in the medical imaging field. Here we propose a novel 3D-to-2D approach that converts a 3D brain image into a 2D fused image using a Learnable Weighted Pooling (LWP) method, improving training efficiency while maintaining comparable model performance. With the 3D-to-2D conversion, the proposed model can forward the fused 2D image through a pre-trained 2D model while achieving better performance than various 3D and 2D baselines. In our implementation, we chose ResNet34 for feature extraction, as it outperformed other 2D CNN backbones. We further show that the slice weights are location-dependent and that model performance depends on the 3D-to-2D fusion view, with the coronal view yielding the best outcomes. With the new approach, we reduced training time by 75% and increased accuracy to 0.88, compared with conventional 3D CNNs, for distinguishing amyloid-beta PET images of AD patients from those of CU participants using the publicly available Alzheimer's Disease Neuroimaging Initiative dataset. The novel 3D-to-2D model may have profound implications for timely AD diagnosis in clinical settings in the future.
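A minimal sketch of the LWP idea, assuming one learnable weight per slice with a softmax normalization; the paper's exact parameterization may differ.

```python
import torch
import torch.nn as nn

class LearnableWeightedPooling(nn.Module):
    """Collapse a 3D volume to 2D with one learnable weight per slice."""
    def __init__(self, num_slices: int):
        super().__init__()
        self.w = nn.Parameter(torch.ones(num_slices) / num_slices)

    def forward(self, vol):                   # vol: (batch, slices, H, W)
        w = torch.softmax(self.w, dim=0)      # location-dependent weights
        return (vol * w.view(1, -1, 1, 1)).sum(dim=1)

fused = LearnableWeightedPooling(96)(torch.randn(4, 96, 128, 128))
print(fused.shape)  # torch.Size([4, 128, 128]); add a channel axis for a 2D CNN
```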

6.
Front Neuroinform; 16: 859973, 2022.
Article in English | MEDLINE | ID: mdl-35600503

ABSTRACT

Encoder-decoder-based deep convolutional neural networks (CNNs) have brought great improvements to medical image segmentation tasks. However, due to the inherent locality of convolution, CNNs are generally limited in capturing features across layers and long-range features within a medical image. In this study, we develop a local-long-range hybrid features network (LLRHNet) for medical image segmentation, which inherits the merits of an iterative aggregation mechanism and transformer technology. LLRHNet adopts an encoder-decoder architecture as its backbone, which iteratively aggregates projection and up-sampling to fuse local low- and high-resolution features across isolated layers. A transformer uses multi-head self-attention to extract long-range features from tokenized image patches and fuses these with the local-range features extracted by the down-sampling operations in the backbone network. These hybrid features then assist the cascaded up-sampling operations in locating the target tissues. LLRHNet is evaluated on two multi-lesion medical image datasets: a public liver-related segmentation dataset (3DIRCADb) and an in-house stroke and white matter hyperintensity (SWMH) segmentation dataset. Experimental results show that LLRHNet achieves state-of-the-art performance on both datasets.
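A minimal sketch of one way local CNN features and long-range self-attention features can be fused — tokenize the feature map, run a transformer layer, concatenate, and project; LLRHNet's actual fusion is more elaborate, so treat the layout and dimensions here as assumptions.

```python
import torch
import torch.nn as nn

class HybridFusion(nn.Module):
    """Fuse local CNN features with long-range self-attention features."""
    def __init__(self, dim: int):
        super().__init__()
        self.attn = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.proj = nn.Conv2d(2 * dim, dim, kernel_size=1)  # concat + 1x1 conv

    def forward(self, cnn_feat):              # (batch, dim, H, W)
        b, c, h, w = cnn_feat.shape
        tokens = cnn_feat.flatten(2).transpose(1, 2)      # tokenized patches
        long_range = self.attn(tokens).transpose(1, 2).reshape(b, c, h, w)
        return self.proj(torch.cat([cnn_feat, long_range], dim=1))

print(HybridFusion(64)(torch.randn(2, 64, 16, 16)).shape)  # (2, 64, 16, 16)
```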

7.
IEEE J Biomed Health Inform; 26(4): 1640-1649, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34495856

ABSTRACT

A key challenge in training neural networks for a given medical imaging task is the difficulty of obtaining a sufficient number of manually labeled examples. In contrast, textual imaging reports are often readily available in medical records and contain rich but unstructured interpretations written by experts as part of standard clinical practice. We propose using these textual reports as a form of weak supervision to improve the image interpretation performance of a neural network without requiring additional manually labeled examples. We use an image-text matching task to train a feature extractor and then fine-tune it in a transfer learning setting for a supervised task using a small labeled dataset. The end result is a neural network that automatically interprets imagery without requiring textual reports during inference. We evaluate our method on three classification tasks and find consistent performance improvements, reducing the need for labeled data by 67%-98%.
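The matching objective is not spelled out in the abstract; below is a common symmetric contrastive formulation of image-report matching, offered as a plausible sketch rather than the authors' exact loss.

```python
import torch
import torch.nn.functional as F

def matching_loss(img_emb, txt_emb, temperature: float = 0.07):
    """Symmetric image-report matching: the i-th image pairs with the i-th report."""
    img = F.normalize(img_emb, dim=1)
    txt = F.normalize(txt_emb, dim=1)
    logits = img @ txt.t() / temperature    # pairwise cosine similarities
    target = torch.arange(len(img))         # matching pairs sit on the diagonal
    return (F.cross_entropy(logits, target) +
            F.cross_entropy(logits.t(), target)) / 2

loss = matching_loss(torch.randn(8, 256), torch.randn(8, 256))
```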


Subject(s)
Diagnostic Imaging; Neural Networks, Computer; Humans; Radiography
8.
Front Neurosci; 16: 832276, 2022.
Article in English | MEDLINE | ID: mdl-35692429

ABSTRACT

Multi-modal magnetic resonance imaging (MRI) is widely used for diagnosing brain disease in clinical practice. However, the high dimensionality of MRI data makes training a convolutional neural network challenging, and jointly utilizing multiple MRI modalities is even more difficult. To overcome these challenges, we developed a decomposition-based correlation learning (DCL) method that captures the complex relationship between structural MRI and functional MRI data. Guided by matrix decomposition, DCL takes into account the spike magnitude of the leading eigenvalues, the number of samples, and the dimensionality of the matrix. Canonical correlation analysis (CCA) was used to analyze the correlations and construct the matrices. We evaluated DCL on the classification of the multiple neuropsychiatric disorders listed in the Consortium for Neuropsychiatric Phenomics (CNP) dataset. In experiments, our method achieved higher accuracy than several existing methods. Moreover, we found interesting feature connections in the DCL-based brain matrices that can differentiate diseased from normal cases, as well as different subtypes of the disease. Furthermore, in extended experiments on a large-sample dataset and a small-sample dataset, compared with several well-established methods designed for multi-class neuropsychiatric disorder classification, our proposed method achieved state-of-the-art performance on all three datasets.
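Since DCL builds on canonical correlation analysis, here is a tiny, self-contained CCA example with random stand-ins for sMRI and fMRI feature matrices; it illustrates only the CCA step, not the full decomposition-guided method.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
smri = rng.standard_normal((100, 50))  # subjects x sMRI features (synthetic)
fmri = rng.standard_normal((100, 80))  # subjects x fMRI features (synthetic)

cca = CCA(n_components=5)
smri_c, fmri_c = cca.fit_transform(smri, fmri)
# Correlation of the first canonical variate pair:
print(np.corrcoef(smri_c[:, 0], fmri_c[:, 0])[0, 1])
```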

9.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 3849-3853, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36085751

ABSTRACT

Deep neural networks (DNNs) are the primary driving force behind current medical imaging analysis tools and often provide exciting performance on various tasks. However, such results are usually reported in terms of the overall performance of the DNN, such as the peak signal-to-noise ratio (PSNR) or mean squared error (MSE) for image generation tasks. As black boxes, DNNs usually produce relatively stable performance on the same task across multiple training trials, while the learned feature spaces can differ significantly. We believe additional insightful analysis, such as uncertainty analysis of the learned feature space, is equally important, if not more so. In this work, we evaluate the learned feature spaces of multiple U-Net architectures for image generation tasks using computational and clustering analysis methods. We demonstrate that the learned feature spaces are easily separable between different training trials of the same architecture with the same hyperparameter settings, indicating that the models use different criteria for the same task. This phenomenon naturally raises the question of which criteria are correct to use. Our work therefore suggests that assessments beyond overall performance are needed before applying a DNN model in real-world practice.
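One way to make the separability claim concrete: cluster pooled features from two training trials and score the split. The synthetic Gaussian features below merely stand in for real U-Net activations.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
trial_a = rng.normal(0.0, 1.0, (200, 64))  # pooled features, training trial A
trial_b = rng.normal(3.0, 1.0, (200, 64))  # trial B, a shifted feature space

feats = np.vstack([trial_a, trial_b])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
# A silhouette score near 1 means the two trials' feature spaces separate
# cleanly, mirroring the observation above.
print(silhouette_score(feats, labels))
```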


Subject(s)
Diagnostic Imaging; Neural Networks, Computer; Uncertainty
10.
Annu Int Conf IEEE Eng Med Biol Soc; 2021: 3008-3012, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34891877

ABSTRACT

Alzheimer's disease (AD) is an untreatable, irreversible disease that affects about 6% of people aged 65 and older. Brain magnetic resonance imaging (MRI) is a pseudo-3D imaging technology widely used for AD diagnosis. Convolutional neural networks with 3D kernels (3D CNNs) are often the default choice for deep learning-based MRI analysis. However, 3D CNNs are usually computationally costly and data-hungry. Such disadvantages pose a barrier to using modern deep learning techniques in the medical imaging domain, where the amount of data available for training is usually limited. In this work, we propose three approaches that leverage 2D CNNs on 3D MRI data. We test the proposed methods on the Alzheimer's Disease Neuroimaging Initiative dataset across two popular 2D CNN architectures. The evaluation results show that the proposed method improves model performance on AD diagnosis by 8.33% in accuracy or 10.11% in auROC compared with the ResNet-based 3D CNN model, while reducing training time by over 89%. We also discuss the potential causes of the performance improvement and the limitations. We believe this work can serve as a strong baseline for future researchers.
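The abstract lists three 2D-on-3D approaches without detailing them; the sketch below shows one generic variant — a shared 2D CNN applied per slice with mean pooling over slices — purely as an illustration of the idea, with an assumed tiny backbone.

```python
import torch
import torch.nn as nn

class SliceWise2DCNN(nn.Module):
    """Shared 2D CNN per slice, mean-pooled over slices, then a classifier."""
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(8, 2)             # AD vs. control

    def forward(self, vol):                     # vol: (batch, slices, H, W)
        b, s, h, w = vol.shape
        feats = self.cnn(vol.reshape(b * s, 1, h, w)).reshape(b, s, -1)
        return self.head(feats.mean(dim=1))     # pool slice features

print(SliceWise2DCNN()(torch.randn(2, 96, 128, 128)).shape)  # torch.Size([2, 2])
```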


Subject(s)
Alzheimer Disease; Alzheimer Disease/diagnostic imaging; Brain/diagnostic imaging; Humans; Magnetic Resonance Imaging; Neural Networks, Computer; Neuroimaging
11.
Annu Int Conf IEEE Eng Med Biol Soc; 2021: 3586-3591, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34892014

ABSTRACT

Alzheimer's disease (AD) is a devastating neurological disorder that primarily affects the elderly; an estimated 6.2 million Americans aged 65 and older are living with Alzheimer's dementia today. Brain magnetic resonance imaging (MRI) is widely used for the clinical diagnosis of AD. Meanwhile, medical researchers have identified 40 risk loci using single-nucleotide polymorphism (SNP) information from genome-wide association studies (GWAS) over the past decades. However, existing studies usually treat MRI and GWAS separately: convolutional neural networks are often trained on MRI for AD diagnosis, while GWAS and SNPs are used to identify genomic traits. In this study, we propose a multi-modal AD diagnosis neural network that uses both MRIs and SNPs. The proposed method demonstrates a novel way to use GWAS findings by directly including SNPs in predictive models. We test the proposed method on the Alzheimer's Disease Neuroimaging Initiative dataset. The evaluation results show that the proposed method improves model performance on AD diagnosis, achieving 93.5% AUC and 96.1% average precision (AP) when patients have both MRI and SNP data. We believe this work brings exciting new insights to GWAS applications and sheds light on future research directions.
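A minimal sketch of how SNPs can enter a predictive model directly, via late fusion of an image feature vector with an encoded genotype vector; the dimensions, the 40-SNP input, and the additive 0/1/2 coding are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MriSnpFusion(nn.Module):
    """Late fusion of an MRI feature vector with an encoded SNP vector."""
    def __init__(self, img_dim: int = 128, n_snps: int = 40):
        super().__init__()
        self.snp_enc = nn.Sequential(nn.Linear(n_snps, 32), nn.ReLU())
        self.head = nn.Linear(img_dim + 32, 2)  # AD vs. control

    def forward(self, img_feat, snps):
        return self.head(torch.cat([img_feat, self.snp_enc(snps)], dim=1))

# SNPs coded as 0/1/2 minor-allele counts for 40 risk loci (illustrative).
snps = torch.randint(0, 3, (4, 40)).float()
print(MriSnpFusion()(torch.randn(4, 128), snps).shape)  # torch.Size([4, 2])
```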


Subject(s)
Alzheimer Disease; Aged; Alzheimer Disease/diagnostic imaging; Alzheimer Disease/genetics; Data Analysis; Genome-Wide Association Study; Humans; Magnetic Resonance Imaging; Neuroimaging
12.
Annu Int Conf IEEE Eng Med Biol Soc; 2020: 1124-1127, 2020 Jul.
Article in English | MEDLINE | ID: mdl-33018184

ABSTRACT

Deep learning methods have dramatically advanced the state of the art in image object localization. However, commonly used supervised learning methods require large training datasets with pixel-level or bounding box annotations. Obtaining such fine-grained annotations is extremely costly, especially in the medical imaging domain. In this work, we propose a novel weakly supervised method for breast cancer localization. The essential advantage of our approach is that the model requires only image-level labels and uses a self-training strategy to refine the predicted localization step-wise. We evaluated our approach on a large, clinically relevant mammogram dataset. The results show that our model significantly improves performance compared with other methods trained in the same way.
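A toy stand-in for step-wise localization refinement: each pass keeps only the most confident part of a predicted heatmap and renormalizes. The paper's actual self-training retrains on pseudo-labels; this only illustrates the step-wise narrowing, and the quantile threshold is an assumption.

```python
import numpy as np

def refine_localization(heatmap: np.ndarray, steps: int = 3, q: float = 0.9):
    """Keep only the most confident region at each step and renormalize."""
    for _ in range(steps):
        thresh = np.quantile(heatmap, q)
        heatmap = np.where(heatmap >= thresh, heatmap, 0.0)
        if heatmap.max() > 0:
            heatmap = heatmap / heatmap.max()  # renormalize surviving region
    return heatmap

refined = refine_localization(np.random.rand(64, 64))
print((refined > 0).mean())  # fraction of pixels still flagged as suspicious
```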


Subject(s)
Breast Neoplasms; Breast Neoplasms/diagnostic imaging; Humans
13.
J Am Coll Radiol; 17(6): 796-803, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32068005

ABSTRACT

OBJECTIVES: The performance of recently developed deep learning models for image classification surpasses that of radiologists. However, questions remain about the consistency of model performance and its generalization to unseen external data. The purpose of this study is to determine whether the high performance of deep learning on mammograms transfers to external data with a different distribution. MATERIALS AND METHODS: Six deep learning models (three published high-performing models and three models designed by us) were evaluated on four mammogram data sets: three public (Digital Database for Screening Mammography, INbreast, and Mammographic Image Analysis Society) and one private (UKy). The models were trained and validated either on the Digital Database for Screening Mammography alone or on a combined data set that included it, and were then tested on the three external data sets. The area under the receiver operating characteristic curve (auROC) was used to evaluate model performance. RESULTS: The three published models reported validation auROC scores between 0.88 and 0.95 on the validation data set. Our models achieved auROC scores between 0.71 (95% confidence interval [CI]: 0.70-0.72) and 0.79 (95% CI: 0.78-0.80) on the same validation data set. However, on the three external test data sets, the performance of all six models decreased significantly, to auROC values between 0.44 (95% CI: 0.43-0.45) and 0.65 (95% CI: 0.64-0.66). CONCLUSION: Our results demonstrate performance inconsistency across data sets and models, indicating that the high performance of deep learning models on one data set cannot be readily transferred to unseen external data sets, and that these models need further assessment and validation before being applied in clinical practice.
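Confidence intervals on auROC like those reported above are commonly obtained by bootstrapping the test set; the sketch below shows that standard evaluation pattern, not necessarily the authors' exact procedure.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auroc(y_true, y_score, n_boot: int = 1000, seed: int = 0):
    """auROC with a bootstrap 95% CI."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(np.unique(y_true[idx])) < 2:
            continue                    # need both classes in a resample
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    return np.mean(aucs), np.percentile(aucs, [2.5, 97.5])

y = np.random.randint(0, 2, 500)        # synthetic labels and scores
auc, ci = bootstrap_auroc(y, np.random.rand(500))
print(f"auROC {auc:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```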


Subject(s)
Breast Neoplasms; Deep Learning; Breast Neoplasms/diagnostic imaging; Early Detection of Cancer; Female; Humans; Image Processing, Computer-Assisted; Mammography
14.
Comput Vis ECCV; 12535: 355-364, 2020 Aug.
Article in English | MEDLINE | ID: mdl-37283785

ABSTRACT

We propose applying a 2D CNN architecture to Alzheimer's disease classification from 3D MRI volumes. Training a 3D convolutional neural network (CNN) is time-consuming and computationally expensive. We use approximate rank pooling to transform the 3D MRI volume into a 2D image that serves as input to a 2D CNN. The proposed model achieves 9.5% better Alzheimer's disease classification accuracy than the baseline 3D models. We also show that our method allows for efficient training, requiring only 20% of the training time of 3D CNN models. The code is available online: https://github.com/UkyVision/alzheimer-project.
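A compact sketch of approximate rank pooling using the closed-form coefficients from Bilen et al.'s dynamic-image construction, which this line of work builds on; treating the MRI slice axis as the "temporal" axis is the assumption here, and the paper's exact variant may differ.

```python
import numpy as np

def approx_rank_pool(volume: np.ndarray) -> np.ndarray:
    """Collapse the slice axis (axis 0) into a single 2D image."""
    T = volume.shape[0]
    # Harmonic numbers H_0..H_T for the closed-form rank-pooling coefficients.
    H = np.concatenate([[0.0], np.cumsum(1.0 / np.arange(1, T + 1))])
    t = np.arange(1, T + 1)
    alpha = 2 * (T - t + 1) - (T + 1) * (H[T] - H[t - 1])
    return np.tensordot(alpha, volume, axes=1)

img2d = approx_rank_pool(np.random.rand(96, 128, 128))
print(img2d.shape)  # (128, 128) -- ready for a 2D CNN
```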

15.
Commun Biol; 3(1): 352, 2020 Jul 06.
Article in English | MEDLINE | ID: mdl-32632135

ABSTRACT

Clinical trials focusing on therapeutic candidates that modify β-amyloid (Aβ) have repeatedly failed to treat Alzheimer's disease (AD), suggesting that Aβ may not be the optimal target for treating AD. The evaluation of Aβ, tau, and neurodegeneration (A/T/N) biomarkers has been proposed for classifying AD. However, it remains unclear whether disturbances in each arm of the A/T/N framework contribute equally throughout the progression of AD. Here, using the random forest machine learning method to analyze participants in the Alzheimer's Disease Neuroimaging Initiative dataset, we show that A/T/N biomarkers vary in importance for predicting AD development: elevated Aβ and tau biomarkers better predict early dementia status, while biomarkers of neurodegeneration, especially glucose hypometabolism, better predict later dementia status. Our results suggest that AD treatments may also need to be disease-stage-oriented, with Aβ and tau as targets in early AD and glucose metabolism as a target in later AD.
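The analysis pattern is straightforward to reproduce in scikit-learn: fit a random forest on A/T/N biomarker columns and inspect feature importances. The data below are synthetic stand-ins, not ADNI values.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 3))   # synthetic A/T/N biomarker columns
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(300) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, imp in zip(["amyloid", "tau", "neurodegeneration"],
                     rf.feature_importances_):
    print(f"{name}: {imp:.3f}")  # the paper tracks how these shift with stage
```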


Subject(s)
Alzheimer Disease/pathology; Amyloid beta-Peptides/metabolism; Glucose/metabolism; tau Proteins/metabolism; Aged; Algorithms; Alzheimer Disease/diagnostic imaging; Alzheimer Disease/metabolism; Biomarkers/metabolism; Brain/diagnostic imaging; Brain/metabolism; Brain/pathology; Disease Progression; Female; Humans; Magnetic Resonance Imaging; Male; Mental Status and Dementia Tests; Neuroimaging; Positron-Emission Tomography
16.
IEEE Trans Nanobioscience; 18(3): 296-305, 2019 Jul.
Article in English | MEDLINE | ID: mdl-30990432

ABSTRACT

Rheumatoid arthritis (RA) is an autoimmune disease whose common manifestation is the slow destruction of joint tissue, damage that is visible on radiographs. Over time, this damage causes pain and loss of function, which depend, to some extent, on the spatial deformation induced by the joint damage. Building an accurate model of the current deformation and predicting potential future deformations are important components of treatment planning. Unfortunately, this is currently a time-consuming and labor-intensive manual process. To address this problem, we propose a fully automated approach for fitting a shape model to the long bones of the hand from a single radiograph. Critically, our shape model is flexible enough to be useful for patients at various stages of RA. Our approach uses a deep convolutional neural network to extract low-level features and a conditional random field (CRF) to support shape inference, and it is significantly more accurate than previous work that used hand-engineered features. We provide a comprehensive evaluation of various choices of network hyperparameters, as current best practices are significantly lacking in this domain. We evaluate the accuracy of our pipeline on two large datasets of hand radiographs and highlight the importance of the low-level features, the relative contribution of the different potential functions in the CRF, and the accuracy of the final shape estimates. Our approach is nearly as accurate as a trained radiologist and, because it requires only a few seconds per radiograph, can be applied to large datasets to enable better modeling of disease progression.
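The abstract does not describe the CRF's structure, so the sketch below solves a toy chain-structured model — CNN-style unary costs plus a quadratic pairwise smoothness term — by min-sum dynamic programming, just to illustrate how a CRF supports shape inference; the chain topology and cost forms are assumptions.

```python
import numpy as np

def fit_chain(unary: np.ndarray, smooth: float = 1.0):
    """Min-sum dynamic programming over a chain of landmarks.

    unary[i, k]: cost (e.g., from CNN features) of placing landmark i at
    candidate position k; the pairwise term penalizes jumps between neighbors.
    """
    L, K = unary.shape
    pos = np.arange(K, dtype=float)
    pair = smooth * (pos[:, None] - pos[None, :]) ** 2  # prev x current
    cost = unary[0].copy()
    back = np.zeros((L, K), dtype=int)
    for i in range(1, L):
        total = cost[:, None] + pair
        back[i] = total.argmin(axis=0)
        cost = total.min(axis=0) + unary[i]
    path = [int(cost.argmin())]
    for i in range(L - 1, 0, -1):       # backtrack the best assignment
        path.append(int(back[i, path[-1]]))
    return path[::-1]

print(fit_chain(np.random.rand(5, 10)))  # one candidate index per landmark
```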


Subject(s)
Hand Bones/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Radiography/methods; Algorithms; Arthritis, Rheumatoid/diagnostic imaging; Databases, Factual; Humans; Neural Networks, Computer