Results 1 - 4 of 4
1.
J Imaging Inform Med ; 37(1): 412-427, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38343221

ABSTRACT

This paper presents a fully automated pipeline using a sparse convolutional autoencoder for quality control (QC) of affine registrations in large-scale T1-weighted (T1w) and T2-weighted (T2w) magnetic resonance imaging (MRI) studies. A customized 3D convolutional encoder-decoder (autoencoder) framework is proposed, and the network is trained in a fully unsupervised manner. For cross-validating the proposed model, we used 1000 correctly aligned MRI images from the Human Connectome Project Young Adult (HCP-YA) dataset. We propose that the quality of a registration is reflected in the reconstruction error of the autoencoder. Further, to make the method applicable to unseen datasets, we propose a dataset-specific optimal-threshold calculation (based on the reconstruction error) via ROC analysis, which requires a subset of correctly aligned images and artificially generated misalignments specific to that dataset. The calculated optimal threshold is then used to assess the quality of the remaining affine registrations from the corresponding dataset. The proposed framework was tested on four unseen datasets: the Autism Brain Imaging Data Exchange (ABIDE I, 215 subjects), Information eXtraction from Images (IXI, 577 subjects), the Open Access Series of Imaging Studies (OASIS4, 646 subjects), and the "Food and Brain" study (77 subjects). The framework achieved excellent performance for T1w and T2w affine registrations, with an accuracy of 100% for HCP-YA. On the four unseen datasets, it obtained accuracies of 81.81% for ABIDE I (T1w only), 93.45% (T1w) and 81.75% (T2w) for OASIS4, 92.59% for the "Food and Brain" study (T1w only), and 88-97% for IXI (both T1w and T2w, stratified by scanner vendor and magnetic field strength). Moreover, real failures from the "Food and Brain" and OASIS4 datasets were detected with sensitivities of 100% (T1w) and 80% (T2w), respectively. In addition, AUCs of > 0.88 were obtained in all scenarios during threshold calculation on the four test sets.
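The dataset-specific threshold selection described in the abstract could be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes reconstruction errors are already computed, labels mark artificially generated misalignments, and the ROC-optimal point is taken as the threshold maximizing Youden's J; all function names are hypothetical.

```python
def optimal_threshold(errors, labels):
    """Pick the threshold maximizing Youden's J = sensitivity + specificity - 1.

    errors: autoencoder reconstruction errors (assumed higher = worse alignment)
    labels: 1 for a (generated) misalignment, 0 for a correctly aligned image
    """
    pos = sum(labels)              # number of misaligned examples
    neg = len(labels) - pos        # number of correctly aligned examples
    best_t, best_j = None, -1.0
    for t in sorted(set(errors)):  # each observed error is a candidate threshold
        tp = sum(1 for e, y in zip(errors, labels) if e >= t and y == 1)
        fp = sum(1 for e, y in zip(errors, labels) if e >= t and y == 0)
        sens = tp / pos
        spec = 1 - fp / neg
        j = sens + spec - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t


def flag_misaligned(error, threshold):
    """QC decision for a new registration: flag if its error meets the threshold."""
    return error >= threshold
```

The threshold is calibrated once per dataset on the labeled subset and then applied to the remaining registrations of that dataset.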

2.
Curr Med Imaging ; 20: 1-14, 2024.
Article in English | MEDLINE | ID: mdl-38389342

ABSTRACT

Computed tomography (CT) scans are widely used to diagnose lung conditions due to their ability to provide a detailed overview of the body's respiratory system. Despite this popularity, visual examination of CT scan images can lead to misinterpretations that impede a timely diagnosis. Automated evaluation of images for disease detection also remains a challenge. As a result, there is significant demand for more advanced systems that can accurately classify lung diseases from CT scan images. In this work, we provide an extensive analysis of different approaches and their performance that can help young researchers build more advanced systems. First, we briefly introduce diagnosis and treatment procedures for various lung diseases. Then, a brief description of existing methods used for the classification of lung diseases is presented. Later, an overview of the general procedure for lung disease classification using machine learning (ML) is provided, followed by a review of recent progress in ML-based classification of lung diseases. Finally, existing challenges in ML techniques are presented. It is concluded that deep learning techniques have revolutionized the early identification of lung disorders. We expect that this work will equip medical professionals with the awareness they require to recognize and classify certain medical disorders.


Subjects
Deep Learning , Lung Diseases , Tomography, X-Ray Computed , Humans , Lung/diagnostic imaging , Lung Neoplasms/diagnosis , Machine Learning , Tomography, X-Ray Computed/methods , Lung Diseases/classification , Lung Diseases/diagnostic imaging
3.
Diagnostics (Basel) ; 13(4)2023 Feb 08.
Article in English | MEDLINE | ID: mdl-36832110

ABSTRACT

Diabetic retinopathy (DR) is one of the major complications caused by diabetes and is usually identified from retinal fundus images. Screening for DR from digital fundus images can be time-consuming and error-prone for ophthalmologists. For efficient DR screening, good fundus image quality is essential, as it reduces diagnostic errors. Hence, in this work, an automated method for quality estimation (QE) of digital fundus images using an ensemble of recent state-of-the-art EfficientNetV2 deep neural network models is proposed. The ensemble method was cross-validated and tested on one of the largest openly available datasets, the Deep Diabetic Retinopathy Image Dataset (DeepDRiD). We obtained a test accuracy of 75% for QE, outperforming the existing methods on DeepDRiD. Hence, the proposed ensemble method may be a potential tool for automated QE of fundus images and a handy aid for ophthalmologists.
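The combination step of such a model ensemble can be sketched in a few lines. This is a generic soft-voting illustration under assumed inputs (per-model class-probability vectors), not the paper's implementation; the function name is hypothetical.

```python
def ensemble_predict(prob_lists):
    """Soft voting: average class probabilities across models, return argmax class.

    prob_lists: one probability vector per model, e.g. from several
    independently trained networks, all over the same set of classes.
    """
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)
```

Averaging probabilities (rather than hard votes) lets a confident model outweigh several uncertain ones.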

4.
Comput Biol Med ; 139: 104997, 2021 12.
Article in English | MEDLINE | ID: mdl-34753079

ABSTRACT

BACKGROUND: Magnetic resonance imaging (MRI)-based morphometry and relaxometry are proven methods for the structural assessment of the human brain in several neurological disorders. These procedures are generally based on T1-weighted (T1w) and/or T2-weighted (T2w) MRI scans, and rigid and affine registrations to a standard template are essential steps in such studies. Therefore, fully automatic quality control (QC) of these registrations is necessary in big-data scenarios to ensure that they are suitable for subsequent processing. METHOD: A supervised machine learning (ML) framework is proposed in which similarity metrics such as normalized cross-correlation, normalized mutual information, and correlation ratio are computed locally and used as candidate features for cross-validation and testing of different ML classifiers. For 5-fold repeated stratified grid-search cross-validation, 400 correctly aligned and 2000 randomly generated misaligned images from the Human Connectome Project Young Adult (HCP-YA) dataset were used. To test the cross-validated models, datasets from the Autism Brain Imaging Data Exchange (ABIDE I) and Information eXtraction from Images (IXI) were used. RESULTS: The ensemble classifiers random forest and AdaBoost yielded the best performance, with F1-scores, balanced accuracies, and Matthews correlation coefficients in the range 0.95-1.00 during cross-validation. Predictive accuracies reached 0.99 on Test set #1 (ABIDE I), and 0.99 without and 0.96 with noise on Test set #2 (IXI, stratified by scanner vendor and field strength). CONCLUSIONS: The cross-validated and tested ML models could be used for QC of both T1w and T2w rigid and affine registrations in large-scale MRI studies.
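The feature-extraction idea in the METHOD section, computing a similarity metric locally over patches rather than one global value, could be sketched as follows. This is a minimal 1D illustration with normalized cross-correlation only, assuming flattened intensity arrays; function names, the patch size, and the non-overlapping patch scheme are assumptions, not the paper's exact design.

```python
import math


def ncc(a, b):
    """Normalized cross-correlation between two equal-length intensity patches."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0


def local_ncc_features(img, tmpl, patch=4):
    """Split a registered image and the template into non-overlapping patches
    and return the per-patch NCC values as a feature vector for a classifier."""
    feats = []
    for i in range(0, len(img) - patch + 1, patch):
        feats.append(ncc(img[i:i + patch], tmpl[i:i + patch]))
    return feats
```

A well-aligned image yields uniformly high per-patch values, while a misregistration degrades some patches more than others; the resulting feature vector is what a classifier such as random forest or AdaBoost would consume.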


Subjects
Big Data , Magnetic Resonance Imaging , Brain/diagnostic imaging , Humans , Image Processing, Computer-Assisted , Quality Control , Supervised Machine Learning , Young Adult