Results 1 - 20 of 34,271
1.
Radiol Cardiothorac Imaging ; 6(3): e230177, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38722232

ABSTRACT

Purpose To develop a deep learning model for increasing cardiac cine frame rate while maintaining spatial resolution and scan time. Materials and Methods A transformer-based model was trained and tested on a retrospective sample of cine images from 5840 patients (mean age, 55 years ± 19 [SD]; 3527 male patients) referred for clinical cardiac MRI from 2003 to 2021 at nine centers; images were acquired using 1.5- and 3-T scanners from three vendors. Data from three centers were used for training and testing (4:1 ratio). The remaining data were used for external testing. Cines with downsampled frame rates were restored using linear, bicubic, and model-based interpolation. The root mean square error between interpolated and original cine images was modeled using ordinary least squares regression. In a prospective study of 49 participants referred for clinical cardiac MRI (mean age, 56 years ± 13; 25 male participants) and 12 healthy participants (mean age, 51 years ± 16; eight male participants), the model was applied to cines acquired at 25 frames per second (fps), thereby doubling the frame rate, and these interpolated cines were compared with actual 50-fps cines. The preference of two readers based on perceived temporal smoothness and image quality was evaluated using a noninferiority margin of 10%. Results The model generated artifact-free interpolated images. Ordinary least squares regression analysis accounting for vendor and field strength showed lower error (P < .001) with model-based interpolation compared with linear and bicubic interpolation in internal and external test sets. The highest proportion of reader choices was "no preference" (84 of 122) between actual and interpolated 50-fps cines. The 90% CI for the difference between reader proportions favoring collected (15 of 122) and interpolated (23 of 122) high-frame-rate cines was -0.01 to 0.14, indicating noninferiority. 
Conclusion A transformer-based deep learning model increased cardiac cine frame rates while preserving both spatial resolution and scan time, resulting in images with quality comparable to that of images obtained at actual high frame rates. Keywords: Functional MRI, Heart, Cardiac, Deep Learning, High Frame Rate Supplemental material is available for this article. © RSNA, 2024.
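The reported noninferiority claim can be checked from the counts alone. As a sketch (assuming a simple unpaired Wald interval, which reproduces the reported bounds; the authors may have used a paired method), the 90% CI for the difference between reader preference proportions is:

```python
from math import sqrt

def wald_ci(k1, k2, n, z=1.645):
    """90% Wald CI for the difference of two proportions k2/n - k1/n."""
    p1, p2 = k1 / n, k2 / n
    se = sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    d = p2 - p1
    return d - z * se, d + z * se

# Reader preferences: 15/122 favored acquired 50-fps cines, 23/122 the interpolated ones.
lo, hi = wald_ci(15, 23, 122)
print(round(lo, 2), round(hi, 2))  # -0.01 0.14, matching the reported CI
```

Since the lower bound (-0.01) stays above the -0.10 noninferiority margin, the interpolated cines are noninferior under this reading.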


Subjects
Deep Learning , Magnetic Resonance Imaging, Cine , Humans , Male , Magnetic Resonance Imaging, Cine/methods , Middle Aged , Female , Prospective Studies , Retrospective Studies , Heart/diagnostic imaging , Image Interpretation, Computer-Assisted/methods
2.
BMC Med Imaging ; 24(1): 107, 2024 May 11.
Article in English | MEDLINE | ID: mdl-38734629

ABSTRACT

This study addresses the critical challenge of detecting brain tumors using MRI images, a pivotal task in medical diagnostics that demands high accuracy and interpretability. While deep learning has shown remarkable success in medical image analysis, there remains a substantial need for models that are not only accurate but also interpretable to healthcare professionals. The existing methodologies, predominantly deep learning-based, often act as black boxes, providing little insight into their decision-making process. This research introduces an integrated approach using ResNet50, a deep learning model, combined with Gradient-weighted Class Activation Mapping (Grad-CAM) to offer a transparent and explainable framework for brain tumor detection. We employed a dataset of MRI images, enhanced through data augmentation, to train and validate our model. The results demonstrate a significant improvement in model performance, with a testing accuracy of 98.52% and precision-recall metrics exceeding 98%, showcasing the model's effectiveness in distinguishing tumor presence. The application of Grad-CAM provides insightful visual explanations, illustrating the model's focus areas in making predictions. This fusion of high accuracy and explainability holds profound implications for medical diagnostics, offering a pathway towards more reliable and interpretable brain tumor detection tools.
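Grad-CAM itself reduces to a short computation once the last convolutional layer's activations and their gradients with respect to the class score are available. A minimal NumPy sketch (the array shapes are hypothetical; a real pipeline would pull these tensors from the trained ResNet50 via framework hooks):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM: weight each feature map by its spatially averaged gradient,
    sum over channels, then ReLU. Inputs are (K, H, W) arrays taken at the
    last convolutional layer for the target class score."""
    weights = gradients.mean(axis=(1, 2))             # alpha_k: global-average-pooled grads
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over the K channels
    cam = np.maximum(cam, 0)                          # ReLU keeps positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1] for overlay
    return cam

rng = np.random.default_rng(0)
acts, grads = rng.random((512, 7, 7)), rng.random((512, 7, 7))
heatmap = grad_cam(acts, grads)  # would be upsampled to the MRI slice size before overlay
```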


Subjects
Brain Neoplasms , Deep Learning , Magnetic Resonance Imaging , Humans , Brain Neoplasms/diagnostic imaging , Magnetic Resonance Imaging/methods , Image Interpretation, Computer-Assisted/methods
3.
Radiology ; 311(2): e230750, 2024 May.
Article in English | MEDLINE | ID: mdl-38713024

ABSTRACT

Background Multiparametric MRI (mpMRI) improves prostate cancer (PCa) detection compared with systematic biopsy, but its interpretation is prone to interreader variation, which results in performance inconsistency. Artificial intelligence (AI) models can assist in mpMRI interpretation, but large training data sets and extensive model testing are required. Purpose To evaluate a biparametric MRI AI algorithm for intraprostatic lesion detection and segmentation and to compare its performance with radiologist readings and biopsy results. Materials and Methods This secondary analysis of a prospective registry included consecutive patients with suspected or known PCa who underwent mpMRI, US-guided systematic biopsy, or combined systematic and MRI/US fusion-guided biopsy between April 2019 and September 2022. All lesions were prospectively evaluated using Prostate Imaging Reporting and Data System version 2.1. The lesion- and participant-level performance of a previously developed cascaded deep learning algorithm was compared with histopathologic outcomes and radiologist readings using sensitivity, positive predictive value (PPV), and Dice similarity coefficient (DSC). Results A total of 658 male participants (median age, 67 years [IQR, 61-71 years]) with 1029 MRI-visible lesions were included. At histopathologic analysis, 45% (294 of 658) of participants had lesions of International Society of Urological Pathology (ISUP) grade group (GG) 2 or higher. The algorithm identified 96% (282 of 294; 95% CI: 94%, 98%) of all participants with clinically significant PCa, whereas the radiologist identified 98% (287 of 294; 95% CI: 96%, 99%; P = .23). The algorithm identified 84% (103 of 122), 96% (152 of 159), 96% (47 of 49), 95% (38 of 40), and 98% (45 of 46) of participants with ISUP GG 1, 2, 3, 4, and 5 lesions, respectively. 
In the lesion-level analysis using radiologist ground truth, the detection sensitivity was 55% (569 of 1029; 95% CI: 52%, 58%), and the PPV was 57% (535 of 934; 95% CI: 54%, 61%). The mean number of false-positive lesions per participant was 0.61 (range, 0-3). The lesion segmentation DSC was 0.29. Conclusion The AI algorithm detected cancer-suspicious lesions on biparametric MRI scans with a performance comparable to that of an experienced radiologist. Moreover, the algorithm reliably predicted clinically significant lesions at histopathologic examination. ClinicalTrials.gov Identifier: NCT03354416 © RSNA, 2024 Supplemental material is available for this article.
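The lesion-level metrics here are simple ratios of the reported counts, and the Dice similarity coefficient is set overlap between predicted and reference lesion voxels. A small sketch verifying the reported percentages (the `dice` helper is illustrative, not the study's implementation):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks given as voxel sets."""
    inter = len(pred & truth)
    return 2 * inter / (len(pred) + len(truth))

# Lesion-level counts reported in the study (radiologist ground truth):
sensitivity = 569 / 1029   # detected lesions / all radiologist-identified lesions
ppv = 535 / 934            # true-positive detections / all algorithm detections
print(f"{sensitivity:.0%} {ppv:.0%}")  # 55% 57%

# Toy masks: 2 shared voxels out of 3 each -> DSC = 2/3
print(dice({1, 2, 3}, {2, 3, 4}))
```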


Subjects
Deep Learning , Multiparametric Magnetic Resonance Imaging , Prostatic Neoplasms , Male , Humans , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Aged , Prospective Studies , Multiparametric Magnetic Resonance Imaging/methods , Middle Aged , Algorithms , Prostate/diagnostic imaging , Prostate/pathology , Image-Guided Biopsy/methods , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods
4.
Comput Biol Med ; 175: 108412, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38691914

ABSTRACT

Brain tumor segmentation and classification play a crucial role in the diagnosis and treatment planning of brain tumors. Accurate and efficient methods for identifying tumor regions and classifying tumor types are essential for guiding medical interventions. This study comprehensively reviews brain tumor segmentation and classification techniques, exploring approaches based on image processing, machine learning, and deep learning. We review existing methodologies, discuss their advantages and limitations, and highlight recent advancements in the field. The impact of existing segmentation and classification techniques on automated brain tumor detection is also critically examined using open-source datasets of magnetic resonance images (MRI) of different modalities. Moreover, we highlight the challenges posed by these techniques and by datasets spanning various MRI modalities, to enable researchers to develop innovative and robust solutions for automated brain tumor detection. The results of this study contribute to the development of automated, robust solutions for analyzing brain tumors, ultimately aiding medical professionals in making informed decisions and providing better patient care.


Subjects
Brain Neoplasms , Magnetic Resonance Imaging , Humans , Brain Neoplasms/diagnostic imaging , Magnetic Resonance Imaging/methods , Deep Learning , Image Interpretation, Computer-Assisted/methods , Brain/diagnostic imaging , Machine Learning , Image Processing, Computer-Assisted/methods , Neuroimaging/methods
5.
Comput Biol Med ; 175: 108459, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38701588

ABSTRACT

Diabetic retinopathy (DR) is the most common diabetic complication, usually leading to retinal damage, vision loss, and even blindness. A computer-aided DR grading system can substantially help ophthalmologists with rapid screening and diagnosis. Recent advances in fundus photography have precipitated the development of novel retinal imaging cameras and their subsequent implementation in clinical practice. However, most deep learning-based algorithms for DR grading demonstrate limited generalization across domains. This inferior performance stems from variance in imaging protocols and devices inducing domain shifts. We posit that declining model performance between domains arises from learning spurious correlations in the data. Incorporating do-operations from causality analysis into model architectures may mitigate this issue and improve generalizability. Specifically, a novel universal structural causal model (SCM) was proposed to analyze spurious correlations in fundus imaging. Building on this, a causality-inspired diabetic retinopathy grading framework named CauDR was developed to eliminate spurious correlations and achieve more generalizable DR diagnostics. Furthermore, existing datasets were reorganized into a 4DR benchmark for the domain generalization (DG) scenario. Results demonstrate the effectiveness and state-of-the-art (SOTA) performance of CauDR.


Subjects
Diabetic Retinopathy , Diabetic Retinopathy/diagnostic imaging , Diabetic Retinopathy/diagnosis , Humans , Fundus Oculi , Algorithms , Deep Learning , Image Interpretation, Computer-Assisted/methods
6.
Comput Biol Med ; 175: 108523, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38701591

ABSTRACT

Diabetic retinopathy is considered one of the most common diseases that can lead to blindness at working age, and the chance of developing it increases the longer a person has diabetes. Protecting the patient's sight, or decelerating the evolution of this disease, depends on its early detection and on identifying the exact level of the pathology, which is done manually by ophthalmologists. This manual process demands considerable time and expertise from an ophthalmologist, making the development of an automated method to aid in the diagnosis of diabetic retinopathy an essential and urgent need. In this paper, we propose a new hybrid deep learning method, based on a fine-tuned vision transformer and a modified capsule network, for automatic prediction of diabetic retinopathy severity level. The proposed approach applies a range of computer vision operations in the preprocessing step, including the power-law transformation technique and the contrast-limited adaptive histogram equalization technique, while the classification step builds on the fine-tuned vision transformer combined with the modified capsule network and a classification head. The effectiveness of our approach was evaluated on four datasets (APTOS, Messidor-2, DDR, and EyePACS) for the task of diabetic retinopathy severity grading. We attained test accuracies of 88.18%, 87.78%, 80.36%, and 78.64% on the four datasets, respectively, outperforming the state of the art.
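The power-law (gamma) step of the preprocessing is a one-line pixel transform, s = c * r^gamma. A minimal NumPy sketch (the patch values are hypothetical; in practice the contrast-limited adaptive histogram equalization would follow, e.g. via OpenCV's `cv2.createCLAHE`):

```python
import numpy as np

def power_law(img, gamma, c=1.0):
    """Power-law (gamma) transform s = c * r**gamma on an 8-bit image."""
    r = img.astype(np.float64) / 255.0        # normalize to [0, 1]
    return np.clip(255.0 * c * r ** gamma, 0, 255).astype(np.uint8)

dark = np.full((4, 4), 64, dtype=np.uint8)    # a hypothetical dim fundus patch
print(power_law(dark, 0.5)[0, 0])  # gamma < 1 brightens: 64 -> 127
```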


Subjects
Deep Learning , Diabetic Retinopathy , Diabetic Retinopathy/diagnostic imaging , Humans , Neural Networks, Computer , Databases, Factual , Image Interpretation, Computer-Assisted/methods , Algorithms
7.
Curr Med Imaging ; 20(1): e15734056269264, 2024.
Article in English | MEDLINE | ID: mdl-38766836

ABSTRACT

BACKGROUND: Restoring a high-resolution image from a single low-resolution image is an ill-posed inverse problem that remains difficult to solve. Natural photographs capture a wide variety of objects and textures, each with its own characteristics, most notably in the high-frequency components, and these qualities can be used to distinguish image regions from one another. OBJECTIVE: The goal is to develop an automated approach to identify thyroid nodules on ultrasound images. This research aims to accurately differentiate thyroid nodules using deep learning and to evaluate the effectiveness of different localization techniques. METHODS: The method reconstructs a single super-resolution image based on segmentation and classification. The poor-quality ultrasound image is divided into several parts, and the most applicable class is chosen for each component. Pairs of high- and low-resolution images belonging to the same class are found and used to estimate the high-resolution version of each segment. A deep learning classifier trained with the Adam optimizer is used to identify carcinoid tumors within thyroid nodules. Measures such as localization accuracy, sensitivity, specificity, Dice loss, ROC, and area under the curve (AUC) are used to evaluate the effectiveness of the techniques. RESULTS: The results of the proposed method are superior, both statistically and qualitatively, to those of recent state-of-the-art methods. The developed automated approach shows promising results in accurately identifying thyroid nodules on ultrasound images. CONCLUSION: This research demonstrates an automated approach for identifying thyroid nodules in ultrasound images using single-image super-resolution reconstruction and deep learning. The results indicate that the proposed method surpasses the latest techniques in accuracy and quality. This research contributes to the advancement of medical imaging and holds the potential to improve the diagnosis and treatment of thyroid nodules.



Subjects
Deep Learning , Thyroid Nodule , Ultrasonography , Humans , Thyroid Nodule/diagnostic imaging , Ultrasonography/methods , Thyroid Gland/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods
8.
BMC Med Imaging ; 24(1): 118, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38773391

ABSTRACT

Brain tumor diagnosis using MRI scans poses significant challenges due to the complex nature of tumor appearances and variations. Traditional methods often require extensive manual intervention and are prone to human error, leading to misdiagnosis and delayed treatment. Current approaches primarily include manual examination by radiologists and conventional machine learning techniques. These methods rely heavily on feature extraction and classification algorithms, which may not capture the intricate patterns present in brain MRI images. Conventional techniques often suffer from limited accuracy and generalizability, mainly due to the high variability in tumor appearance and the subjective nature of manual interpretation. Additionally, traditional machine learning models may struggle with the high-dimensional data inherent in MRI images. To address these limitations, our research introduces a deep learning-based model utilizing convolutional neural networks (CNNs). Our model employs a sequential CNN architecture with multiple convolutional, max-pooling, and dropout layers, followed by dense layers for classification. The proposed model demonstrates a significant improvement in diagnostic accuracy, achieving an overall accuracy of 98% on the test dataset. Precision, recall, and F1-scores ranging from 97% to 98%, with ROC-AUC values from 99% to 100% for each tumor category, further substantiate the model's effectiveness. Additionally, Grad-CAM visualizations provide insights into the model's decision-making process, enhancing interpretability. This research addresses the pressing need for enhanced diagnostic accuracy in identifying brain tumors through MRI imaging, tackling challenges such as variability in tumor appearance and the need for rapid, reliable diagnostic tools.
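The spatial arithmetic of such a sequential conv/max-pool stack is easy to trace by hand. A sketch under assumed hyperparameters (224-pixel input, three 3x3 valid convolutions each followed by 2x2 pooling; the paper's exact configuration is not given in the abstract):

```python
def conv2d_out(size, kernel, stride=1, pad=0):
    """Output spatial size of a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

size = 224                       # hypothetical input MRI slice, 224x224
for kernel in (3, 3, 3):         # three conv(3x3) + maxpool(2x2) blocks
    size = conv2d_out(size, kernel)        # valid convolution
    size = conv2d_out(size, 2, stride=2)   # 2x2 max pooling halves the grid
print(size)  # spatial size fed (after flattening) to the dense layers: 26
```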


Subjects
Brain Neoplasms , Deep Learning , Magnetic Resonance Imaging , Neural Networks, Computer , Humans , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/classification , Magnetic Resonance Imaging/methods , Algorithms , Image Interpretation, Computer-Assisted/methods , Male , Female
9.
BMC Med Imaging ; 24(1): 110, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38750436

ABSTRACT

Brain tumor classification using MRI images is a crucial yet challenging task in medical imaging. Accurate diagnosis is vital for effective treatment planning but is often hindered by the complex nature of tumor morphology and variations in imaging. Traditional methodologies primarily rely on manual interpretation of MRI images, supplemented by conventional machine learning techniques. These approaches often lack the robustness and scalability needed for precise and automated tumor classification. The major limitations include a high degree of manual intervention, potential for human error, limited ability to handle large datasets, and lack of generalizability to diverse tumor types and imaging conditions. To address these challenges, we propose a federated learning-based deep learning model that leverages the power of Convolutional Neural Networks (CNN) for automated and accurate brain tumor classification. This innovative approach not only emphasizes the use of a modified VGG16 architecture optimized for brain MRI images but also highlights the significance of federated learning and transfer learning in the medical imaging domain. Federated learning enables decentralized model training across multiple clients without compromising data privacy, addressing the critical need for confidentiality in medical data handling. This model architecture benefits from the transfer learning technique by utilizing a pre-trained CNN, which significantly enhances its ability to classify brain tumors accurately by leveraging knowledge gained from vast and diverse datasets. Our model is trained on a diverse dataset combining the figshare, SARTAJ, and Br35H datasets, employing a federated learning approach for decentralized, privacy-preserving model training. The adoption of transfer learning further bolsters the model's performance, making it adept at handling the intricate variations in MRI images associated with different types of brain tumors.
The model demonstrates high precision (0.99 for glioma, 0.95 for meningioma, 1.00 for no tumor, and 0.98 for pituitary), recall, and F1-scores in classification, outperforming existing methods. The overall accuracy stands at 98%, showcasing the model's efficacy in classifying various tumor types accurately, thus highlighting the transformative potential of federated learning and transfer learning in enhancing brain tumor classification using MRI images.
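The federated aggregation step can be sketched as classic FedAvg: the server averages client model parameters weighted by local dataset size, so raw images never leave a site. The arrays and site sizes below are hypothetical:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: aggregate client models weighted by local
    dataset size, without the server ever seeing raw data."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Hypothetical final-layer weights from three sites (e.g. figshare, SARTAJ, Br35H)
w = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
n = [100, 100, 200]
print(fedavg(w, n))  # [3.5 4.5]
```

In a full round, each client would first take local gradient steps from the current global model before this aggregation.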


Subjects
Brain Neoplasms , Deep Learning , Magnetic Resonance Imaging , Humans , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/classification , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Machine Learning , Image Interpretation, Computer-Assisted/methods
10.
Artif Intell Med ; 152: 102872, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38701636

ABSTRACT

Accurately measuring the evolution of Multiple Sclerosis (MS) with magnetic resonance imaging (MRI) critically informs understanding of disease progression and helps to direct therapeutic strategy. Deep learning models have shown promise for automatically segmenting MS lesions, but the scarcity of accurately annotated data hinders progress in this area. Obtaining sufficient data from a single clinical site is challenging and does not address the heterogeneous need for model robustness. Conversely, the collection of data from multiple sites introduces data privacy concerns and potential label noise due to varying annotation standards. To address this dilemma, we explore the use of the federated learning framework while considering label noise. Our approach enables collaboration among multiple clinical sites without compromising data privacy under a federated learning paradigm that incorporates a noise-robust training strategy based on label correction. Specifically, we introduce a Decoupled Hard Label Correction (DHLC) strategy that considers the imbalanced distribution and fuzzy boundaries of MS lesions, enabling the correction of false annotations based on prediction confidence. We also introduce a Centrally Enhanced Label Correction (CELC) strategy, which leverages the aggregated central model as a correction teacher for all sites, enhancing the reliability of the correction process. Extensive experiments conducted on two multi-site datasets demonstrate the effectiveness and robustness of our proposed methods, indicating their potential for clinical applications in multi-site collaborations to train better deep learning models with lower cost in data collection and annotation.
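The confidence-based correction idea can be sketched in a few lines. This is a deliberate simplification of DHLC (which additionally accounts for class imbalance and fuzzy lesion boundaries); the probabilities and labels below are hypothetical:

```python
import numpy as np

def correct_labels(probs, labels, threshold=0.9):
    """Simplified confidence-based label correction: flip a voxel's
    annotation only when the model contradicts it with high confidence.
    probs: predicted lesion probability per voxel; labels: 0/1 annotations."""
    labels = labels.copy()
    labels[(probs > threshold) & (labels == 0)] = 1       # confidently missed lesion
    labels[(probs < 1 - threshold) & (labels == 1)] = 0   # confidently false annotation
    return labels

probs = np.array([0.95, 0.5, 0.02, 0.8])
labels = np.array([0, 0, 1, 1])
print(correct_labels(probs, labels))  # [1 0 0 1]: uncertain voxels are left alone
```

In the CELC variant described above, the aggregated central model would supply `probs` as a correction teacher for every site.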


Subjects
Deep Learning , Magnetic Resonance Imaging , Multiple Sclerosis , Multiple Sclerosis/diagnostic imaging , Humans , Magnetic Resonance Imaging/methods , Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods
11.
Comput Methods Programs Biomed ; 250: 108205, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38703435

ABSTRACT

The pancreas is a vital organ of the digestive system with significant health implications. It is imperative to evaluate and identify malignant pancreatic lesions promptly in light of the high mortality rate linked to such malignancies. Endoscopic ultrasound (EUS) is a non-invasive, precise technique for detecting pancreatic disorders, but it is highly operator-dependent. Artificial intelligence (AI), including traditional machine learning (ML) and deep learning (DL) techniques, can play a pivotal role in enhancing the performance of EUS regardless of operator. AI performs a critical function in the detection, classification, and segmentation of medical images. The utilization of AI-assisted systems has improved the accuracy and productivity of pancreatic analysis, including the detection of diverse pancreatic disorders (e.g., pancreatitis, masses, and cysts) as well as landmarks and parenchyma. This systematic review examines the rapidly developing domain of AI-assisted systems in EUS of the pancreas. Its objective is to present a thorough account of the current research status and developments in this area. The paper explores the significant challenges of AI-assisted systems in pancreatic EUS imaging, highlights the potential of AI techniques in addressing these challenges, and suggests the scope for future research on AI-assisted EUS systems.


Subjects
Artificial Intelligence , Endosonography , Pancreas , Humans , Endosonography/methods , Pancreas/diagnostic imaging , Machine Learning , Deep Learning , Pancreatic Neoplasms/diagnostic imaging , Pancreatic Diseases/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods
12.
Comput Biol Med ; 175: 108549, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38704901

ABSTRACT

In this paper, we propose a multi-task learning (MTL) network based on label-level fusion of metadata and hand-crafted features, using unsupervised clustering to generate new cluster labels as an optimization goal. We propose an MTL module (MTLM) that incorporates an attention mechanism to enable the model to learn richer, more integrated information. We also propose a dynamic strategy that adjusts the loss weights of the different tasks, trading off the contributions of the multiple branches. Instead of feature-level fusion, we adopt label-level fusion and combine the results of the proposed MTLM with those of the image classification network to achieve better lesion prediction on multiple dermatological datasets. We verify the effectiveness of the proposed model with quantitative and qualitative measures. The MTL network, using multi-modal cues and label-level fusion, yields a significant performance improvement for skin lesion classification.
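A dynamic task-weighting strategy of the kind described can be illustrated with a softmax over the current task losses, so that no single branch dominates training. This is an illustrative stand-in, not the paper's exact scheme:

```python
import numpy as np

def dynamic_weights(task_losses, temperature=1.0):
    """Illustrative dynamic loss weighting: softmax over current task losses
    gives larger weight to harder (higher-loss) branches this step."""
    losses = np.asarray(task_losses, dtype=float) / temperature
    exp = np.exp(losses - losses.max())   # numerically stable softmax
    return exp / exp.sum()

# Hypothetical current losses: image classification, metadata, hand-crafted branches
w = dynamic_weights([0.9, 0.3, 0.3])
total = w @ np.array([0.9, 0.3, 0.3])     # the weighted loss actually backpropagated
print(w.round(3))
```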


Subjects
Skin , Humans , Skin/diagnostic imaging , Skin/pathology , Image Interpretation, Computer-Assisted/methods , Machine Learning , Skin Neoplasms/diagnostic imaging , Skin Neoplasms/pathology , Neural Networks, Computer , Algorithms , Skin Diseases/diagnostic imaging
13.
Skin Res Technol ; 30(5): e13607, 2024 May.
Article in English | MEDLINE | ID: mdl-38742379

ABSTRACT

BACKGROUND: Timely diagnosis plays a critical role in determining melanoma prognosis, prompting the development of deep learning models to aid clinicians. Questions persist regarding the efficacy of clinical images alone or in conjunction with dermoscopy images for model training. This study aims to compare the classification performance for melanoma of three types of CNN models: those trained on clinical images, dermoscopy images, and a combination of paired clinical and dermoscopy images from the same lesion. MATERIALS AND METHODS: We divided 914 image pairs into training, validation, and test sets. Models were built using pre-trained Inception-ResNetV2 convolutional layers for feature extraction, followed by binary classification. Training comprised 20 models per CNN type using sets of random hyperparameters. Best models were chosen based on validation AUC-ROC. RESULTS: Significant AUC-ROC differences were found between clinical versus dermoscopy models (0.661 vs. 0.869, p < 0.001) and clinical versus clinical + dermoscopy models (0.661 vs. 0.822, p = 0.001). Significant sensitivity differences were found between clinical and dermoscopy models (0.513 vs. 0.799, p = 0.01), dermoscopy versus clinical + dermoscopy models (0.799 vs. 1.000, p = 0.02), and clinical versus clinical + dermoscopy models (0.513 vs. 1.000, p < 0.001). Significant specificity differences were found between dermoscopy versus clinical + dermoscopy models (0.800 vs. 0.288, p < 0.001) and clinical versus clinical + dermoscopy models (0.650 vs. 0.288, p < 0.001). CONCLUSION: CNN models trained on dermoscopy images outperformed those relying solely on clinical images under our study conditions. The potential advantages of incorporating paired clinical and dermoscopy images for CNN-based melanoma classification appear less clear based on our findings.
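AUC-ROC, the comparison metric throughout this study, equals the probability that a randomly chosen melanoma receives a higher model score than a randomly chosen benign lesion (the Mann-Whitney interpretation). A sketch on hypothetical model outputs:

```python
def auc(scores_pos, scores_neg):
    """AUC-ROC as the probability that a positive outscores a negative
    (Mann-Whitney U divided by n_pos * n_neg), ties counted as 0.5."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical melanoma probabilities for 4 melanomas and 4 nevi
print(auc([0.9, 0.8, 0.6, 0.4], [0.7, 0.3, 0.2, 0.1]))  # 0.875
```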


Subjects
Dermoscopy , Melanoma , Neural Networks, Computer , Skin Neoplasms , Humans , Melanoma/diagnostic imaging , Melanoma/pathology , Melanoma/classification , Dermoscopy/methods , Skin Neoplasms/diagnostic imaging , Skin Neoplasms/pathology , Skin Neoplasms/classification , Deep Learning , Sensitivity and Specificity , Female , ROC Curve , Image Interpretation, Computer-Assisted/methods , Male
14.
Neuroimaging Clin N Am ; 34(2): 281-292, 2024 May.
Article in English | MEDLINE | ID: mdl-38604712

ABSTRACT

MR imaging's exceptional capabilities in vascular imaging stem from its ability to visualize and quantify vessel wall features, such as plaque burden, composition, and biomechanical properties. The application of advanced MR imaging techniques, including two-dimensional and three-dimensional black-blood MR imaging, T1 and T2 relaxometry, diffusion-weighted imaging, and dynamic contrast-enhanced MR imaging, wall shear stress, and arterial stiffness, empowers clinicians and researchers to explore the intricacies of vascular diseases. This array of techniques provides comprehensive insights into the development and progression of vascular pathologies, facilitating earlier diagnosis, targeted treatment, and improved patient outcomes in the management of vascular health.


Subjects
Diffusion Magnetic Resonance Imaging , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Imaging, Three-Dimensional/methods , Image Interpretation, Computer-Assisted/methods
15.
Comput Methods Programs Biomed ; 249: 108160, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38583290

ABSTRACT

BACKGROUND AND OBJECTIVE: Early detection and grading of Diabetic Retinopathy (DR) are essential to determine an adequate treatment and prevent severe vision loss. However, the manual analysis of fundus images is time-consuming, and DR screening programs are challenged by the availability of human graders. Current automatic approaches for DR grading attempt the joint detection of all signs at the same time. However, the classification can be optimized if red lesions and bright lesions are independently processed, since the task gets divided and simplified. Furthermore, clinicians would greatly benefit from explainable artificial intelligence (XAI) to support the automatic model predictions, especially when the type of lesion is specified. As a novelty, we propose an end-to-end deep learning framework for automatic DR grading (5 severity degrees) based on separating the attention of the dark structures from the bright structures of the retina. As the main contribution, this approach allowed us to generate independent interpretable attention maps for red lesions, such as microaneurysms and hemorrhages, and bright lesions, such as hard exudates, while using image-level labels only. METHODS: Our approach is based on a novel attention mechanism which focuses separately on the dark and the bright structures of the retina by performing a prior image decomposition. This mechanism can be seen as a XAI approach which generates independent attention maps for red lesions and bright lesions. The framework includes an image quality assessment stage and deep learning-related techniques, such as data augmentation, transfer learning, and fine-tuning. We used the Xception architecture as a feature extractor and the focal loss function to deal with data imbalance. RESULTS: The Kaggle DR detection dataset was used for method development and validation.
The proposed approach achieved 83.7 % accuracy and a Quadratic Weighted Kappa of 0.78 for classifying DR among 5 severity degrees, outperforming several state-of-the-art approaches. Nevertheless, the main result of this work is the generated attention maps, which reveal the pathological regions of the image while distinguishing red lesions from bright lesions. These maps provide explainability for the model predictions. CONCLUSIONS: Our results suggest that the framework is effective for automatic DR grading. The separate attention approach proved useful for optimizing the classification. Moreover, the obtained attention maps facilitate visual interpretation for clinicians. The proposed method could therefore serve as a diagnostic aid for the early detection and grading of DR.
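The Quadratic Weighted Kappa cited above is a standard ordinal-agreement metric for graded labels. A minimal, generic implementation (an illustrative sketch, not the authors' code, which the abstract does not publish) is:

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes=5):
    """Quadratic Weighted Kappa for ordinal grades 0..n_classes-1."""
    # Observed confusion matrix
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    # Quadratic disagreement weights, normalized to [0, 1]
    w = np.array([[(i - j) ** 2 for j in range(n_classes)]
                  for i in range(n_classes)]) / (n_classes - 1) ** 2
    # Expected confusion matrix under chance agreement
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1.0 - (w * O).sum() / (w * E).sum()
```

Perfect agreement yields 1.0, chance-level agreement 0.0; far-off grade errors are penalized quadratically more than near-miss errors.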


Subjects
Deep Learning ; Diabetes Mellitus ; Diabetic Retinopathy ; Humans ; Diabetic Retinopathy/diagnosis ; Artificial Intelligence ; Image Interpretation, Computer-Assisted/methods ; Fundus Oculi
16.
Comput Biol Med ; 175: 108368, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38663351

ABSTRACT

BACKGROUND: The problem of using deep learning to obtain accurate gross tumor volume (GTV) and metastatic lymph node (MLN) segmentation for nasopharyngeal carcinoma (NPC) on heterogeneous magnetic resonance imaging (MRI) images with limited labeling remains unsolved. METHOD: We collected MRI images from 918 patients at three hospitals to develop and validate models, and proposed SIMN, a semi-supervised framework for fine delineation of multi-center NPC boundaries that integrates uncertainty-based implicit neural representations. The framework uses deep mutual learning between a CNN and a Transformer, incorporating dynamic thresholds. Domain-adaptive algorithms are additionally employed to enhance performance. RESULTS: SIMN predictions have a high overlap ratio with the ground truth. With 20 % of cases labeled, the average DSC for GTV and MLN was 0.7981 and 0.7804 in the internal test cohorts; 0.7217 and 0.7581 in the external test cohort from Wu Zhou Red Cross Hospital; and 0.7004 and 0.7692 in the external test cohort from First People Hospital of Foshan. No significant differences were found in DSC, HD95, ASD, or Recall for patients with different clinical categories. Moreover, SIMN outperformed existing classical semi-supervised methods. CONCLUSIONS: SIMN achieved highly accurate GTV and MLN segmentation for NPC on multi-center MRI images under semi-supervised learning (SSL) and transfers easily to other centers without fine-tuning. This suggests it has the potential to act as a generalized delineation solution for heterogeneous MRI images with limited labels in clinical deployment.
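The DSC values reported for SIMN are Dice similarity coefficients between predicted and ground-truth masks. For reference, a minimal generic computation over binary masks (an illustrative sketch, not the study's evaluation pipeline) is:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```

A DSC of 1.0 means identical masks; values around 0.7-0.8, as reported above, indicate substantial but imperfect overlap.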


Subjects
Magnetic Resonance Imaging ; Nasopharyngeal Carcinoma ; Nasopharyngeal Neoplasms ; Humans ; Magnetic Resonance Imaging/methods ; Nasopharyngeal Carcinoma/diagnostic imaging ; Nasopharyngeal Neoplasms/diagnostic imaging ; Male ; Female ; Middle Aged ; Adult ; Deep Learning ; Algorithms ; Image Interpretation, Computer-Assisted/methods ; Neural Networks, Computer
17.
Eur J Radiol ; 175: 111442, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38583349

ABSTRACT

OBJECTIVES: Background parenchymal enhancement (BPE) on dynamic contrast-enhanced MRI (DCE-MRI), as rated by radiologists, is subject to inter- and intrareader variability. We aim to automate BPE categorization from DCE-MRI. METHODS: This study is a secondary analysis of the Dense Tissue and Early Breast Neoplasm Screening trial. It included 4553 women with extremely dense breasts who received supplemental breast MRI screening in eight hospitals. Minimal, mild, moderate, and marked BPE as rated by radiologists served as the reference. Fifteen quantitative MRI features of the fibroglandular tissue were extracted to predict BPE using Random Forest, Naïve Bayes, and KNN classifiers, and majority voting was used to combine their predictions. Internal-external validation was used for training and validation. The inverse-variance weighted mean accuracy was used to express mean performance across the eight hospitals. Cox regression was used to verify noninferiority of the association between automated rating and breast cancer occurrence relative to the association for manual rating. RESULTS: The accuracy of majority voting ranged from 0.56 to 0.84 across the eight hospitals. The weighted mean prediction accuracy for the four BPE categories was 0.76. The hazard ratio (HR) of BPE for breast cancer occurrence was comparable between automated and manual rating (HR = 2.12 versus HR = 1.97, P = 0.65 for mild/moderate/marked BPE relative to minimal BPE). CONCLUSION: It is feasible to rate BPE automatically in DCE-MRI of women with extremely dense breasts without compromising the underlying association between BPE and breast cancer occurrence. The accuracy for minimal BPE is superior to that for the other BPE categories.
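The "inverse-variance weighted mean accuracy" used to pool per-hospital results can be sketched generically. The binomial variance p(1-p)/n for each hospital's accuracy is an assumption here, since the abstract does not state which variance estimator was used:

```python
def inv_var_weighted_accuracy(accuracies, n_cases):
    """Pool per-hospital accuracies with inverse-variance weights.
    Assumes the binomial variance p*(1-p)/n for each proportion;
    requires 0 < p < 1 for every hospital."""
    weights = [n / (p * (1.0 - p)) for p, n in zip(accuracies, n_cases)]
    return sum(w * p for w, p in zip(weights, accuracies)) / sum(weights)
```

Hospitals with more cases, or with accuracies further from 0.5, get more weight, so the pooled value is pulled toward the more precisely estimated accuracies.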


Subjects
Breast Density ; Breast Neoplasms ; Contrast Media ; Magnetic Resonance Imaging ; Humans ; Female ; Breast Neoplasms/diagnostic imaging ; Magnetic Resonance Imaging/methods ; Middle Aged ; Reproducibility of Results ; Image Enhancement/methods ; Early Detection of Cancer/methods ; Aged ; Breast/diagnostic imaging ; Image Interpretation, Computer-Assisted/methods
18.
Eur J Radiol ; 175: 111451, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38593573

ABSTRACT

PURPOSE: To evaluate a deep learning reconstruction turbo spin echo (DLR-TSE) sequence for ankle magnetic resonance imaging (MRI) in terms of acquisition time, image quality, and lesion detectability, compared with conventional TSE. METHODS: Between March 2023 and May 2023, patients with an indication for ankle MRI were prospectively enrolled. Each patient underwent a conventional TSE protocol and a prospectively undersampled DLR-TSE protocol. Four experienced radiologists independently assessed image quality on a 5-point scale and reviewed structural abnormalities. Image quality assessment covered overall image quality, differentiation of anatomic details, diagnostic confidence, artifacts, and noise. Interchangeability analysis was performed to evaluate the equivalence of DLR-TSE relative to conventional TSE for detection of structural pathologies. RESULTS: In total, 56 patients were included (mean age, 32.6 ± 10.6 years; 35 men). The DLR-TSE protocol (233 s) enabled a 57.4 % reduction in total acquisition time compared with the conventional TSE protocol (547 s). According to the mean ratings of the four readers, DLR-TSE images had superior overall image quality, fewer artifacts, and less noise (all P < 0.05) than conventional TSE images. Differentiation of anatomic details, diagnostic confidence, and assessments of structural abnormalities showed no differences between the two techniques (P > 0.05). Furthermore, DLR-TSE demonstrated diagnostic equivalence with conventional TSE in interchangeability analysis involving all analyzed structural abnormalities. CONCLUSION: DLR can prospectively accelerate conventional TSE to a level comparable with a 4-minute comprehensive examination of the ankle, while providing superior image quality and similar lesion detectability in clinical practice.


Subjects
Deep Learning ; Magnetic Resonance Imaging ; Humans ; Male ; Female ; Magnetic Resonance Imaging/methods ; Adult ; Prospective Studies ; Ankle Joint/diagnostic imaging ; Image Interpretation, Computer-Assisted/methods ; Middle Aged ; Ankle/diagnostic imaging ; Artifacts
19.
J Comput Assist Tomogr ; 48(3): 343-353, 2024.
Article in English | MEDLINE | ID: mdl-38595087

ABSTRACT

PURPOSE: Accurate quantification of liver iron concentration (LIC) can be achieved via magnetic resonance imaging (MRI). Commercially available, vendor-provided, 3-dimensional (3D) multiecho Dixon sequences provide maps of liver T2*/R2* and allow automated, inline postprocessing, which removes the need for the manual curve fitting associated with conventional 2-dimensional (2D) gradient echo (GRE)-based postprocessing. The main goal of our study was to investigate the relationship between LIC estimates generated by the 3D multiecho Dixon sequence and values generated by 2D GRE-based R2* relaxometry as the reference standard. METHODS: We retrospectively reviewed patients who had undergone MRI for estimation of LIC with both conventional T2* relaxometry and 3D multiecho Dixon sequences. Studies were acquired on a 1.5 T scanner. Standard multislice multiecho T2*-based sequences were acquired, and R2* values with corresponding LIC were estimated. R2* and corresponding LIC estimates obtained by the 2 methods were compared using correlation coefficients and Bland-Altman difference plots. RESULTS: This study included 104 patients (51 male and 53 female) with 158 MRI scans. The mean age of the patients at the time of the scan was 15.2 (SD, 8.8) years. There was a very strong correlation between the 2 LIC estimation methods for LIC values up to 3.2 mg/g (LIC quantitative multiecho Dixon [qDixon; from region-of-interest R2*] vs LIC GRE [in-house]: r = 0.83, P < 0.01; LIC qDixon [from segmentation volume R2*] vs LIC GRE [in-house]: r = 0.92, P < 0.01) and a very weak correlation between the 2 methods at liver iron levels >7 mg/g. CONCLUSION: The 3D multiecho Dixon technique can accurately measure LIC up to 7 mg/g and has the potential to replace 2D GRE-based relaxometry methods.
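R2* relaxometry of the kind compared here fits a mono-exponential decay S(TE) = S0·exp(-TE·R2*) to the multiecho signal. A generic log-linear sketch follows (not the vendor's inline algorithm or the authors' in-house fit); the R2*-to-LIC conversion shown is one published 1.5 T calibration and is included as an assumption:

```python
import numpy as np

def fit_r2star(te_ms, signal):
    """Estimate R2* (in ms^-1) from multiecho magnitudes by
    log-linear least squares on S(TE) = S0 * exp(-TE * R2*)."""
    slope, _ = np.polyfit(np.asarray(te_ms, dtype=float),
                          np.log(np.asarray(signal, dtype=float)), 1)
    return -slope

def lic_from_r2star(r2star_per_s):
    """LIC in mg Fe/g dry weight from R2* in s^-1, using the
    Wood et al. 1.5 T calibration (assumed, not from this study)."""
    return 0.0254 * r2star_per_s + 0.202
```

Note that log-linear fitting is only well behaved while the signal stays above the noise floor; at high iron burdens, vendor fits typically use noise-corrected nonlinear models instead.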


Subjects
Imaging, Three-Dimensional ; Iron Overload ; Liver ; Magnetic Resonance Imaging ; Humans ; Female ; Male ; Iron Overload/diagnostic imaging ; Magnetic Resonance Imaging/methods ; Retrospective Studies ; Adult ; Imaging, Three-Dimensional/methods ; Liver/diagnostic imaging ; Middle Aged ; Young Adult ; Aged ; Image Interpretation, Computer-Assisted/methods ; Adolescent ; Reproducibility of Results ; Iron
20.
Eur J Radiol ; 175: 111452, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38604092

ABSTRACT

OBJECTIVE: To investigate the potential value of quantitative parameters derived from synthetic magnetic resonance imaging (syMRI) for discriminating axillary lymph node metastasis (ALNM) in breast cancer patients. MATERIALS AND METHODS: A total of 56 women with histopathologically proven invasive breast cancer who underwent both conventional breast MRI and additional syMRI examinations were enrolled in this study, including 30 patients with ALNM and 26 without. SyMRI enables quantification of T1 relaxation time (T1), T2 relaxation time (T2), and proton density (PD). SyMRI quantitative parameters of the primary breast tumor before (T1tumor, T2tumor, PDtumor) and after (T1+tumor, T2+tumor, PD+tumor) contrast agent injection were obtained. Similarly, measurements were taken for axillary lymph nodes before (T1LN, T2LN, PDLN) and after (T1+LN, T2+LN, PD+LN) injection; the ΔT1 (T1 - T1+), ΔT2 (T2 - T2+), ΔPD (PD - PD+), T1/T2, and T1+/T2+ were then calculated. All parameters were compared between the ALNM and non-ALNM groups. The intraclass correlation coefficient was used to assess interobserver agreement. The independent Student's t test or Mann-Whitney U test was used to determine the relationship between mean quantitative values and ALNM. Multivariate logistic regression analyses followed by receiver operating characteristic (ROC) analysis were performed to discriminate ALN status. A P value < 0.05 was considered statistically significant. RESULTS: The short diameter of lymph nodes (DLN) in the ALNM group was significantly longer than in the non-ALNM group (10.22 ± 3.58 mm vs. 5.28 ± 1.39 mm, P < 0.001). The optimal cutoff value was 5.78 mm, with an AUC of 0.894 (95 % CI: 0.838-0.939), a sensitivity of 86.7 %, and a specificity of 90.2 %. Among syMRI quantitative parameters of breast tumors, T2tumor, ΔT2tumor, and ΔPDtumor values showed statistically significant differences between the two groups (P < 0.05).
The T2tumor value performed best in discriminating ALN status (AUC = 0.712); the optimal cutoff was 90.12 ms, with sensitivity and specificity of 65.0 % and 83.6 %, respectively. Among syMRI quantitative parameters of lymph nodes, T1LN, T2LN, T1LN/T2LN, T2+LN, and ΔT1LN values were significantly different between the two groups (P < 0.05), with AUCs of 0.785, 0.840, 0.886, 0.702, and 0.754, respectively. Multivariate analyses indicated that the T1LN value was the only independent predictor of ALNM (OR = 1.426, 95 % CI: 1.130-1.798, P = 0.039). The diagnostic sensitivity and specificity of T1LN were 86.7 % and 69.4 %, respectively, at the best cutoff point of 1371.00 ms. Combinations of T1LN, T2LN, T1LN/T2LN, ΔT1LN, and DLN performed better for differentiating ALNM from non-ALNM, with AUCs of 0.905, 0.957, 0.964, and 0.897, respectively. CONCLUSION: The quantitative parameters derived from syMRI have value for discriminating ALN status in invasive breast cancer, with T2tumor showing the highest diagnostic efficiency among breast lesion parameters. Moreover, T1LN acted as an independent predictor of ALNM.
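The AUCs reported above come from ROC analysis. As a reference, the AUC equals the Mann-Whitney U statistic normalized by the number of positive-negative pairs, i.e. the probability that a randomly chosen metastatic node scores higher than a randomly chosen non-metastatic one. A generic sketch (not the authors' statistics code):

```python
def roc_auc(pos_scores, neg_scores):
    """AUC via the Mann-Whitney U equivalence: the fraction of
    (positive, negative) pairs the score ranks correctly,
    counting ties as half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

An AUC of 0.5 corresponds to a useless marker, 1.0 to perfect separation; the lymph-node values of 0.7-0.9 above sit between those extremes.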


Subjects
Axilla ; Breast Neoplasms ; Lymph Nodes ; Lymphatic Metastasis ; Magnetic Resonance Imaging ; Sensitivity and Specificity ; Humans ; Breast Neoplasms/diagnostic imaging ; Breast Neoplasms/pathology ; Female ; Axilla/diagnostic imaging ; Middle Aged ; Lymphatic Metastasis/diagnostic imaging ; Magnetic Resonance Imaging/methods ; Lymph Nodes/diagnostic imaging ; Lymph Nodes/pathology ; Adult ; Aged ; Reproducibility of Results ; Neoplasm Invasiveness/diagnostic imaging ; Contrast Media ; Image Interpretation, Computer-Assisted/methods ; Image Enhancement/methods