Results 1-20 of 14,597
1.
Curr Med Imaging ; 20(1): e15734056269264, 2024.
Article in English | MEDLINE | ID: mdl-38766836

ABSTRACT

BACKGROUND: Single-image super-resolution, which restores a high-resolution image from a low-resolution one, is an ill-posed inverse problem that remains difficult to solve. Natural photographs contain a wide variety of objects and textures, each with its own characteristics, most notably in the high-frequency components, and these qualities can be used to distinguish image regions from one another. OBJECTIVE: The goal is to develop an automated approach to identify thyroid nodules on ultrasound images, to accurately differentiate thyroid nodules using deep learning, and to evaluate the effectiveness of different localization techniques. METHODS: The method reconstructs a single super-resolution image based on segmentation and classification. The poor-quality ultrasound image is divided into several segments, and the most applicable class is chosen for each. Pairs of high- and low-resolution images belonging to the same class are then used to determine the high-resolution counterpart of each segment. Deep learning, with a classifier trained using the Adam optimizer, is used to identify carcinoid tumors within thyroid nodules. Measures such as localization accuracy, sensitivity, specificity, Dice loss, ROC, and area under the curve (AUC) are used to evaluate the effectiveness of the techniques. RESULTS: The results of the proposed method are superior, both statistically and qualitatively, to those of other state-of-the-art methods. The automated approach shows promising results in accurately identifying thyroid nodules on ultrasound images. CONCLUSION: This research demonstrates an automated approach to identifying thyroid nodules in ultrasound images using super-resolution single-image reconstruction and deep learning technology.
The results indicate that the proposed method is superior to the latest and best techniques in terms of accuracy and quality. This research contributes to the advancement of medical imaging and holds the potential to improve the diagnosis and treatment of thyroid nodules.



Subjects
Deep Learning; Thyroid Nodule; Ultrasonography; Humans; Thyroid Nodule/diagnostic imaging; Ultrasonography/methods; Thyroid Gland/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods
2.
Skin Res Technol ; 30(5): e13607, 2024 May.
Article in English | MEDLINE | ID: mdl-38742379

ABSTRACT

BACKGROUND: Timely diagnosis plays a critical role in determining melanoma prognosis, prompting the development of deep learning models to aid clinicians. Questions persist regarding the efficacy of clinical images alone or in conjunction with dermoscopy images for model training. This study aims to compare the classification performance for melanoma of three types of CNN models: those trained on clinical images, dermoscopy images, and a combination of paired clinical and dermoscopy images from the same lesion. MATERIALS AND METHODS: We divided 914 image pairs into training, validation, and test sets. Models were built using pre-trained Inception-ResNetV2 convolutional layers for feature extraction, followed by binary classification. Training comprised 20 models per CNN type using sets of random hyperparameters. Best models were chosen based on validation AUC-ROC. RESULTS: Significant AUC-ROC differences were found between clinical versus dermoscopy models (0.661 vs. 0.869, p < 0.001) and clinical versus clinical + dermoscopy models (0.661 vs. 0.822, p = 0.001). Significant sensitivity differences were found between clinical and dermoscopy models (0.513 vs. 0.799, p = 0.01), dermoscopy versus clinical + dermoscopy models (0.799 vs. 1.000, p = 0.02), and clinical versus clinical + dermoscopy models (0.513 vs. 1.000, p < 0.001). Significant specificity differences were found between dermoscopy versus clinical + dermoscopy models (0.800 vs. 0.288, p < 0.001) and clinical versus clinical + dermoscopy models (0.650 vs. 0.288, p < 0.001). CONCLUSION: CNN models trained on dermoscopy images outperformed those relying solely on clinical images under our study conditions. The potential advantages of incorporating paired clinical and dermoscopy images for CNN-based melanoma classification appear less clear based on our findings.
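The AUC-ROC, sensitivity, and specificity comparisons reported above can be computed directly from a model's raw scores. As a minimal illustration (function names and toy data are mine, not the study's), AUC-ROC reduces to the Mann-Whitney statistic: the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one.

```python
def auc_roc(labels, scores):
    """AUC via the Mann-Whitney statistic: the probability that a randomly
    chosen positive is scored above a random negative (ties count 1/2)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

def sensitivity_specificity(labels, scores, threshold=0.5):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) at a cutoff."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)
```

Unlike sensitivity and specificity, the AUC is threshold-free, which is why studies such as this one use it to select the best model on the validation set.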


Subjects
Dermoscopy; Melanoma; Neural Networks, Computer; Skin Neoplasms; Humans; Melanoma/diagnostic imaging; Melanoma/pathology; Melanoma/classification; Dermoscopy/methods; Skin Neoplasms/diagnostic imaging; Skin Neoplasms/pathology; Skin Neoplasms/classification; Deep Learning; Sensitivity and Specificity; Female; ROC Curve; Image Interpretation, Computer-Assisted/methods; Male
3.
Arch Dermatol Res ; 316(6): 275, 2024 May 25.
Article in English | MEDLINE | ID: mdl-38796546

ABSTRACT

PURPOSE: A skin lesion is an area of skin that exhibits anomalous growth or distinctive visual characteristics compared with the surrounding skin. Benign skin lesions are noncancerous and generally pose no threat, though these irregular growths can vary in appearance. Malignant skin lesions, in contrast, correspond to skin cancer, the most prevalent form of cancer in the United States. Skin cancer involves the abnormal proliferation of skin cells anywhere on the body, and the conventional method of detecting it is comparatively invasive and painful. METHODS: This work automates the prediction of skin cancer and its types using a two-stage Convolutional Neural Network (CNN). The first stage of the CNN extracts low-level features and the second stage extracts high-level features. Feature selection is done using these two CNNs and the ABCD (Asymmetry, Border irregularity, Colour variation, and Diameter) technique. The features extracted from the two CNNs are fused with the ABCD features and fed into classifiers for the final prediction. The classifiers employed include ensemble learning methods such as gradient boosting and XGBoost, as well as machine learning classifiers such as decision trees and logistic regression. The methodology is evaluated on the International Skin Imaging Collaboration (ISIC) 2018 and 2019 datasets. RESULTS: The first-stage CNN, used to create the new dataset, achieved an accuracy of 97.92%; the second-stage CNN, used for feature selection, achieved an accuracy of 98.86%. Classification results were obtained both with and without feature fusion. CONCLUSION: The two-stage prediction model achieved better results with feature fusion.
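The ABCD features fused here echo the classical ABCD dermoscopy rule (Stolz), which combines sub-scores into a total dermoscopy score, TDS = 1.3*A + 0.1*B + 0.5*C + 0.5*D. Note that in the classical dermoscopic rule D stands for differential structures rather than diameter, so this is a related but distinct scoring; the sketch below is an illustration with function names of my own, not the paper's method.

```python
def total_dermoscopy_score(asymmetry, border, colors, structures):
    """Classical ABCD-rule score: TDS = 1.3*A + 0.1*B + 0.5*C + 0.5*D.
    asymmetry: 0-2 axes; border: 0-8 abrupt segments;
    colors: 1-6 colors present; structures: 1-5 differential structures."""
    return 1.3 * asymmetry + 0.1 * border + 0.5 * colors + 0.5 * structures

def interpret_tds(tds):
    """Conventional cutoffs: <4.75 benign, 4.75-5.45 suspicious,
    >5.45 highly suggestive of melanoma."""
    if tds < 4.75:
        return "benign"
    if tds <= 5.45:
        return "suspicious"
    return "highly suggestive of melanoma"
```

In the paper, such hand-crafted scores are not used in isolation but concatenated with the CNN feature vectors before classification.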


Subjects
Melanoma; Neural Networks, Computer; Skin Neoplasms; Humans; Melanoma/diagnosis; Melanoma/pathology; Skin Neoplasms/diagnosis; Skin Neoplasms/pathology; Skin/pathology; Skin/diagnostic imaging; Machine Learning; Deep Learning; Image Interpretation, Computer-Assisted/methods; Melanoma, Cutaneous Malignant; Dermoscopy/methods
4.
Sci Rep ; 14(1): 11678, 2024 05 22.
Article in English | MEDLINE | ID: mdl-38778219

ABSTRACT

Polyps are abnormal tissue clumps growing primarily on the inner linings of the gastrointestinal tract. While such clumps are generally harmless, they can potentially evolve into pathological tumors and thus require long-term observation and monitoring. Polyp segmentation in gastrointestinal endoscopy images is an important stage for polyp monitoring and subsequent treatment. However, this segmentation task faces multiple challenges: the low contrast of polyp boundaries, varied polyp appearance, and the co-occurrence of multiple polyps. In this paper, an implicit edge-guided cross-layer fusion network (IECFNet) is therefore proposed for polyp segmentation. An encoder-decoder pair is used to generate an initial saliency map, an implicit edge-enhanced context attention module aggregates the feature maps output by the encoder and decoder to generate a rough prediction, and a multi-scale feature reasoning module generates the final predictions. Polyp segmentation experiments were conducted on five popular polyp image datasets (Kvasir, CVC-ClinicDB, ETIS, CVC-ColonDB, and CVC-300), and the results show that the proposed method significantly outperforms conventional methods, with an accuracy margin of 7.9% on the ETIS dataset.
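Segmentation results like these are conventionally scored with overlap metrics such as the Dice similarity coefficient and intersection-over-union. A minimal sketch over flat binary masks (function name mine, not from the paper):

```python
def dice_and_iou(pred, truth):
    """Dice = 2|P∩T| / (|P| + |T|); IoU = |P∩T| / |P∪T| for binary
    masks given as flat 0/1 sequences of equal length."""
    inter = sum(p & t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou
```

Dice weights the overlap against the average mask size and IoU against the union, so Dice is always at least as large as IoU for the same prediction.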


Subjects
Colonic Polyps; Humans; Colonic Polyps/pathology; Colonic Polyps/diagnostic imaging; Algorithms; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Image Interpretation, Computer-Assisted/methods; Polyps/pathology; Polyps/diagnostic imaging; Endoscopy, Gastrointestinal/methods
5.
Tomography ; 10(5): 705-726, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38787015

ABSTRACT

As artificial intelligence (AI) techniques become increasingly prominent, their applications have extended to various medical fields, including in vitro diagnosis, intelligent rehabilitation, medical imaging, and prognosis. Breast cancer is a common malignancy that critically affects women's physical and mental health. Early breast cancer screening, through mammography, ultrasound, or magnetic resonance imaging (MRI), can substantially improve the prognosis for breast cancer patients. AI applications have shown excellent performance in various image recognition tasks, and their use in breast cancer screening has been explored in numerous studies. This paper introduces relevant AI techniques and their applications in medical imaging of the breast (mammography and ultrasound), specifically in identifying, segmenting, and classifying lesions; assessing breast cancer risk; and improving image quality. Focusing on medical imaging for breast cancer, the paper also reviews related challenges and prospects for AI.


Subjects
Artificial Intelligence; Breast Neoplasms; Breast; Mammography; Humans; Breast Neoplasms/diagnostic imaging; Female; Mammography/methods; Breast/diagnostic imaging; Breast/pathology; Early Detection of Cancer/methods; Magnetic Resonance Imaging/methods; Ultrasonography, Mammary/methods; Image Interpretation, Computer-Assisted/methods
6.
Comput Biol Med ; 175: 108412, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38691914

ABSTRACT

Brain tumor segmentation and classification play a crucial role in the diagnosis and treatment planning of brain tumors. Accurate and efficient methods for identifying tumor regions and classifying different tumor types are essential for guiding medical interventions. This study comprehensively reviews brain tumor segmentation and classification techniques, exploring approaches based on image processing, machine learning, and deep learning. It surveys existing methodologies, discusses their advantages and limitations, and highlights recent advancements in the field. The impact of existing segmentation and classification techniques for automated brain tumor detection is also critically examined using various open-source datasets of Magnetic Resonance Images (MRI) of different modalities. Moreover, the study highlights the challenges posed by segmentation and classification techniques and by datasets with various MRI modalities, to enable researchers to develop innovative and robust solutions for automated brain tumor detection. The results contribute to the development of automated and robust solutions for analyzing brain tumors, ultimately aiding medical professionals in making informed decisions and providing better patient care.


Subjects
Brain Neoplasms; Magnetic Resonance Imaging; Humans; Brain Neoplasms/diagnostic imaging; Magnetic Resonance Imaging/methods; Deep Learning; Image Interpretation, Computer-Assisted/methods; Brain/diagnostic imaging; Machine Learning; Image Processing, Computer-Assisted/methods; Neuroimaging/methods
7.
Comput Methods Programs Biomed ; 250: 108205, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38703435

ABSTRACT

The pancreas is a vital organ in the digestive system with significant health implications. It is imperative to evaluate and identify malignant pancreatic lesions promptly, given the high mortality rate linked to such malignancies. Endoscopic Ultrasound (EUS) is a non-invasive, precise technique for detecting pancreatic disorders, but it is highly operator dependent. Artificial intelligence (AI), including traditional machine learning (ML) and deep learning (DL) techniques, can play a pivotal role in enhancing the performance of EUS regardless of operator. AI performs a critical function in the detection, classification, and segmentation of medical images. The utilization of AI-assisted systems has improved the accuracy and productivity of pancreatic analysis, including the detection of diverse pancreatic disorders (e.g., pancreatitis, masses, and cysts) as well as landmarks and parenchyma. This systematic review examines the rapidly developing domain of AI-assisted systems in EUS of the pancreas. Its objective is to present a thorough study of the current research status and developments in this area. The paper explores the significant challenges of AI-assisted systems in pancreatic EUS imaging, highlights the potential of AI techniques in addressing them, and suggests the scope for future research on AI-assisted EUS systems.


Subjects
Artificial Intelligence; Endosonography; Pancreas; Humans; Endosonography/methods; Pancreas/diagnostic imaging; Machine Learning; Deep Learning; Pancreatic Neoplasms/diagnostic imaging; Pancreatic Diseases/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods
8.
Comput Biol Med ; 175: 108549, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38704901

ABSTRACT

In this paper, we propose a multi-task learning (MTL) network based on label-level fusion of metadata and hand-crafted features, using unsupervised clustering to generate new clustering labels as an optimization goal. We propose an MTL module (MTLM) that incorporates an attention mechanism to enable the model to learn more integrated, variable information, and a dynamic strategy that adjusts the loss weights of different tasks to trade off the contributions of multiple branches. Instead of feature-level fusion, we propose label-level fusion, combining the results of the proposed MTLM with those of the image classification network to achieve better lesion prediction on multiple dermatological datasets. We verify the effectiveness of the proposed model with quantitative and qualitative measures. The MTL network, using multi-modal clues and label-level fusion, yields a significant performance improvement for skin lesion classification.


Subjects
Skin; Humans; Skin/diagnostic imaging; Skin/pathology; Image Interpretation, Computer-Assisted/methods; Machine Learning; Skin Neoplasms/diagnostic imaging; Skin Neoplasms/pathology; Neural Networks, Computer; Algorithms; Skin Diseases/diagnostic imaging
9.
BMC Med Imaging ; 24(1): 107, 2024 May 11.
Article in English | MEDLINE | ID: mdl-38734629

ABSTRACT

This study addresses the critical challenge of detecting brain tumors using MRI images, a pivotal task in medical diagnostics that demands high accuracy and interpretability. While deep learning has shown remarkable success in medical image analysis, there remains a substantial need for models that are not only accurate but also interpretable to healthcare professionals. The existing methodologies, predominantly deep learning-based, often act as black boxes, providing little insight into their decision-making process. This research introduces an integrated approach using ResNet50, a deep learning model, combined with Gradient-weighted Class Activation Mapping (Grad-CAM) to offer a transparent and explainable framework for brain tumor detection. We employed a dataset of MRI images, enhanced through data augmentation, to train and validate our model. The results demonstrate a significant improvement in model performance, with a testing accuracy of 98.52% and precision-recall metrics exceeding 98%, showcasing the model's effectiveness in distinguishing tumor presence. The application of Grad-CAM provides insightful visual explanations, illustrating the model's focus areas in making predictions. This fusion of high accuracy and explainability holds profound implications for medical diagnostics, offering a pathway towards more reliable and interpretable brain tumor detection tools.
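Grad-CAM, used here for visual explanations, weights each convolutional feature map by its spatially averaged gradient with respect to the target class score, sums the weighted maps, and applies a ReLU. A framework-free sketch of that final weighting step over nested lists (a simplification of the real autograd-based computation; the function name and toy inputs are mine):

```python
def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap: weight each feature map by the global average
    of its gradient (alpha_k), sum across channels, then ReLU.
    feature_maps, gradients: lists of K maps, each an HxW nested list."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    # alpha_k: mean of the gradient map for channel k
    alphas = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    cam = [[0.0] * w for _ in range(h)]
    for alpha, fmap in zip(alphas, feature_maps):
        for i in range(h):
            for j in range(w):
                cam[i][j] += alpha * fmap[i][j]
    # ReLU keeps only regions that positively support the predicted class
    return [[max(0.0, v) for v in row] for row in cam]
```

In practice the gradients come from backpropagating the class logit through the last convolutional layer, and the resulting map is upsampled and overlaid on the input MRI slice.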


Subjects
Brain Neoplasms; Deep Learning; Magnetic Resonance Imaging; Humans; Brain Neoplasms/diagnostic imaging; Magnetic Resonance Imaging/methods; Image Interpretation, Computer-Assisted/methods
10.
Radiology ; 311(2): e230750, 2024 May.
Article in English | MEDLINE | ID: mdl-38713024

ABSTRACT

Background Multiparametric MRI (mpMRI) improves prostate cancer (PCa) detection compared with systematic biopsy, but its interpretation is prone to interreader variation, which results in performance inconsistency. Artificial intelligence (AI) models can assist in mpMRI interpretation, but large training data sets and extensive model testing are required. Purpose To evaluate a biparametric MRI AI algorithm for intraprostatic lesion detection and segmentation and to compare its performance with radiologist readings and biopsy results. Materials and Methods This secondary analysis of a prospective registry included consecutive patients with suspected or known PCa who underwent mpMRI, US-guided systematic biopsy, or combined systematic and MRI/US fusion-guided biopsy between April 2019 and September 2022. All lesions were prospectively evaluated using Prostate Imaging Reporting and Data System version 2.1. The lesion- and participant-level performance of a previously developed cascaded deep learning algorithm was compared with histopathologic outcomes and radiologist readings using sensitivity, positive predictive value (PPV), and Dice similarity coefficient (DSC). Results A total of 658 male participants (median age, 67 years [IQR, 61-71 years]) with 1029 MRI-visible lesions were included. At histopathologic analysis, 45% (294 of 658) of participants had lesions of International Society of Urological Pathology (ISUP) grade group (GG) 2 or higher. The algorithm identified 96% (282 of 294; 95% CI: 94%, 98%) of all participants with clinically significant PCa, whereas the radiologist identified 98% (287 of 294; 95% CI: 96%, 99%; P = .23). The algorithm identified 84% (103 of 122), 96% (152 of 159), 96% (47 of 49), 95% (38 of 40), and 98% (45 of 46) of participants with ISUP GG 1, 2, 3, 4, and 5 lesions, respectively. 
In the lesion-level analysis using radiologist ground truth, the detection sensitivity was 55% (569 of 1029; 95% CI: 52%, 58%), and the PPV was 57% (535 of 934; 95% CI: 54%, 61%). The mean number of false-positive lesions per participant was 0.61 (range, 0-3). The lesion segmentation DSC was 0.29. Conclusion The AI algorithm detected cancer-suspicious lesions on biparametric MRI scans with a performance comparable to that of an experienced radiologist. Moreover, the algorithm reliably predicted clinically significant lesions at histopathologic examination. ClinicalTrials.gov Identifier: NCT03354416 © RSNA, 2024 Supplemental material is available for this article.
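Proportions such as the 96% (282 of 294) sensitivity above are conventionally reported with a binomial confidence interval. The article does not state which interval method it used; as an illustration, a Wilson score interval (a common choice for proportions near 1) gives bounds close to the reported 94%-98%:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion, e.g. a
    sensitivity of 282/294. z=1.96 corresponds to 95% coverage."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half
```

Unlike the simpler Wald interval, the Wilson interval stays inside [0, 1] and behaves well for extreme proportions, which matters for near-perfect sensitivities like those reported here.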


Subjects
Deep Learning; Multiparametric Magnetic Resonance Imaging; Prostatic Neoplasms; Male; Humans; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/pathology; Aged; Prospective Studies; Multiparametric Magnetic Resonance Imaging/methods; Middle Aged; Algorithms; Prostate/diagnostic imaging; Prostate/pathology; Image-Guided Biopsy/methods; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods
11.
Ther Adv Respir Dis ; 18: 17534666241253694, 2024.
Article in English | MEDLINE | ID: mdl-38803144

ABSTRACT

BACKGROUND: Given the rarity of tracheobronchopathia osteochondroplastica (TO), many young doctors in primary hospitals are unable to identify TO based on bronchoscopy findings. OBJECTIVES: To build an artificial intelligence (AI) model for differentiating TO from other multinodular airway diseases by using bronchoscopic images. DESIGN: We compared the imaging data of patients undergoing bronchoscopy from January 2010 to October 2022 using EfficientNet. Bronchoscopic images of 21 patients with TO at Anhui Chest Hospital from October 2019 to October 2022 were collected for external validation. METHODS: Bronchoscopic images of patients with multinodular airway lesions (including TO, amyloidosis, tumors, and inflammation) and without airway lesions at the First Affiliated Hospital of Guangzhou Medical University were collected. The images were randomized (4:1) into training and validation groups by disease and used for deep learning with convolutional neural networks (CNNs). RESULTS: We enrolled 201 patients with multinodular airway disease (38, 15, 75, and 73 patients with TO, amyloidosis, tumors, and inflammation, respectively) and 213 without any airway lesions. For deep learning on multinodular lesions, we utilized 2183 bronchoscopic images of multinodular lesions (TO, amyloidosis, tumor, and inflammation) and compared them with 1733 images without any airway lesions. The accuracy of multinodular lesion identification was 98.9%, and the accuracy of TO detection based on the bronchoscopic images of multinodular lesions was 89.2%. In external validation (using images from 21 patients with TO), all patients could be diagnosed with TO; the accuracy was 89.8%. CONCLUSION: We built an AI model that can differentiate TO from other multinodular airway diseases (mainly amyloidosis, tumors, and inflammation) using bronchoscopic images.
The model could help young physicians identify this rare airway disease.


Subjects
Bronchoscopy; Osteochondrodysplasias; Predictive Value of Tests; Tracheal Diseases; Humans; Tracheal Diseases/diagnostic imaging; Tracheal Diseases/pathology; Tracheal Diseases/diagnosis; Middle Aged; Male; Female; Adult; Diagnosis, Differential; Osteochondrodysplasias/diagnostic imaging; Osteochondrodysplasias/diagnosis; Osteochondrodysplasias/pathology; Reproducibility of Results; Deep Learning; Aged; China; Image Interpretation, Computer-Assisted; Neural Networks, Computer; Artificial Intelligence
12.
BMC Med Imaging ; 24(1): 110, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38750436

ABSTRACT

Brain tumor classification using MRI images is a crucial yet challenging task in medical imaging. Accurate diagnosis is vital for effective treatment planning but is often hindered by the complex nature of tumor morphology and variations in imaging. Traditional methodologies primarily rely on manual interpretation of MRI images, supplemented by conventional machine learning techniques. These approaches often lack the robustness and scalability needed for precise and automated tumor classification. The major limitations include a high degree of manual intervention, potential for human error, limited ability to handle large datasets, and lack of generalizability to diverse tumor types and imaging conditions. To address these challenges, we propose a federated learning-based deep learning model that leverages the power of Convolutional Neural Networks (CNNs) for automated and accurate brain tumor classification. This approach emphasizes a modified VGG16 architecture optimized for brain MRI images and highlights the significance of federated learning and transfer learning in the medical imaging domain. Federated learning enables decentralized model training across multiple clients without compromising data privacy, addressing the critical need for confidentiality in medical data handling. The model architecture benefits from transfer learning by utilizing a pre-trained CNN, which significantly enhances its ability to classify brain tumors accurately by leveraging knowledge gained from vast and diverse datasets. Our model is trained on a diverse dataset combining the figshare, SARTAJ, and Br35H datasets, employing a federated learning approach for decentralized, privacy-preserving model training. The adoption of transfer learning further bolsters the model's performance, making it adept at handling the intricate variations in MRI images associated with different types of brain tumors.
The model demonstrates high precision (0.99 for glioma, 0.95 for meningioma, 1.00 for no tumor, and 0.98 for pituitary), recall, and F1-scores in classification, outperforming existing methods. The overall accuracy stands at 98%, showcasing the model's efficacy in classifying various tumor types accurately, thus highlighting the transformative potential of federated learning and transfer learning in enhancing brain tumor classification using MRI images.
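Federated training as described is commonly implemented with the FedAvg scheme: each client trains locally on its private data, and the server aggregates the returned parameters as a mean weighted by client dataset size. The abstract does not specify its aggregation rule, so this is an assumption; a minimal sketch with flat parameter vectors and names of my own:

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: the server averages client parameter vectors
    with weights n_i / sum(n), so larger clients contribute more.
    client_weights: list of flat float lists; client_sizes: sample counts."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[j] * n / total for w, n in zip(client_weights, client_sizes))
        for j in range(dim)
    ]
```

Because only parameters (not images) travel to the server, the raw MRI data never leaves each hospital, which is the privacy property the abstract emphasizes.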


Subjects
Brain Neoplasms; Deep Learning; Magnetic Resonance Imaging; Humans; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/classification; Magnetic Resonance Imaging/methods; Neural Networks, Computer; Machine Learning; Image Interpretation, Computer-Assisted/methods
13.
Article in English | MEDLINE | ID: mdl-38765504

ABSTRACT

Objective: To compare medical image interpretation time between conventional and automated breast ultrasound in patients with breast lesions and, secondarily, to evaluate agreement between the two methods and between observers. Methods: This is a cross-sectional study with prospective data collection. Degrees of agreement were established for the ultrasound descriptors of breast lesions. To determine the accuracy of each method, suspicious lesions were biopsied, with the histopathological result as the diagnostic gold standard. Results: We evaluated 27 women. Conventional ultrasound required an average of 10.77 minutes (± 2.55) of physician time, greater than the average of 7.38 minutes (± 2.06) for automated ultrasound (p<0.001). Degrees of agreement between the methods ranged from 0.75 to 0.95 for researcher 1 and from 0.71 to 0.98 for researcher 2. Between the researchers, degrees of agreement were between 0.63 and 1 for automated ultrasound and between 0.68 and 1 for conventional ultrasound. The area under the ROC curve for the conventional method was 0.67 (p=0.003) for researcher 1 and 0.72 (p<0.001) for researcher 2; for the automated method, it was 0.69 (p=0.001) for researcher 1 and 0.78 (p<0.001) for researcher 2. Conclusion: Physicians devoted less time to automated ultrasound than to conventional ultrasound, with accuracy maintained. There was substantial or strong to perfect interobserver agreement and substantial or strong to almost perfect agreement between the methods.
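Interobserver agreement figures like those above are commonly computed with Cohen's kappa, which corrects raw agreement for the agreement expected by chance (the article does not name its coefficient, so the choice here is an assumption). A minimal sketch:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from each rater's marginal frequencies."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_e = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (p_o - p_e) / (1 - p_e)
```

On the usual Landis-Koch scale, values of 0.61-0.80 are read as substantial agreement and 0.81-1.00 as almost perfect, matching the ranges reported above.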


Subjects
Breast Neoplasms; Ultrasonography, Mammary; Humans; Female; Cross-Sectional Studies; Ultrasonography, Mammary/methods; Prospective Studies; Adult; Time Factors; Middle Aged; Breast Neoplasms/diagnostic imaging; Image Interpretation, Computer-Assisted
14.
BMC Med Imaging ; 24(1): 118, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38773391

ABSTRACT

Brain tumor diagnosis using MRI scans poses significant challenges due to the complex nature of tumor appearances and variations. Traditional methods often require extensive manual intervention and are prone to human error, leading to misdiagnosis and delayed treatment. Current approaches primarily include manual examination by radiologists and conventional machine learning techniques. These methods rely heavily on feature extraction and classification algorithms, which may not capture the intricate patterns present in brain MRI images. Conventional techniques often suffer from limited accuracy and generalizability, mainly due to the high variability in tumor appearance and the subjective nature of manual interpretation. Additionally, traditional machine learning models may struggle with the high-dimensional data inherent in MRI images. To address these limitations, our research introduces a deep learning-based model utilizing convolutional neural networks (CNNs). Our model employs a sequential CNN architecture with multiple convolutional, max-pooling, and dropout layers, followed by dense layers for classification. The proposed model demonstrates a significant improvement in diagnostic accuracy, achieving an overall accuracy of 98% on the test dataset. Precision, recall, and F1-scores ranging from 97% to 98%, with ROC-AUC from 99% to 100% for each tumor category, further substantiate the model's effectiveness. Additionally, Grad-CAM visualizations provide insight into the model's decision-making process, enhancing interpretability. This research addresses the pressing need for enhanced diagnostic accuracy in identifying brain tumors through MRI imaging, tackling challenges such as variability in tumor appearance and the need for rapid, reliable diagnostic tools.


Subjects
Brain Neoplasms; Deep Learning; Magnetic Resonance Imaging; Neural Networks, Computer; Humans; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/classification; Magnetic Resonance Imaging/methods; Algorithms; Image Interpretation, Computer-Assisted/methods; Male; Female
15.
Sci Rep ; 14(1): 11701, 2024 05 22.
Article in English | MEDLINE | ID: mdl-38778034

ABSTRACT

Due to the lack of sufficient labeled data for the prostate and the extensive and complex semantic information in ultrasound images, accurately and quickly segmenting the prostate in transrectal ultrasound (TRUS) images remains a challenging task. In this context, this paper proposes a solution for TRUS image segmentation using an end-to-end bidirectional semantic constraint method, namely the BiSeC model. The experimental results show that compared with classic or popular deep learning methods, this method has better segmentation performance, with the Dice Similarity Coefficient (DSC) of 96.74% and the Intersection over Union (IoU) of 93.71%. Our model achieves a good balance between actual boundaries and noise areas, reducing costs while ensuring the accuracy and speed of segmentation.


Subjects
Prostate; Prostatic Neoplasms; Semantics; Ultrasonography; Male; Humans; Ultrasonography/methods; Prostate/diagnostic imaging; Prostatic Neoplasms/diagnostic imaging; Deep Learning; Image Processing, Computer-Assisted/methods; Algorithms; Image Interpretation, Computer-Assisted/methods
16.
Nat Med ; 30(5): 1481-1488, 2024 May.
Article in English | MEDLINE | ID: mdl-38689062

ABSTRACT

The development of robust artificial intelligence models for echocardiography has been limited by the availability of annotated clinical data. Here, to address this challenge and improve the performance of cardiac imaging models, we developed EchoCLIP, a vision-language foundation model for echocardiography, that learns the relationship between cardiac ultrasound images and the interpretations of expert cardiologists across a wide range of patients and indications for imaging. After training on 1,032,975 cardiac ultrasound videos and corresponding expert text, EchoCLIP performs well on a diverse range of benchmarks for cardiac image interpretation, despite not having been explicitly trained for individual interpretation tasks. EchoCLIP can assess cardiac function (mean absolute error of 7.1% when predicting left ventricular ejection fraction in an external validation dataset) and identify implanted intracardiac devices (area under the curve (AUC) of 0.84, 0.92 and 0.97 for pacemakers, percutaneous mitral valve repair and artificial aortic valves, respectively). We also developed a long-context variant (EchoCLIP-R) using a custom tokenizer based on common echocardiography concepts. EchoCLIP-R accurately identified unique patients across multiple videos (AUC of 0.86), identified clinical transitions such as heart transplants (AUC of 0.79) and cardiac surgery (AUC 0.77) and enabled robust image-to-text search (mean cross-modal retrieval rank in the top 1% of candidate text reports). These capabilities represent a substantial step toward understanding and applying foundation models in cardiovascular imaging for preliminary interpretation of echocardiographic findings.
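EchoCLIP's image-to-text search rests on ranking report embeddings by similarity to an image embedding, as in any CLIP-style model. A toy sketch of that retrieval step, with random vectors standing in for the learned embeddings (the dimensions and data are hypothetical; this is not the EchoCLIP codebase):

```python
import numpy as np

def l2norm(x):
    """Normalize vectors to unit length along the last axis."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def retrieve(image_emb, text_embs):
    """Rank candidate text embeddings by cosine similarity to an image embedding."""
    sims = l2norm(text_embs) @ l2norm(image_emb)
    return np.argsort(-sims)  # best match first

rng = np.random.default_rng(1)
texts = rng.standard_normal((5, 16))               # 5 hypothetical report embeddings
query = texts[3] + 0.05 * rng.standard_normal(16)  # image embedding near report 3
ranking = retrieve(query, texts)
```

The paper's "mean cross-modal retrieval rank in the top 1%" corresponds to where the true report lands in such a ranking over all candidate reports.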


Subjects
Echocardiography , Humans , Echocardiography/methods , Image Interpretation, Computer-Assisted , Artificial Intelligence
17.
Comput Biol Med ; 175: 108519, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38688128

ABSTRACT

Lung cancer seriously threatens human health due to its high lethality and morbidity. Lung adenocarcinoma, in particular, is one of the most common subtypes of lung cancer. Pathological diagnosis is regarded as the gold standard for cancer diagnosis, but traditional manual screening of lung cancer pathology images is time-consuming and error-prone. Computer-aided diagnostic systems have emerged to solve this problem. Current research methods are unable to fully exploit the beneficial features inherent within patches, and they are characterized by high model complexity and significant computational effort. In this study, a deep learning framework called Multi-Scale Network (MSNet) is proposed for the automatic detection of lung adenocarcinoma pathology images. MSNet is designed to efficiently harness the valuable features within data patches while simultaneously reducing model complexity, computational demands, and storage requirements. MSNet employs a dual data-stream input method, combining Swin Transformer and MLP-Mixer models to capture global information between patches and local information within each patch. Subsequently, MSNet uses a Multilayer Perceptron (MLP) module to fuse the local and global features and perform classification, outputting the final detection results. In addition, a dataset of lung adenocarcinoma pathology images containing three categories was created for training and testing the MSNet framework. Experimental results show that the diagnostic accuracy of MSNet for lung adenocarcinoma pathology images is 96.55%. In summary, MSNet has high classification performance and shows effectiveness and potential in the classification of lung adenocarcinoma pathology images.
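The fusion step described above, where an MLP combines the local (within-patch) and global (between-patch) feature streams into a final classification, can be sketched generically. All shapes, weights, and the three-class head are illustrative assumptions, not MSNet's actual parameters:

```python
import numpy as np

def mlp_fuse(local_feat, global_feat, w1, w2):
    """Concatenate local and global feature vectors, then apply a 2-layer MLP head."""
    x = np.concatenate([local_feat, global_feat])  # feature fusion by concatenation
    h = np.maximum(w1 @ x, 0.0)                    # hidden layer + ReLU
    logits = w2 @ h
    e = np.exp(logits - logits.max())
    return e / e.sum()                             # softmax class probabilities

rng = np.random.default_rng(2)
local_feat = rng.standard_normal(64)     # stand-in for per-patch (MLP-Mixer-like) features
global_feat = rng.standard_normal(64)    # stand-in for cross-patch (Transformer-like) features
w1 = rng.standard_normal((32, 128)) * 0.1
w2 = rng.standard_normal((3, 32)) * 0.1  # three categories, as in the paper's dataset
probs = mlp_fuse(local_feat, global_feat, w1, w2)
```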


Subjects
Adenocarcinoma of Lung , Lung Neoplasms , Neural Networks, Computer , Humans , Adenocarcinoma of Lung/diagnostic imaging , Adenocarcinoma of Lung/pathology , Adenocarcinoma of Lung/classification , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/pathology , Lung Neoplasms/classification , Deep Learning , Image Interpretation, Computer-Assisted/methods , Diagnosis, Computer-Assisted/methods
18.
Comput Biol Med ; 175: 108368, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38663351

ABSTRACT

BACKGROUND: The issue of using deep learning to obtain accurate gross tumor volume (GTV) and metastatic lymph node (MLN) segmentation for nasopharyngeal carcinoma (NPC) on heterogeneous magnetic resonance imaging (MRI) images with limited labeling remains unsolved. METHOD: We collected MRI images from 918 patients across three hospitals to develop and validate models, and propose a semi-supervised framework, SIMN, for the fine delineation of multi-center NPC boundaries by integrating uncertainty-based implicit neural representations. The framework uses deep mutual learning between a CNN and a Transformer, incorporating dynamic thresholds. Additionally, domain-adaptive algorithms are employed to enhance performance. RESULTS: SIMN predictions have a high overlap ratio with the ground truth. With only 20% of cases labeled, the average DSC for GTV and MLN are 0.7981 and 0.7804 on the internal test cohorts; 0.7217 and 0.7581 on the external test cohort Wu Zhou Red Cross Hospital; and 0.7004 and 0.7692 on the external test cohort First People Hospital of Foshan. No significant differences are found in DSC, HD95, ASD, and Recall for patients with different clinical categories. Moreover, SIMN outperformed existing classical semi-supervised methods. CONCLUSIONS: SIMN showed highly accurate GTV and MLN segmentation for NPC on multi-center MRI images under semi-supervised learning (SSL) and transfers easily to other centers without fine-tuning, suggesting its potential as a generalized delineation solution for heterogeneous MRI images with limited labels in clinical deployment.
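A core ingredient of semi-supervised frameworks like the one described above is confidence-thresholded pseudo-labeling of unlabeled cases. The sketch below uses a fixed threshold for simplicity, whereas the paper describes dynamic thresholds; the probability values are invented for illustration:

```python
import numpy as np

def pseudo_label(probs, threshold):
    """Keep only predictions whose maximum class probability exceeds the threshold.

    probs: (n_samples, n_classes) array of model softmax outputs.
    Returns the pseudo-labels for the confident samples and the keep mask.
    """
    conf = probs.max(axis=1)             # model confidence per sample
    keep = conf >= threshold             # confident samples become pseudo-labeled
    return np.argmax(probs, axis=1)[keep], keep

# Three unlabeled samples; the middle one is too uncertain to pseudo-label.
probs = np.array([[0.95, 0.05],
                  [0.55, 0.45],
                  [0.10, 0.90]])
labels, keep = pseudo_label(probs, threshold=0.8)
```

In mutual-learning setups such as the CNN/Transformer pair described here, each network's confident predictions typically supervise the other; a dynamic threshold would adjust the 0.8 cutoff over training rather than fix it.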


Subjects
Magnetic Resonance Imaging , Nasopharyngeal Carcinoma , Nasopharyngeal Neoplasms , Humans , Magnetic Resonance Imaging/methods , Nasopharyngeal Carcinoma/diagnostic imaging , Nasopharyngeal Neoplasms/diagnostic imaging , Male , Female , Middle Aged , Adult , Deep Learning , Algorithms , Image Interpretation, Computer-Assisted/methods , Neural Networks, Computer
19.
Eur J Radiol ; 175: 111470, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38640822

ABSTRACT

PURPOSE: To explore diagnostic deep learning for optimizing the prostate MRI protocol by assessing the diagnostic efficacy of MRI sequences. METHOD: This retrospective study included 840 patients with a biparametric prostate MRI scan. The MRI protocol included a T2-weighted image, three DWI sequences (b50, b400, and b800 s/mm2), a calculated ADC map, and a calculated b1400 sequence. Two accelerated MRI protocols were simulated, using only two acquired b-values to calculate the ADC and b1400. Deep learning models were trained to detect prostate cancer lesions on the accelerated and full protocols. The diagnostic performances of the protocols were compared on the patient level with the area under the receiver operating characteristic (AUROC), using DeLong's test, and on the lesion level with the partial area under the free response operating characteristic (pAUFROC), using a permutation test. The results were validated by expert radiologists. RESULTS: No significant differences in diagnostic performance were found between the accelerated protocols and the full bpMRI baseline. Omitting b800 reduced DWI scan time by 53%, with a performance difference of +0.01 AUROC (p = 0.20) and -0.03 pAUFROC (p = 0.45). Omitting b400 reduced DWI scan time by 32%, with a performance difference of -0.01 AUROC (p = 0.65) and +0.01 pAUFROC (p = 0.73). Expert radiologists corroborated these findings. CONCLUSIONS: This study shows that deep learning can assess the diagnostic efficacy of MRI sequences by comparing prostate MRI protocols on diagnostic accuracy. Omitting either the b400 or the b800 DWI sequence can optimize the prostate MRI protocol by reducing scan time without compromising diagnostic quality.
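The "calculated ADC" and "calculated b1400" images mentioned in this protocol follow from the standard mono-exponential DWI model, S(b) = S0·exp(-b·ADC). A sketch of how an ADC map and a synthetic high-b image can be derived from two acquired b-values (the signal values below are illustrative numbers, not study data):

```python
import numpy as np

def adc_map(s_low, s_high, b_low, b_high):
    """ADC from two DWI acquisitions under the mono-exponential signal model."""
    return np.log(s_low / s_high) / (b_high - b_low)

def extrapolate(s_low, adc, b_low, b_target):
    """Synthesize a higher-b-value image, e.g. a calculated b1400."""
    return s_low * np.exp(-adc * (b_target - b_low))

s50  = np.array([1000.0, 800.0])   # per-voxel signal at b = 50 s/mm2
s800 = np.array([400.0, 500.0])    # per-voxel signal at b = 800 s/mm2
adc = adc_map(s50, s800, 50, 800)          # ADC in mm2/s units of the b-values
s1400 = extrapolate(s50, adc, 50, 1400)    # calculated b1400 image
```

Because only two b-values are needed to fit this model, either b400 or b800 can be dropped and the ADC and b1400 recomputed from the remaining pair, which is exactly the simulation the accelerated protocols perform.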


Subjects
Deep Learning , Magnetic Resonance Imaging , Prostatic Neoplasms , Humans , Male , Prostatic Neoplasms/diagnostic imaging , Retrospective Studies , Magnetic Resonance Imaging/methods , Middle Aged , Aged , Image Interpretation, Computer-Assisted/methods , Reproducibility of Results , Sensitivity and Specificity
20.
Comput Methods Programs Biomed ; 250: 108178, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38652995

ABSTRACT

BACKGROUND AND OBJECTIVE: Gland segmentation of pathological images is an essential but challenging step for adenocarcinoma diagnosis. Although deep learning methods have recently made tremendous progress in gland segmentation, they have not given satisfactory boundary and region segmentation results for adjacent glands. Such glands usually differ greatly in glandular appearance, and the statistical distributions of the training and test sets in deep learning are inconsistent. These problems prevent networks from generalizing well to the test dataset, complicating gland segmentation and early cancer diagnosis. METHODS: To address these problems, we propose a Variational Energy Network, VENet, with a traditional variational energy Lv loss for gland segmentation of pathological images and early gastric cancer detection in whole slide images (WSIs). It effectively integrates a variational mathematical model with the data-adaptability of deep learning methods to balance boundary and region segmentation. Furthermore, it can effectively segment and classify glands in large WSIs using reliable nucleus-width and nucleus-to-cytoplasm-ratio features. RESULTS: VENet was evaluated on the 2015 MICCAI Gland Segmentation challenge (GlaS) dataset, the Colorectal Adenocarcinoma Glands (CRAG) dataset, and a self-collected Nanfang Hospital dataset. Compared with state-of-the-art methods, our method achieved excellent performance on GlaS Test A (object Dice 0.9562, object F1 0.9271, object Hausdorff distance 73.13), GlaS Test B (object Dice 0.9495, object F1 0.9560, object Hausdorff distance 59.63), and CRAG (object Dice 0.9508, object F1 0.9294, object Hausdorff distance 28.01). For the Nanfang Hospital dataset, our method achieved a kappa of 0.78, an accuracy of 0.90, a sensitivity of 0.98, and a specificity of 0.80 on the classification task over the 69 test WSIs.
CONCLUSIONS: The experimental results show that the proposed model accurately predicts boundaries and outperforms state-of-the-art methods. It can be applied to the early diagnosis of gastric cancer by detecting regions of high-grade gastric intraepithelial neoplasia in WSIs, which can assist pathologists in analyzing large WSIs and making accurate diagnostic decisions.
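The balance between region and boundary terms that a variational energy encodes can be illustrated with a simplified, generic Chan-Vese-style energy. This is not the paper's exact Lv loss; the images, masks, and weighting are hypothetical:

```python
import numpy as np

def variational_energy(mask, image, lam=1.0):
    """Simplified variational energy: piecewise-constant region fit plus a
    boundary-length penalty (a generic Chan-Vese-style energy, for intuition only)."""
    inside, outside = image[mask], image[~mask]
    c1 = inside.mean()                  # mean intensity inside the mask
    c2 = outside.mean()                 # mean intensity outside the mask
    region = ((inside - c1) ** 2).sum() + ((outside - c2) ** 2).sum()
    m = mask.astype(float)
    gy, gx = np.gradient(m)
    length = np.sqrt(gx ** 2 + gy ** 2).sum()   # perimeter proxy for the boundary term
    return region + lam * length

# A bright 8x8 square on a dark background, plus a well- and a badly-placed mask.
image = np.zeros((16, 16)); image[4:12, 4:12] = 1.0
good = image.astype(bool)                         # mask aligned with the square
bad = np.zeros_like(good); bad[0:8, 0:8] = True   # misaligned mask
```

Minimizing such an energy rewards masks whose interior and exterior are each homogeneous (the region terms) while discouraging ragged boundaries (the length term), which is the boundary/region balance the abstract refers to.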


Subjects
Deep Learning , Early Detection of Cancer , Stomach Neoplasms , Humans , Stomach Neoplasms/diagnostic imaging , Stomach Neoplasms/pathology , Early Detection of Cancer/methods , Adenocarcinoma/diagnostic imaging , Adenocarcinoma/pathology , Algorithms , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Image Interpretation, Computer-Assisted/methods