1.
J Imaging Inform Med; 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38980627

ABSTRACT

Accurate image classification and retrieval are important for clinical diagnosis and treatment decision-making. The recent contrastive language-image pre-training (CLIP) model has shown remarkable proficiency in understanding natural images. Drawing inspiration from CLIP, pathology-dedicated CLIP (PathCLIP) has been developed, trained on over 200,000 image-text pairs. While the performance of PathCLIP is impressive, its robustness under a wide range of image corruptions remains unknown. Therefore, we conduct an extensive evaluation to analyze the performance of PathCLIP on corrupted images from the osteosarcoma and WSSS4LUAD datasets. In our experiments, we introduce eleven corruption types at various settings: brightness, contrast, defocus, resolution, saturation, hue, markup, deformation, incompleteness, rotation, and flipping. Through experiments, we find that PathCLIP surpasses OpenAI-CLIP and the pathology language-image pre-training (PLIP) model in zero-shot classification. It is relatively robust to image corruptions including contrast, saturation, incompleteness, and orientation factors. Among the eleven corruptions, hue, markup, deformation, defocus, and resolution can cause relatively severe performance fluctuations in PathCLIP. This indicates that ensuring image quality is crucial before conducting a clinical test. Additionally, we assess the robustness of PathCLIP in the task of image-to-image retrieval, revealing that PathCLIP performs less effectively than PLIP on osteosarcoma but better on WSSS4LUAD under diverse corruptions. Overall, PathCLIP presents impressive zero-shot classification and retrieval performance for pathology images, but appropriate care needs to be taken when using it.
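A corruption suite like the one described above can be sketched on a normalized image array; the subset of corruption types and the severity conventions below are illustrative assumptions, not the study's exact settings:

```python
import numpy as np

def corrupt(img: np.ndarray, kind: str, severity: float = 1.0) -> np.ndarray:
    """Apply one corruption type to an image patch.

    img: float array in [0, 1], shape (H, W, 3).
    """
    if kind == "brightness":
        return np.clip(img * severity, 0.0, 1.0)
    if kind == "contrast":
        # Scale deviation from the per-channel mean.
        mean = img.mean(axis=(0, 1), keepdims=True)
        return np.clip((img - mean) * severity + mean, 0.0, 1.0)
    if kind == "incompleteness":
        # Zero out the bottom fraction of the patch.
        out = img.copy()
        h = img.shape[0]
        out[int(h * (1.0 - severity)):] = 0.0
        return out
    if kind == "rotation":
        # Right-angle rotations only in this sketch.
        return np.rot90(img, k=int(severity))
    if kind == "flipping":
        return np.flip(img, axis=1)
    raise ValueError(f"unknown corruption: {kind}")
```

In a robustness evaluation of this kind, each corrupted patch would then be passed through the frozen image encoder and zero-shot accuracy recomputed per corruption type and severity.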

2.
Eur Radiol; 34(3): 2084-2092, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37658141

ABSTRACT

OBJECTIVES: To develop a deep learning-based method for contrast-enhanced breast lesion detection in ultrafast screening MRI. MATERIALS AND METHODS: A total of 837 breast MRI exams of 488 consecutive patients were included. Lesion locations were independently annotated in the maximum intensity projection (MIP) image of the last time-resolved angiography with stochastic trajectories (TWIST) sequence for each individual breast, resulting in 265 lesions (190 benign, 75 malignant) in 163 breasts (133 women). YOLOv5 models were fine-tuned using training sets containing the same number of MIP images with and without lesions. A long short-term memory (LSTM) network was employed to help reduce false positive predictions. The integrated system was then evaluated on test sets containing enriched uninvolved breasts during cross-validation to mimic the performance in a screening scenario. RESULTS: In five-fold cross-validation, the YOLOv5x model showed a sensitivity of 0.95, 0.97, 0.98, and 0.99, with 0.125, 0.25, 0.5, and 1 false positive per breast, respectively. The LSTM network removed 15.5% of the false positive predictions from the YOLO model, and the positive predictive value increased from 0.22 to 0.25. CONCLUSIONS: A fine-tuned YOLOv5x model can detect breast lesions on ultrafast MRI with high sensitivity in a screening population, and the output of the model can be further refined by an LSTM network to reduce the number of false positive predictions. CLINICAL RELEVANCE STATEMENT: The proposed integrated system would make the ultrafast MRI screening process more effective by assisting radiologists in prioritizing suspicious examinations and supporting the diagnostic workup. KEY POINTS: • Deep convolutional neural networks can be utilized to automatically pinpoint breast lesions in screening MRI with high sensitivity. • False positive predictions increased significantly when the detection models were tested on highly unbalanced test sets with more normal scans.
• Dynamic enhancement patterns of breast lesions during contrast inflow learned by the long short-term memory networks helped to reduce false positive predictions.
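The FROC-style operating points reported above (sensitivity at a fixed number of false positives per breast) follow from ranking all predicted boxes by confidence; a minimal sketch, with all variable names and the small example values being assumptions:

```python
import numpy as np

def froc_point(scores, is_tp, n_lesions, n_breasts, fp_per_breast):
    """Sensitivity at a fixed false-positive rate.

    scores: confidence of each predicted box.
    is_tp: bool array, True if the box matched a ground-truth lesion.
    n_lesions: total number of ground-truth lesions in the test set.
    """
    order = np.argsort(scores)[::-1]          # rank boxes by confidence
    tp = np.cumsum(is_tp[order])              # true positives so far
    fp = np.cumsum(~is_tp[order])             # false positives so far
    allowed = fp_per_breast * n_breasts       # FP budget at this operating point
    idx = np.searchsorted(fp, allowed, side="right") - 1
    if idx < 0:
        return 0.0
    return tp[idx] / n_lesions
```

Sweeping `fp_per_breast` over values such as 0.125, 0.25, 0.5, and 1 traces out the operating points of the detector.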


Subjects
Breast Neoplasms, Contrast Media, Female, Humans, Contrast Media/pharmacology, Breast/pathology, Magnetic Resonance Imaging/methods, Neural Networks, Computer, Time, Breast Neoplasms/diagnostic imaging, Breast Neoplasms/pathology
3.
Eur Radiol; 32(12): 8706-8715, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35614363

ABSTRACT

OBJECTIVES: To investigate the feasibility of automatically identifying normal scans in ultrafast breast MRI with artificial intelligence (AI) to increase efficiency and reduce workload. METHODS: In this retrospective analysis, 837 breast MRI examinations performed on 438 women from April 2016 to October 2019 were included. The left and right breasts in each examination were labelled normal (without suspicious lesions) or abnormal (with suspicious lesions) based on final interpretation. Maximum intensity projection (MIP) images of each breast were then used to train a deep learning model. A high-sensitivity threshold was calculated based on the detection error trade-off (DET) curve on the validation set. The performance of the model was evaluated by receiver operating characteristic analysis of the independent test set. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) with the high-sensitivity threshold were calculated. RESULTS: The independent test set consisted of 178 examinations of 149 patients (mean age, 44 years ± 14 [standard deviation]). The trained model achieved an AUC of 0.81 (95% CI: 0.75-0.88) on the independent test set. Applying a threshold of 0.25 yielded a sensitivity of 98% (95% CI: 90%; 100%), an NPV of 98% (95% CI: 89%; 100%), a workload reduction of 15.7%, and a scan time reduction of 16.6%. CONCLUSION: This deep learning model has a high potential to help identify normal scans in ultrafast breast MRI and thereby reduce radiologists' workload and scan time. KEY POINTS: • Deep learning in TWIST may eliminate the necessity of additional sequences for identifying normal breasts during MRI screening. • Workload and scanning time reductions of 15.7% and 16.6%, respectively, could be achieved at the cost of one false negative prediction (1 of 55).
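The high-sensitivity threshold and the workload-reduction figure above can be derived from validation scores roughly as follows; this is a sketch under assumed score conventions (higher score = more suspicious), not the authors' exact DET-based procedure:

```python
import numpy as np

def triage_threshold(scores, labels, target_sensitivity=0.98):
    """Largest threshold keeping sensitivity >= target on a validation set.

    scores: model outputs per breast; labels: 1 = abnormal, 0 = normal.
    """
    abnormal = np.sort(scores[labels == 1])
    # Number of abnormal cases we may miss while staying at target sensitivity.
    k = int(np.floor((1.0 - target_sensitivity) * len(abnormal)))
    return abnormal[k]  # all but the k lowest-scoring abnormal cases pass

def workload_reduction(scores, thr):
    """Fraction of breasts scored below threshold, i.e. triaged as normal."""
    return float((scores < thr).mean())
```

Breasts below the threshold would be dismissed as normal; the dismissed fraction is the workload reduction quoted in the abstract.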


Subjects
Breast Neoplasms, Deep Learning, Humans, Female, Adult, Artificial Intelligence, Retrospective Studies, Breast/diagnostic imaging, Breast/pathology, Magnetic Resonance Imaging/methods, Breast Neoplasms/diagnostic imaging, Breast Neoplasms/pathology
4.
Cancers (Basel); 14(8), 2022 Apr 18.
Article in English | MEDLINE | ID: mdl-35454949

ABSTRACT

PURPOSE: To investigate the feasibility of using deep learning methods to differentiate benign from malignant breast lesions in ultrafast MRI with both temporal and spatial information. METHODS: A total of 173 single breasts of 122 women (151 examinations) with lesions above 5 mm were retrospectively included. A total of 109 out of 173 lesions were benign. Maximum intensity projection (MIP) images were generated from each of the 14 contrast-enhanced T1-weighted acquisitions in the ultrafast MRI scan. A 2D convolutional neural network (CNN) and a long short-term memory (LSTM) network were employed to extract morphological and temporal features, respectively. The 2D CNN model was trained with the MIPs from the last four acquisitions to ensure the visibility of the lesions, while the LSTM model took MIPs of an entire scan as input. The performance of each model and their combination were evaluated with 100-times repeated stratified four-fold cross-validation. Those models were then compared with models developed with standard DCE-MRI using the same data split. RESULTS: In the differentiation between benign and malignant lesions, the ultrafast MRI-based 2D CNN achieved a mean AUC of 0.81 ± 0.06, and the LSTM network achieved a mean AUC of 0.78 ± 0.07; their combination showed a mean AUC of 0.83 ± 0.06 in the cross-validation. The mean AUC values were significantly higher for ultrafast MRI-based models than for standard DCE-MRI-based models. CONCLUSION: Deep learning models developed with ultrafast breast MRI achieved higher performances than standard DCE-MRI for malignancy discrimination. The improved AUC values of the combined models indicate an added value of temporal information extracted by the LSTM model in breast lesion characterization.
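The MIP inputs described above are simple per-time-point maximum projections of each acquisition; a minimal sketch, where the (T, Z, Y, X) array layout is an assumption:

```python
import numpy as np

def mip_series(volumes: np.ndarray) -> np.ndarray:
    """volumes: (T, Z, Y, X) stack of ultrafast DCE acquisitions.

    Returns (T, Y, X): one maximum intensity projection per time point,
    projecting along the slice axis Z.
    """
    return volumes.max(axis=1)

def split_inputs(mips: np.ndarray):
    """CNN sees only the last four MIPs (lesions fully enhanced);
    the LSTM sees the whole enhancement curve of MIPs."""
    return mips[-4:], mips
```

This mirrors the setup in the abstract: morphology from late, well-enhanced frames, and enhancement dynamics from the full sequence.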

5.
Med Phys; 48(2): 733-744, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33300162

ABSTRACT

PURPOSE: Early detection of lung cancer is important since it can increase patients' chances of survival. To detect nodules accurately during screening, radiologists commonly take the axial, coronal, and sagittal planes into account, rather than solely the axial plane in clinical evaluation. Inspired by this clinical practice, the paper aims to develop an accurate deep learning framework for nodule detection by combining multiple planes. METHODS: The nodule detection system is designed in two stages: multiplanar nodule candidate detection and multiscale false positive (FP) reduction. At the first stage, a deeply supervised encoder-decoder network is trained on axial, coronal, and sagittal slices for the candidate detection task. All possible nodule candidates from the three different planes are merged. To further refine results, a three-dimensional multiscale dense convolutional neural network that extracts multiscale contextual information is applied to remove non-nodules. In the public LIDC-IDRI dataset, 888 computed tomography scans with 1186 nodules accepted by at least three of four radiologists are selected to train and evaluate our proposed system via a tenfold cross-validation scheme. The free-response receiver operating characteristic curve is used for performance assessment. RESULTS: The proposed system achieves a sensitivity of 94.2% with 1.0 FP/scan and a sensitivity of 96.0% with 2.0 FPs/scan. Although it is difficult to detect small nodules (i.e., <6 mm), our designed CAD system reaches a sensitivity of 93.4% (95.0%) for these small nodules at an overall FP rate of 1.0 (2.0) FPs/scan. At the nodule candidate detection stage, results show that the system with a multiplanar method is capable of detecting more nodules compared to using a single plane. CONCLUSION: Our approach achieves good performance not only for small nodules but also for large lesions on this dataset.
This demonstrates the effectiveness of our developed CAD system for lung nodule detection.
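The multiplanar idea, slicing one CT volume three ways and then merging candidates found in each plane, can be sketched as follows; the greedy distance-based merge is an illustrative assumption, not necessarily the authors' exact merging rule:

```python
import numpy as np

def planar_slices(ct: np.ndarray) -> dict:
    """ct: (Z, Y, X) volume. Re-axis it so each view slices along axis 0."""
    return {
        "axial": ct,                       # slices along Z
        "coronal": ct.transpose(1, 0, 2),  # slices along Y
        "sagittal": ct.transpose(2, 0, 1), # slices along X
    }

def merge_candidates(cands, min_dist=3.0):
    """Greedy merge of (z, y, x, score) candidates pooled from all planes:
    keep the highest-scoring candidate within min_dist voxels of a cluster."""
    kept = []
    for c in sorted(cands, key=lambda c: -c[3]):
        if all(np.linalg.norm(np.array(c[:3]) - np.array(k[:3])) >= min_dist
               for k in kept):
            kept.append(c)
    return kept
```

The merged candidate list would then go to the 3D multiscale FP-reduction network for final scoring.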


Subjects
Lung Neoplasms, Solitary Pulmonary Nodule, Humans, Lung, Lung Neoplasms/diagnostic imaging, Neural Networks, Computer, Radiographic Image Interpretation, Computer-Assisted, Solitary Pulmonary Nodule/diagnostic imaging, Tomography, X-Ray Computed
6.
Chemosphere; 259: 127445, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32593005

ABSTRACT

Iron oxide nanoparticle (nFe2O3)-filled materials have been widely employed in various products, and their effects on plants have attracted considerable attention because of their potential release into the environment. Currently, most studies reporting the influences of iron-bearing nanoparticles on plants focus on root or seed exposure. However, exposure of plants to atmospheric iron-bearing nanoparticles through the leaves, and its impacts, are still not well understood. This study focused on the uptake, translocation, and effects of foliar exposure of nFe2O3 on wheat seedlings. Wheat seedlings were exposed via foliar application to various amounts of nFe2O3 (0, 60, and 180 µg per plant) for 1, 7, 14, or 21 d. Our results demonstrated that after exposure for 21 d, the concentrations of Fe in leaves, stems, and roots were 1100, 280, and 160 µg kg-1, respectively. Scanning electron microscopy (SEM) and energy dispersive spectroscopy (EDS), as well as backscattered electron (BSE) images, revealed that the stomatal opening was likely the pathway for nFe2O3 uptake. Analysis of the transfer rate, i.e., the translocation of Fe from leaves to stems and roots, suggested the involvement of plant Fe regulation processes. In particular, the antioxidant enzymatic activities and malondialdehyde levels in leaves were modified, which was ascribed to the excessive hydroxyl radical (OH) generated via the Fenton-like reaction mediated by nFe2O3. Finally, the OH facilitated the degradation of chlorophyll, posing a negative impact on photosynthesis and thus inhibiting biomass production. These findings help in understanding the fate and physiological effects of atmospheric nFe2O3 in crops.


Subjects
Ferric Compounds/toxicity, Nanoparticles/toxicity, Photosynthesis/drug effects, Triticum/drug effects, Antioxidants/metabolism, Biological Transport, Biomass, Chlorophyll/metabolism, Ferric Compounds/metabolism, Iron/metabolism, Plant Leaves/metabolism, Plant Roots/metabolism, Seedlings/drug effects, Seeds/metabolism, Triticum/metabolism, Triticum/physiology
7.
Laryngoscope; 130(11): E686-E693, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32068890

ABSTRACT

OBJECTIVES/HYPOTHESIS: To develop a deep-learning-based computer-aided diagnosis system for distinguishing laryngeal neoplasms (benign, precancerous lesions, and cancer) and improve the clinician-based accuracy of diagnostic assessments of laryngoscopy findings. STUDY DESIGN: Retrospective study. METHODS: A total of 24,667 laryngoscopy images (normal, vocal nodule, polyps, leukoplakia and malignancy) were collected to develop and test a convolutional neural network (CNN)-based classifier. A comparison between the proposed CNN-based classifier and the clinical visual assessments (CVAs) by 12 otolaryngologists was conducted. RESULTS: In the independent testing dataset, an overall accuracy of 96.24% was achieved; for leukoplakia, benign, malignancy, normal, and vocal nodule, the sensitivity and specificity were 92.8% vs. 98.9%, 97% vs. 99.7%, 89% vs. 99.3%, 99.0% vs. 99.4%, and 97.2% vs. 99.1%, respectively. Furthermore, when compared with CVAs on the randomly selected test dataset, the CNN-based classifier outperformed physicians for most laryngeal conditions, with striking improvements in the ability to distinguish nodules (98% vs. 45%, P < .001), polyps (91% vs. 86%, P < .001), leukoplakia (91% vs. 65%, P < .001), and malignancy (90% vs. 54%, P < .001). CONCLUSIONS: The CNN-based classifier can provide a valuable reference for the diagnosis of laryngeal neoplasms during laryngoscopy, especially for distinguishing benign, precancerous, and cancer lesions. LEVEL OF EVIDENCE: NA Laryngoscope, 130:E686-E693, 2020.
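Per-class sensitivity/specificity pairs like those reported above follow mechanically from a multi-class confusion matrix, treating each class one-vs-rest; a small sketch:

```python
import numpy as np

def per_class_sens_spec(cm: np.ndarray):
    """cm[i, j]: count of class-i samples predicted as class j.

    Returns (sensitivity, specificity) arrays, one entry per class,
    computed one-vs-rest.
    """
    tp = np.diag(cm)
    fn = cm.sum(axis=1) - tp   # class-i samples predicted as something else
    fp = cm.sum(axis=0) - tp   # other samples predicted as class i
    tn = cm.sum() - tp - fn - fp
    return tp / (tp + fn), tn / (tn + fp)
```

Applied to the five laryngoscopy classes, this yields the paired percentages (e.g., 92.8% vs. 98.9% for leukoplakia) quoted in the results.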


Subjects
Deep Learning/statistics & numerical data, Image Interpretation, Computer-Assisted/statistics & numerical data, Laryngeal Neoplasms/diagnostic imaging, Laryngoscopy/statistics & numerical data, Otolaryngologists/statistics & numerical data, Adult, Female, Humans, Laryngoscopy/methods, Male, Reproducibility of Results, Retrospective Studies, Sensitivity and Specificity