Results 1 - 10 of 10
1.
J Med Imaging (Bellingham); 11(3): 034501, 2024 May.
Article in English | MEDLINE | ID: mdl-38737493

ABSTRACT

Purpose: Current clinical assessment qualitatively describes background parenchymal enhancement (BPE) as minimal, mild, moderate, or marked based on the visually perceived volume and intensity of enhancement in normal fibroglandular breast tissue in dynamic contrast-enhanced (DCE)-MRI. Tumor enhancement may be included within the visual assessment of BPE, inflating the BPE estimate due to angiogenesis within the tumor. Using a dataset of 426 MRIs, we developed an automated method to segment the breasts, electronically remove lesions, and calculate scores that estimate BPE levels. Approach: A U-Net was trained for breast segmentation from DCE-MRI maximum intensity projection (MIP) images. Fuzzy c-means clustering was used to segment lesions, and the lesion volume was removed prior to creating projections. U-Net outputs were applied to create projection images of both breasts, the affected breast, and the unaffected breast, before and after lesion removal. BPE scores were calculated from various projection images, including MIPs and average intensity projections of the first or second postcontrast subtraction MRIs, to evaluate the effect of varying image parameters on automatic BPE assessment. Receiver operating characteristic analysis was performed to determine the predictive value of the computed scores in BPE level classification tasks relative to radiologist ratings. Results: Statistically significant trends were found between radiologist BPE ratings and calculated BPE scores for all breast regions (Kendall correlation, p<0.001). Scores from all breast regions performed significantly better than guessing (p<0.025, z-test). Results failed to show a statistically significant difference in performance with and without lesion removal. BPE scores of the affected breast in the second postcontrast subtraction MIP after lesion removal performed significantly better than random guessing across the various viewing projections and DCE time points. Conclusions: The results demonstrate the potential for automatic BPE scoring to serve as a quantitative, objective measure for BPE level classification from breast DCE-MRI without the influence of lesion enhancement.
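As an illustration of the lesion-removal and scoring step described above, here is a minimal Python/NumPy sketch, assuming a postcontrast subtraction volume with precomputed breast and lesion masks; defining the score as the mean projected enhancement over the segmented breast is our assumption, not the paper's exact formula.

```python
import numpy as np

def bpe_score(subtraction_volume, breast_mask, lesion_mask):
    # Electronically "remove" the lesion by zeroing its voxels (assumed convention)
    cleaned = np.where(lesion_mask, 0.0, subtraction_volume)

    # Maximum intensity projection; axis 0 is assumed to be the projection axis
    mip = cleaned.max(axis=0)
    breast_2d = breast_mask.max(axis=0).astype(bool)

    # Score: average projected enhancement over the segmented breast region
    return float(mip[breast_2d].mean())
```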

2.
J Med Imaging (Bellingham); 10(6): 064502, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37990686

ABSTRACT

Purpose: Given the dependence of radiomics-based computer-aided diagnosis artificial intelligence on accurate lesion segmentation, we assessed the performance of 2D and 3D U-Nets in breast lesion segmentation on dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) relative to fuzzy c-means (FCM) and radiologist segmentations. Approach: Three segmentation algorithms (FCM clustering and 2D and 3D U-Net convolutional neural networks) were investigated using 994 unique breast lesions imaged with DCE-MRI. Center slice segmentations produced by FCM, the 2D U-Net, and the 3D U-Net were evaluated using radiologist segmentations as truth, and volumetric segmentations produced by stacked 2D U-Net slices and by the 3D U-Net were compared using FCM as a surrogate reference standard. Fivefold cross-validation by lesion was conducted on the U-Nets; the Dice similarity coefficient (DSC) and Hausdorff distance (HD) served as performance metrics. Segmentation performance was compared across different input image and lesion types. Results: The 2D U-Net outperformed the 3D U-Net for center slice (DSC, HD: p<0.001) and volume segmentations (DSC, HD: p<0.001). The 2D U-Net outperformed FCM in center slice segmentation (DSC: p<0.001). Second postcontrast subtraction images yielded greater performance than first postcontrast subtraction images with both the 2D and 3D U-Nets (DSC: p<0.05). Additionally, mass segmentation outperformed nonmass segmentation on first and second postcontrast subtraction images with both the 2D and 3D U-Nets (DSC, HD: p<0.001). Conclusions: These results suggest that the 2D U-Net is promising for segmenting mass and nonmass enhancing breast lesions from first and second postcontrast subtraction MRIs and thus could be an effective alternative to FCM or the 3D U-Net.
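For reference, the two performance metrics can be computed as follows; this is a generic sketch using the standard definitions (SciPy's directed Hausdorff distance over all foreground voxels), which may differ in detail from the paper's implementation.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred, truth):
    # DSC = 2|A ∩ B| / (|A| + |B|) on binary masks
    pred, truth = pred.astype(bool), truth.astype(bool)
    overlap = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * overlap / denom if denom else 1.0

def hausdorff_distance(pred, truth):
    # Symmetric Hausdorff distance between the two foreground point sets
    p, t = np.argwhere(pred), np.argwhere(truth)
    return max(directed_hausdorff(p, t)[0], directed_hausdorff(t, p)[0])
```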

3.
J Med Imaging (Bellingham); 10(4): 044504, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37608852

ABSTRACT

Purpose: Image-based prediction of coronavirus disease 2019 (COVID-19) severity and resource needs can be an important means of addressing the COVID-19 pandemic. In this study, we propose an artificial intelligence/machine learning (AI/ML) COVID-19 prognosis method to predict patients' need for intensive care by analyzing chest X-ray radiography (CXR) images using deep learning. Approach: The dataset consisted of 8357 CXR exams from 5046 COVID-19-positive patients, as confirmed by reverse transcription polymerase chain reaction (RT-PCR) tests for the SARS-CoV-2 virus, with a training/validation/test split of 64%/16%/20% at the patient level. Our model involved a DenseNet121 network with a sequential transfer learning technique employed to train on a sequence of gradually more specific and complex tasks: (1) fine-tuning a model pretrained on ImageNet using a previously established CXR dataset with a broad spectrum of pathologies; (2) refining on another established dataset to detect pneumonia; and (3) fine-tuning using our in-house training/validation datasets to predict patients' need for intensive care within 24, 48, 72, and 96 h following the CXR exam. The classification performance was evaluated on our independent test set (CXR exams of 1048 patients) using the area under the receiver operating characteristic curve (AUC) as the figure of merit in the task of distinguishing between COVID-19-positive patients who required intensive care following the imaging exam and those who did not. Results: Our proposed AI/ML model achieved an AUC (95% confidence interval) of 0.78 (0.74, 0.81) when predicting the need for intensive care 24 h in advance, and at least 0.76 (0.73, 0.80) for 48 h or more in advance, using predictions based on the AI prognostic marker derived from CXR images. Conclusions: This AI/ML model for predicting patients' need for intensive care has the potential to support both clinical decision-making and resource management.
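A minimal sketch of the sequential (cascaded) transfer-learning setup, assuming a Keras DenseNet121 backbone; the head size, input shape, and weight-reloading mechanics are illustrative assumptions rather than the paper's exact configuration.

```python
import tensorflow as tf

def build_stage(num_outputs, weights_path=None):
    # DenseNet121 backbone: ImageNet weights at stage 1, then each later
    # stage reloads the previous stage's fine-tuned weights
    base = tf.keras.applications.DenseNet121(
        weights="imagenet", include_top=False,
        input_shape=(224, 224, 3), pooling="avg")
    head = tf.keras.layers.Dense(num_outputs, activation="sigmoid")(base.output)
    model = tf.keras.Model(base.input, head)
    if weights_path:
        # Reload backbone weights from the previous stage, skipping the new head
        model.load_weights(weights_path, by_name=True, skip_mismatch=True)
    return model

# Stage 1: broad-spectrum CXR pathology labels
# Stage 2: pneumonia detection, initialized from stage 1
# Stage 3: intensive-care need within 24/48/72/96 h, initialized from stage 2
```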

4.
Phys Med Biol; 68(7), 2023 Mar 23.
Article in English | MEDLINE | ID: mdl-36716497

ABSTRACT

Objective. Developing machine learning models for clinical applications from scratch can be a cumbersome task requiring varying levels of expertise; even seasoned developers and researchers often face incompatible frameworks and data preparation issues. This is further complicated in diagnostic radiology and oncology applications, given the heterogeneous nature of the input data and the specialized task requirements. Our goal is to provide clinicians, researchers, and early AI developers with a modular, flexible, and user-friendly software tool that lets them explore, train, and test AI algorithms and interpret their model results. This latter step involves incorporating interpretability and explainability methods for visualizing performance and interpreting predictions across the different neural network layers of a deep learning algorithm. Approach. To demonstrate the proposed tool, we developed the CRP10 AI Application Interface (CRP10AII) as part of the MIDRC consortium. CRP10AII is based on the Django web framework in Python. In combination with a data manager platform (a data commons such as Gen3), CRP10AII provides a comprehensive yet easy-to-use machine/deep learning analytics tool. The tool allows users to test models, visualize results, and interpret how and why a deep learning model performs as it does. The major highlight of CRP10AII is its capability for visualization and interpretability of otherwise black-box AI algorithms. Results. CRP10AII provides many convenient features for model building and evaluation, including: (1) querying and acquiring data for the specific application (e.g., classification, segmentation) from the data commons platform (here, Gen3); (2) training AI models from scratch or using pretrained models (e.g., VGGNet, AlexNet, BERT) for transfer learning, then testing model predictions, assessing performance, and evaluating receiver operating characteristic curves; (3) interpreting AI model predictions using methods such as SHAP and LIME values; and (4) visualizing model learning through heatmaps and activation maps of individual layers of the neural network. Significance. Inexperienced users can more swiftly preprocess data, build and train AI models on their own use cases, and further visualize and explore these AI models as part of this pipeline, all in an end-to-end manner. CRP10AII will be provided as an open-source tool, and we expect to continue developing it based on user feedback.
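To make step (3) concrete, here is a hedged sketch of SHAP-based interpretation of a deep model; it uses the public shap library directly, with toy model and data stand-ins, and is not CRP10AII's actual API.

```python
import numpy as np
import shap
import tensorflow as tf

# Toy stand-ins; in CRP10AII the model and images would come from the
# training and Gen3 query steps (our assumption for illustration)
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
background = np.random.rand(16, 32, 32, 1).astype("float32")
test_images = np.random.rand(4, 32, 32, 1).astype("float32")

explainer = shap.DeepExplainer(model, background)   # SHAP attributions for a deep model
shap_values = explainer.shap_values(test_images)
shap.image_plot(shap_values, test_images)           # heatmap-style visualization
```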


Subject(s)
Algorithms; Neural Networks, Computer; Software; Machine Learning; ROC Curve
5.
Sci Rep; 13(1): 1187, 2023 Jan 21.
Article in English | MEDLINE | ID: mdl-36681685

ABSTRACT

In addition to lung cancer, other thoracic abnormalities, such as emphysema, can be visualized within low-dose CT (LDCT) scans initially obtained in cancer screening programs; opportunistic evaluation of these diseases may therefore be highly valuable. However, manual assessment of each scan is tedious and often subjective, so we developed an automatic, rapid computer-aided diagnosis system for emphysema using attention-based multiple instance deep learning and 865 LDCTs. In the task of determining whether a CT scan presented with emphysema, our novel Transfer AMIL approach yielded an area under the ROC curve of 0.94 ± 0.04, a statistically significant improvement over the other methods evaluated in our study according to the DeLong test with correction for multiple comparisons. Further, from our novel attention weight curves, we found that the upper lung demonstrated a stronger influence in all scan classes, indicating that the model prioritized upper lobe information. Overall, our novel Transfer AMIL method yielded high performance and provided interpretable information by identifying the slices most influential to the classification decision, demonstrating strong potential for clinical implementation.
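The core of attention-based multiple instance learning is a learned weighting over slice-level features; the sketch below follows that general recipe in PyTorch (the feature dimensions and single-head attention design are our assumptions, not the exact Transfer AMIL architecture).

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Attention pooling over per-slice features; sizes are illustrative assumptions."""
    def __init__(self, feat_dim=512, attn_dim=128):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, attn_dim), nn.Tanh(), nn.Linear(attn_dim, 1))
        self.classifier = nn.Linear(feat_dim, 1)

    def forward(self, slice_feats):                           # (num_slices, feat_dim)
        a = torch.softmax(self.attention(slice_feats), dim=0)  # per-slice attention weights
        scan_embedding = (a * slice_feats).sum(dim=0)          # weighted scan-level embedding
        return torch.sigmoid(self.classifier(scan_embedding)), a.squeeze(-1)

# The returned weights indicate which slices drove the decision, which is the
# kind of information the paper's attention weight curves summarize.
probability, slice_weights = AttentionMIL()(torch.randn(40, 512))
```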


Subject(s)
Deep Learning; Emphysema; Pulmonary Emphysema; Humans; Pulmonary Emphysema/diagnostic imaging; Lung/diagnostic imaging; Tomography, X-Ray Computed/methods; Emphysema/diagnostic imaging
7.
Neurocrit Care; 36(3): 974-982, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34873672

ABSTRACT

BACKGROUND: Establishing whether a patient who survived a cardiac arrest has suffered hypoxic-ischemic brain injury (HIBI) shortly after return of spontaneous circulation (ROSC) can be of paramount importance for informing families and identifying patients who may benefit the most from neuroprotective therapies. We hypothesized that applying deep transfer learning to normal-appearing findings on head computed tomography (HCT) scans performed after ROSC would allow us to identify early evidence of HIBI. METHODS: We analyzed 54 adult comatose survivors of cardiac arrest for whom both an initial HCT scan, obtained early after ROSC, and a follow-up HCT scan were available. The initial HCT scan of each included patient was read as normal by a board-certified neuroradiologist. Deep transfer learning was used to evaluate the initial HCT scan and predict progression of HIBI on the follow-up HCT scan. An independent set of 16 additional patients was used for external validation of the model. RESULTS: The median age (interquartile range) of our cohort was 61 (16) years, and 25 (46%) patients were female. Although the findings of all initial HCT scans appeared normal, follow-up HCT scans showed signs of HIBI in 29 (54%) patients (computed tomography progression). Evaluating the first HCT scan with deep transfer learning accurately predicted progression to HIBI. The deep learning score was the most significant predictor of progression (area under the receiver operating characteristic curve = 0.96 [95% confidence interval 0.91-1.00]), with a deep learning score of 0.494 yielding a sensitivity of 1.00, specificity of 0.88, accuracy of 0.94, and positive predictive value of 0.91. An additional assessment of the independent test set confirmed high performance (area under the receiver operating characteristic curve = 0.90 [95% confidence interval 0.74-1.00]). CONCLUSIONS: Deep transfer learning applied to normal-appearing findings on HCT scans obtained early after ROSC in comatose survivors of cardiac arrest accurately identifies patients who progress to show radiographic evidence of HIBI on follow-up HCT scans.
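The reported operating-point metrics follow from a standard threshold analysis; below is a generic scikit-learn sketch with toy labels and scores standing in for the study's deep learning scores and CT-progression outcomes.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Toy stand-ins for the study's deep learning scores and progression labels
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
scores = np.array([0.1, 0.4, 0.7, 0.9, 0.6, 0.2, 0.8, 0.3])

auc = roc_auc_score(y_true, scores)          # area under the ROC curve

y_pred = (scores >= 0.494).astype(int)       # the paper's reported cutoff
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                         # positive predictive value
accuracy = (tp + tn) / len(y_true)
```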


Subject(s)
Brain Injuries; Hypoxia-Ischemia, Brain; Out-of-Hospital Cardiac Arrest; Adult; Coma/diagnostic imaging; Coma/etiology; Female; Humans; Hypoxia-Ischemia, Brain/diagnostic imaging; Hypoxia-Ischemia, Brain/etiology; Machine Learning; Male; Middle Aged; Out-of-Hospital Cardiac Arrest/therapy; Retrospective Studies
8.
Med Phys; 49(1): 1-14, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34796530

ABSTRACT

The development of medical imaging artificial intelligence (AI) systems for evaluating COVID-19 patients has demonstrated potential for improving clinical decision making and assessing patient outcomes during the recent COVID-19 pandemic. These systems have been applied to many medical imaging tasks, including disease diagnosis and patient prognosis, and have augmented other clinical measurements to better inform treatment decisions. Because these systems are used in life-or-death decisions, clinical implementation relies on user trust in the AI output. This has led many developers to utilize explainability techniques in an attempt to help users understand when an AI algorithm is likely to succeed and which cases may be problematic for automatic assessment, thus increasing the potential for rapid clinical translation. However, the application of AI to COVID-19 has recently been marred by controversy. This review discusses several aspects of explainable and interpretable AI as they pertain to the evaluation of COVID-19 disease and how such techniques can restore trust in AI applications to this disease. These include the identification of common tasks relevant to explainable medical imaging AI, an overview of several modern approaches for producing explainable output as appropriate for a given imaging scenario, a discussion of how to evaluate explainable AI, and recommendations for best practices in explainable/interpretable AI implementation. This review will allow developers of AI systems for COVID-19 to quickly understand the basics of several explainable AI techniques and will assist in the selection of an approach that is both appropriate and effective for a given scenario.


Subject(s)
Artificial Intelligence; COVID-19; Diagnostic Imaging; Humans; Pandemics; SARS-CoV-2
9.
J Med Imaging (Bellingham); 8(Suppl 1): 010902-10902, 2021 Jan.
Article in English | MEDLINE | ID: mdl-34646912

ABSTRACT

The coronavirus disease 2019 (COVID-19) pandemic has wreaked havoc across the world. It also created a need for the urgent development of efficacious predictive diagnostics, specifically artificial intelligence (AI) methods applied to medical imaging. This has brought together experts from multiple disciplines, including clinicians, medical physicists, imaging scientists, computer scientists, and informatics experts, to bring the best of these fields to bear on the challenges of the COVID-19 pandemic. However, such a convergence over a very brief period of time has had unintended consequences and created its own challenges. As part of the Medical Imaging Data and Resource Center (MIDRC) initiative, we discuss the lessons learned from career transitions across the three involved disciplines (radiology, medical imaging physics, and computer science) and draw recommendations from these experiences by analyzing the challenges associated with each of the three transition types: (1) AI of non-imaging data to AI of medical imaging data, (2) medical imaging clinician to AI of medical imaging, and (3) AI of medical imaging to AI of COVID-19 imaging. The diffusion of knowledge among these disciplines can be accomplished more effectively by recognizing the intricacies associated with each transition. The lessons learned in transitioning to AI in the medical imaging of COVID-19 can inform and enhance future AI applications, making the whole of the transitions more than the sum of each discipline, whether for confronting an emergency like the COVID-19 pandemic or for solving emerging problems in biomedicine.

10.
J Med Imaging (Bellingham); 8(Suppl 1): 014501, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33415179

ABSTRACT

Purpose: Given the recent COVID-19 pandemic and its stress on global medical resources, we present the development of a machine-intelligence method for thoracic computed tomography (CT) to inform the management of patients on steroid treatment. Approach: Transfer learning has demonstrated strong performance when applied to medical imaging, particularly when only limited data are available. A cascaded transfer learning approach extracted quantitative features from thoracic CT sections using a fine-tuned VGG19 network. The extracted slice features were axially pooled to provide a CT-scan-level representation of thoracic characteristics, and a support vector machine was trained to distinguish between patients who required steroid administration and those who did not, with performance evaluated through receiver operating characteristic (ROC) curve analysis. Least-squares fitting was used to assess temporal trends in the transfer learning output, providing a preliminary method for monitoring disease progression. Results: In the task of identifying patients who should receive steroid treatment, this approach yielded an area under the ROC curve of 0.85 ± 0.10 and demonstrated significant separation between patients who received steroids and those who did not. Furthermore, temporal trend analysis of the prediction score matched the expected progression during hospitalization for both groups, with separation at early time points followed by convergence near the end of hospitalization. Conclusions: The proposed cascaded deep learning method has strong potential for informing clinical decision-making and monitoring patient treatment.
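A minimal sketch of the cascaded pipeline described above, assuming an ImageNet-pretrained VGG19 (the paper fine-tunes it first), mean pooling as the axial pooling step, and an RBF-kernel SVM; all of these specifics, along with the toy training data, are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

# Per-slice feature extractor; global average pooling yields a 512-dim vector per slice
vgg = tf.keras.applications.VGG19(weights="imagenet", include_top=False, pooling="avg")

def scan_features(ct_slices):               # (num_slices, 224, 224, 3), preprocessed
    per_slice = vgg.predict(ct_slices)      # (num_slices, 512)
    return per_slice.mean(axis=0)           # axial pooling to one scan-level vector

# Scan-level SVM on the pooled features; toy stand-ins for the training scans/labels
train_scans = [np.random.rand(5, 224, 224, 3).astype("float32") for _ in range(8)]
train_labels = [0, 1, 0, 1, 0, 1, 0, 1]
svm = SVC(kernel="rbf", probability=True)
svm.fit(np.stack([scan_features(s) for s in train_scans]), train_labels)
```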
