1.
PLoS One ; 19(6): e0296985, 2024.
Article in English | MEDLINE | ID: mdl-38889117

ABSTRACT

Deep neural networks have been widely adopted in numerous domains due to their high performance and accessibility to developers and application-specific end-users. Fundamental to image-based applications is the development of Convolutional Neural Networks (CNNs), which possess the ability to automatically extract features from data. However, comprehending these complex models and their learned representations, which typically comprise millions of parameters and numerous layers, remains a challenge for both developers and end-users. This challenge arises due to the absence of interpretable and transparent tools to make sense of black-box models. There exists a growing body of Explainable Artificial Intelligence (XAI) literature, including a collection of methods denoted Class Activation Maps (CAMs), that seek to demystify what representations the model learns from the data, how it informs a given prediction, and why it, at times, performs poorly in certain tasks. We propose a novel XAI visualization method denoted CAManim that seeks to simultaneously broaden and focus end-user understanding of CNN predictions by animating the CAM-based network activation maps through all layers, effectively depicting from end-to-end how a model progressively arrives at the final layer activation. Herein, we demonstrate that CAManim works with any CAM-based method and various CNN architectures. Beyond qualitative model assessments, we additionally propose a novel quantitative assessment that expands upon the Remove and Debias (ROAD) metric, pairing the qualitative end-to-end network visual explanations assessment with our novel quantitative "yellow brick ROAD" assessment (ybROAD). This builds upon prior research to address the increasing demand for interpretable, robust, and transparent model assessment methodology, ultimately improving an end-user's trust in a given model's predictions. Examples and source code can be found at: https://omni-ml.github.io/pytorch-grad-cam-anim/.
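To make concrete what CAManim animates layer by layer, the core per-layer CAM arithmetic can be sketched in a few lines of numpy. This is a minimal sketch of the standard Grad-CAM computation at a single layer, not the CAManim implementation itself (see the linked source code for that); the function name and array shapes are assumptions for illustration.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM heat map for one convolutional layer.

    activations: (C, H, W) feature maps from the layer
    gradients:   (C, H, W) gradients of the target class score w.r.t. those maps
    """
    # Channel weights: global-average-pool the gradients (Grad-CAM's alpha_k)
    weights = gradients.mean(axis=(1, 2))                            # shape (C,)
    # Weighted sum of activation maps across channels, then ReLU
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize to [0, 1] for display; guard against an all-zero map
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

Repeating this computation at every layer of the network, rather than only the last one, yields the sequence of maps that CAManim stitches into an animation.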


Subject(s)
Neural Networks, Computer , Artificial Intelligence , Humans , Algorithms , Deep Learning
2.
Sci Rep ; 14(1): 9013, 2024 04 19.
Article in English | MEDLINE | ID: mdl-38641713

ABSTRACT

Deep learning algorithms have demonstrated remarkable potential in clinical diagnostics, particularly in the field of medical imaging. In this study, we investigated the application of deep learning models in the early detection of fetal kidney anomalies. To provide an enhanced interpretation of those models' predictions, we proposed an adapted two-class representation and developed a multi-class model interpretation approach for problems with more than two labels and variable hierarchical grouping of labels. Additionally, we employed the explainable AI (XAI) visualization tools Grad-CAM and HiResCAM to gain insights into model predictions and identify reasons for misclassifications. The study dataset consisted of 969 ultrasound images from unique patients: 646 control images and 323 cases of kidney anomalies, including 259 cases of unilateral urinary tract dilation and 64 cases of unilateral multicystic dysplastic kidney. The best performing model achieved a cross-validated area under the ROC curve of 91.28% ± 0.52%, with an overall accuracy of 84.03% ± 0.76%, sensitivity of 77.39% ± 1.99%, and specificity of 87.35% ± 1.28%. Our findings emphasize the potential of deep learning models in predicting kidney anomalies from limited prenatal ultrasound imagery. The proposed adaptations in model representation and interpretation represent a novel solution to multi-class prediction problems.
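The adapted two-class representation with hierarchical grouping of labels can be illustrated with a small sketch. The label names and grouping structure below are illustrative assumptions modeled on the study cohorts, not the authors' code:

```python
# Fine-grained labels mirroring the study cohorts (illustrative names)
FINE_LABELS = ["control", "urinary_tract_dilation", "multicystic_dysplastic_kidney"]

def to_two_class(label, grouping):
    """Collapse a fine-grained label into a binary one under a given grouping."""
    return grouping[label]

# Level 1 of the hierarchy: control vs. any kidney anomaly
anomaly_vs_control = {
    "control": "negative",
    "urinary_tract_dilation": "positive",
    "multicystic_dysplastic_kidney": "positive",
}

# Level 2: among anomalies, urinary tract dilation vs. multicystic dysplastic kidney
dilation_vs_mcdk = {
    "urinary_tract_dilation": "positive",
    "multicystic_dysplastic_kidney": "negative",
}
```

Each grouping dictionary defines one binary problem, so standard two-class tools (including Grad-CAM-style explanations) can be applied at every level of the hierarchy.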


Subject(s)
Deep Learning , Kidney Diseases , Urinary Tract , Pregnancy , Female , Humans , Ultrasonography, Prenatal/methods , Prenatal Diagnosis/methods , Kidney Diseases/diagnostic imaging , Urinary Tract/abnormalities
3.
J Obstet Gynaecol Can ; 46(6): 102435, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38458270

ABSTRACT

OBJECTIVES: To compare surgeon responses regarding their surgical plan before and after receiving a patient-specific three-dimensional (3D)-printed model of a patient's multifibroid uterus created from their magnetic resonance imaging. METHODS: 3D-printed models were derived from standard-of-care pelvic magnetic resonance images of patients scheduled for surgical intervention for multifibroid uterus. Relevant anatomical structures were printed using a combination of transparent and opaque resin types. 3D models were used for 7 surgical cases (5 myomectomies, 2 hysterectomies). A staff surgeon and 1 or 2 surgical fellow(s) were present for each case. Surgeons completed a questionnaire before and after receiving the model documenting surgical approach, perceived difficulty, and confidence in surgical plan. A postoperative questionnaire was used to assess surgeon experience using 3D models. RESULTS: Two staff surgeons and 3 clinical fellows participated in this study. A total of 15 surgeon responses were collected across the 7 cases. After viewing the models, an increase in perceived surgical difficulty and confidence in surgical plan was reported in 12/15 and 7/15 responses, respectively. Anticipated surgical time had a mean ± SD absolute change of 44.0 ± 47.9 minutes and anticipated blood loss had an absolute change of 100 ± 103.5 cc. Two of 15 responses reported a change in pre-surgical approach. Intra-operative model reference was reported to change the dissection route in 8/15 surgeon responses. On average, surgeons rated their experience using 3D models 8.6/10 for pre-surgical planning and 8.1/10 for intra-operative reference. CONCLUSIONS: Patient-specific 3D anatomical models may be a useful tool to increase a surgeon's understanding of complex gynaecologic anatomy and to improve their surgical plan. Future work is needed to evaluate the impact of 3D models on surgical outcomes in gynaecology.


Subject(s)
Magnetic Resonance Imaging , Models, Anatomic , Printing, Three-Dimensional , Uterus , Humans , Female , Uterus/surgery , Uterus/diagnostic imaging , Uterus/anatomy & histology , Uterine Neoplasms/surgery , Uterine Neoplasms/diagnostic imaging , Uterine Myomectomy/methods , Hysterectomy/methods , Leiomyoma/surgery , Leiomyoma/diagnostic imaging , Leiomyoma/pathology , Adult , Surgeons
4.
PLoS One ; 17(6): e0269323, 2022.
Article in English | MEDLINE | ID: mdl-35731736

ABSTRACT

OBJECTIVE: To develop and internally validate a deep-learning algorithm from fetal ultrasound images for the diagnosis of cystic hygromas in the first trimester. METHODS: All first trimester ultrasound scans with a diagnosis of a cystic hygroma between 11 and 14 weeks gestation at our tertiary care centre in Ontario, Canada were studied. Ultrasound scans with normal nuchal translucency (NT) were used as controls. The dataset was partitioned with 75% of images used for model training and 25% used for model validation. Images were analyzed using a DenseNet model and the accuracy of the trained model to correctly identify cases of cystic hygroma was assessed by calculating sensitivity, specificity, and the area under the receiver-operating characteristic (ROC) curve. Gradient class activation heat maps (Grad-CAM) were generated to assess model interpretability. RESULTS: The dataset included 289 sagittal fetal ultrasound images: 129 cystic hygroma cases and 160 normal NT controls. Overall model accuracy was 93% (95% CI: 88-98%), sensitivity 92% (95% CI: 79-100%), specificity 94% (95% CI: 91-96%), and the area under the ROC curve 0.94 (95% CI: 0.89-1.0). Grad-CAM heat maps demonstrated that the model predictions were driven primarily by the fetal posterior cervical area. CONCLUSIONS: Our findings demonstrate that deep-learning algorithms can achieve high accuracy in diagnostic interpretation of cystic hygroma in the first trimester, validated against expert clinical assessment.
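The reported sensitivity and specificity follow directly from confusion-matrix counts. A minimal, library-free sketch (the function name is hypothetical; labels are coded 1 = cystic hygroma case, 0 = normal NT control):

```python
def diagnostic_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy from binary labels (1 = case)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives
    return {
        "sensitivity": tp / (tp + fn),   # cases correctly flagged
        "specificity": tn / (tn + fp),   # controls correctly cleared
        "accuracy": (tp + tn) / len(y_true),
    }
```

The area under the ROC curve additionally requires the model's continuous scores, not just hard predictions, since it sweeps the decision threshold.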


Subject(s)
Deep Learning , Lymphangioma, Cystic , Chromosome Aberrations , Female , Humans , Lymphangioma, Cystic/diagnostic imaging , Ontario , Pregnancy , Pregnancy Trimester, First , Ultrasonography, Prenatal
5.
3D Print Med ; 7(1): 17, 2021 Jul 05.
Article in English | MEDLINE | ID: mdl-34224043

ABSTRACT

BACKGROUND: Patient-specific three-dimensional (3D) models can be derived from two-dimensional medical images, such as magnetic resonance (MR) images. 3D models have been shown to improve anatomical comprehension by providing more accurate assessments of anatomical volumes and better perspectives of structural orientations relative to adjacent structures. The clinical benefit of using patient-specific 3D printed models has been highlighted in the fields of orthopaedics, cardiothoracics, and neurosurgery for the purpose of pre-surgical planning. However, reports on the clinical use of 3D printed models in the field of gynecology are limited. MAIN TEXT: This article aims to provide a brief overview of the principles of 3D printing and the steps required to derive patient-specific, anatomically accurate 3D printed models of gynecologic anatomy from MR images. Examples of 3D printed models for uterine fibroids and endometriosis are presented as well as a discussion on the barriers to clinical uptake and the future directions for 3D printing in the field of gynecological surgery. CONCLUSION: Successful gynecologic surgery requires a thorough understanding of the patient's anatomy and burden of disease. Future use of patient-specific 3D printed models is encouraged so the clinical benefit can be better understood and evidence to support their use in standard of care can be provided.
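One quantitative benefit mentioned above, more accurate assessment of anatomical volumes, reduces to a voxel count once a structure has been segmented from the MR series. A minimal numpy sketch (the function name and spacing convention are assumptions for illustration, not part of any published pipeline):

```python
import numpy as np

def segmented_volume_ml(mask, voxel_spacing_mm):
    """Estimate anatomical volume in millilitres from a binary segmentation.

    mask:             3-D boolean array, True inside the segmented structure
    voxel_spacing_mm: (dz, dy, dx) voxel edge lengths in millimetres
    """
    voxel_mm3 = float(np.prod(voxel_spacing_mm))   # volume of one voxel
    return mask.sum() * voxel_mm3 / 1000.0         # 1 mL = 1000 mm^3
```

The same segmentation mask is the starting point for printing: it is converted to a surface mesh (e.g., an STL file) before being sent to the printer.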
