Results 1 - 20 of 40
1.
Sci Rep ; 14(1): 9380, 2024 04 23.
Article in English | MEDLINE | ID: mdl-38654066

ABSTRACT

Vision transformers (ViTs) have revolutionized computer vision by employing self-attention in place of convolutions, and their success is attributed to the ability to capture global dependencies and remove the spatial biases of locality. In medical imaging, where input data may differ in size and resolution, existing architectures require resampling or resizing during pre-processing, leading to potential loss of spatial resolution and information degradation. This study proposes a co-ordinate-based embedding that encodes the geometry of medical images, capturing physical co-ordinate and resolution information without the need for resampling or resizing. The effectiveness of the proposed embedding is demonstrated through experiments with UNETR and SwinUNETR models for infarct segmentation on an MRI dataset with AxTrace and AxADC contrasts. The dataset consists of 1142 training, 133 validation and 143 test subjects. With the addition of the co-ordinate-based positional embedding, both models achieved substantial improvements in mean Dice score, by 6.5% and 7.6% respectively. The proposed embedding showed a statistically significant advantage (p < 0.0001) over alternative approaches. In conclusion, the proposed co-ordinate-based pixel-wise positional embedding method offers a promising solution for Transformer-based models in medical image analysis. It effectively leverages physical co-ordinate information to enhance performance without compromising spatial resolution, and provides a foundation for future advancements in positional embedding techniques for medical applications.
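
As a rough illustration of the idea, the sketch below builds a pixel-wise positional embedding from physical voxel coordinates (origin + index × spacing), so volumes with different resolutions share one coordinate frame without resampling. The module name, the learned linear projection, and all shapes are assumptions for illustration; the abstract does not specify the embedding design.

```python
# Hypothetical sketch of a coordinate-based positional embedding: each voxel's
# physical position (origin + index * spacing, in mm) is projected to the token
# dimension, so no resampling to a common grid is needed. Illustrative only.
import torch
import torch.nn as nn

class CoordinateEmbedding(nn.Module):
    def __init__(self, embed_dim: int, ndim: int = 3):
        super().__init__()
        self.proj = nn.Linear(ndim, embed_dim)  # learned map from (z, y, x) to token dim

    def forward(self, shape, spacing, origin):
        # shape: spatial size (D, H, W); spacing/origin: per-axis values in mm
        axes = [origin[i] + spacing[i] * torch.arange(shape[i]) for i in range(len(shape))]
        grid = torch.stack(torch.meshgrid(*axes, indexing="ij"), dim=-1)  # (D, H, W, 3)
        coords = grid.reshape(-1, grid.shape[-1])  # one physical coordinate per voxel
        return self.proj(coords)                   # (D*H*W, embed_dim), added to tokens

emb = CoordinateEmbedding(embed_dim=96)
pe = emb((8, 16, 16), spacing=(5.0, 0.9, 0.9), origin=(0.0, 0.0, 0.0))
print(pe.shape)  # torch.Size([2048, 96])
```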


Subject(s)
Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Image Processing, Computer-Assisted/methods; Algorithms; Neural Networks, Computer
2.
Acad Radiol ; 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38908922

ABSTRACT

RATIONALE AND OBJECTIVES: To assess a deep learning application (DLA) for acute ischemic stroke (AIS) detection on brain magnetic resonance imaging (MRI) in the emergency room (ER) and the effect of T2-weighted imaging (T2WI) on its performance. MATERIALS AND METHODS: We retrospectively analyzed brain MRIs taken through the ER from March to October 2021 that included diffusion-weighted imaging (DWI) and fluid-attenuated inversion recovery (FLAIR) sequences. MRIs were processed by the DLA, and sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUROC) were evaluated, with three neuroradiologists establishing the gold standard for detection performance. In addition, we examined the impact of axial T2WI, when available, on the accuracy and processing time of the DLA. RESULTS: The study included 947 individuals (mean age ± standard deviation, 64 years ± 16; 461 men, 486 women), with 239 (25%) positive for AIS. The overall performance of the DLA was as follows: sensitivity, 90%; specificity, 89%; accuracy, 89%; and AUROC, 0.95. The average processing time was 24 s. In the subgroup with T2WI, T2WI did not significantly impact MRI assessments but did result in longer processing times (35 s without T2WI compared to 48 s with T2WI, p < 0.001). CONCLUSION: The DLA successfully identified AIS in the ER setting with an average processing time of 24 s. The absence of a performance gain with axial T2WI suggests that the DLA can diagnose AIS with just axial DWI and FLAIR sequences, potentially shortening the exam duration in the ER.
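
For context on the reported metrics, a minimal sketch of how such detection performance is typically computed from binary ground-truth labels and continuous model scores follows; the values and the 0.5 operating point are placeholders, not data from the study.

```python
# Minimal sketch of the reported metrics, assuming binary ground-truth labels
# (1 = AIS) and continuous model scores; not the authors' code.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.92, 0.10, 0.77, 0.63, 0.40, 0.05, 0.88, 0.55])
y_pred = (y_score >= 0.5).astype(int)  # hypothetical operating point

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / len(y_true)
auroc = roc_auc_score(y_true, y_score)
print(f"sens={sensitivity:.2f} spec={specificity:.2f} acc={accuracy:.2f} AUROC={auroc:.2f}")
```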

3.
Article in English | MEDLINE | ID: mdl-39059508

ABSTRACT

PURPOSE: The purpose of this study was to investigate an extended self-adapting nnU-Net framework for detecting and segmenting brain metastases (BM) on magnetic resonance imaging (MRI). METHODS AND MATERIALS: Six different nnU-Net systems with adaptive data sampling, adaptive Dice loss, or different patch/batch sizes were trained and tested for detecting and segmenting intraparenchymal BM with a size ≥2 mm on 3-dimensional (3D) post-Gd T1-weighted MRI volumes using 2092 patients from 7 institutions (1712, 195, and 185 patients for training, validation, and testing, respectively). Gross tumor volumes of BM delineated by physicians for stereotactic radiosurgery were collected retrospectively and curated at each institute. Additional centralized data curation by 2 radiologists created gross tumor volumes for previously uncontoured BM, improving the accuracy of the ground truth. The training data set was augmented with synthetic BMs in 1025 MRI volumes using a 3D generative pipeline. BM detection was evaluated by lesion-level sensitivity and false-positive (FP) rate. BM segmentation was assessed by lesion-level Dice similarity coefficient, 95th-percentile Hausdorff distance, and average Hausdorff distance (HD). Performance was assessed across different BM sizes. Additional testing was performed using a second data set of 206 patients. RESULTS: Of the 6 nnU-Net systems, the nnU-Net with adaptive Dice loss achieved the best detection and segmentation performance on the first testing data set. At an FP rate of 0.65 ± 1.17, overall sensitivity was 0.904 for all sizes of BM, 0.966 for BM ≥0.1 cm³, and 0.824 for BM <0.1 cm³. Mean values of the Dice similarity coefficient, 95th-percentile Hausdorff distance, and average HD of all detected BMs were 0.758, 1.45 mm, and 0.23 mm, respectively. On the second testing data set, the system achieved a sensitivity of 0.907 at an FP rate of 0.57 ± 0.85 for all BM sizes, and an average HD of 0.33 mm for all detected BM. CONCLUSIONS: Our proposed extension of the self-configuring nnU-Net framework substantially improved small BM detection sensitivity while maintaining a controlled FP rate. The clinical utility of the extended nnU-Net model for assisting early BM detection and stereotactic radiosurgery planning will be investigated.
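
The abstract does not detail the adaptive Dice loss, but the base soft Dice loss it extends is standard; a minimal PyTorch sketch follows, with the adaptive reweighting left out as an unknown.

```python
# Sketch of a standard soft Dice loss in PyTorch. The paper's "adaptive"
# variant modifies this during training (details not given in the abstract),
# so only the base formulation is shown.
import torch

def soft_dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # pred: sigmoid probabilities, target: binary mask, both (B, 1, D, H, W)
    dims = tuple(range(1, pred.ndim))
    intersection = (pred * target).sum(dim=dims)
    denom = pred.sum(dim=dims) + target.sum(dim=dims)
    dice = (2 * intersection + eps) / (denom + eps)
    return 1 - dice.mean()

pred = torch.rand(2, 1, 8, 32, 32)
target = (torch.rand(2, 1, 8, 32, 32) > 0.5).float()
print(soft_dice_loss(pred, target))
```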

4.
Radiology ; 263(3): 856-64, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22474671

ABSTRACT

PURPOSE: To develop and evaluate a technique for the registration of in vivo prostate magnetic resonance (MR) images to digital histopathologic images by using image-guided specimen slicing based on strand-shaped fiducial markers relating specimen imaging to histopathologic examination. MATERIALS AND METHODS: The study was approved by the institutional review board (the University of Western Ontario Health Sciences Research Ethics Board, London, Ontario, Canada), and written informed consent was obtained from all patients. This work proposed and evaluated a technique utilizing the developed fiducial markers and real-time three-dimensional visualization to support image guidance for ex vivo prostate specimen slicing parallel to the MR imaging planes prior to digitization, simplifying the registration process. Means, standard deviations, root-mean-square errors, and 95% confidence intervals are reported for all evaluated measurements. RESULTS: The slicing error was within the 2.2-mm thickness of the diagnostic-quality MR imaging sections, with a tissue-block thickness standard deviation of 0.2 mm. Rigid registration provided negligible postregistration overlap of the smallest clinically important tumors (0.2 cm³) at histologic examination and MR imaging, whereas the tested nonrigid registration method yielded a mean target registration error of 1.1 mm and provided useful coregistration of such tumors. CONCLUSION: This method for the registration of prostate digital histopathologic images to in vivo MR images acquired by using an endorectal receive coil was sufficiently accurate for coregistering the smallest clinically important lesions with 95% confidence.


Subject(s)
Magnetic Resonance Imaging/instrumentation; Prostate/pathology; Prostatic Neoplasms/pathology; Contrast Media; Fiducial Markers; Gadolinium DTPA; Humans; Image Interpretation, Computer-Assisted; Imaging, Three-Dimensional/instrumentation; Magnetic Resonance Imaging, Interventional; Male; Prostate/surgery; Prostatectomy; Prostatic Neoplasms/surgery
5.
J Magn Reson Imaging ; 36(6): 1402-12, 2012 Dec.
Article in English | MEDLINE | ID: mdl-22851455

ABSTRACT

PURPOSE: To present and evaluate a method for registration of whole-mount prostate digital histology images to ex vivo magnetic resonance (MR) images. MATERIALS AND METHODS: Nine radical prostatectomy specimens were marked with 10 strand-shaped fiducial markers per specimen, imaged with T1- and T2-weighted 3T MRI protocols, sliced at 4.4-mm intervals, and processed for whole-mount histology, and the resulting histological sections (3-5 per specimen, 34 in total) were digitized. The correspondence between fiducial markers on histology and MR images yielded an initial registration, which was refined by a local optimization technique, yielding the least-squares best-fit affine transformation between corresponding fiducial points on histology and MR images. Accuracy was quantified as the postregistration 3D distance between landmarks (3-7 per section, 184 in total) on histology and MR images, and compared to a previous state-of-the-art registration method. RESULTS: The proposed method and the previous method had mean (SD) target registration errors of 0.71 (0.38) mm and 1.21 (0.74) mm, respectively, and required 3 and 11 hours of processing time, respectively. CONCLUSION: The proposed method registers digital histology to prostate MR images with a 70% reduction in processing time and a mean accuracy sufficient to achieve 85% overlap between histology and ex vivo MR images for a 0.2 cm³ spherical tumor.
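
A minimal sketch of the least-squares best-fit affine step, assuming corresponding 3D fiducial points from both modalities are already extracted; the point sets and the NumPy formulation are illustrative, not the authors' implementation.

```python
# Sketch: fit the least-squares best-fit affine transform between corresponding
# fiducial points (histology -> MR), assuming N >= 4 non-coplanar 3D points.
import numpy as np

def fit_affine(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Return a 4x4 affine A minimizing ||A @ src_h - dst_h|| over point pairs."""
    n = src.shape[0]
    src_h = np.hstack([src, np.ones((n, 1))])             # homogeneous points (N, 4)
    coeffs, *_ = np.linalg.lstsq(src_h, dst, rcond=None)  # (4, 3) solution
    affine = np.eye(4)
    affine[:3, :] = coeffs.T                              # top 3 rows hold [R | t]
    return affine

src = np.random.rand(10, 3)
true = np.array([[1.1, 0.02, 0.0], [0.0, 0.95, 0.1], [0.05, 0.0, 1.0]])
dst = src @ true.T + np.array([2.0, -1.0, 0.5])
A = fit_affine(src, dst)
print(np.allclose(A[:3, :3], true) and np.allclose(A[:3, 3], [2.0, -1.0, 0.5]))  # True
```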


Subject(s)
Biopsy/instrumentation; Fiducial Markers; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/instrumentation; Pattern Recognition, Automated/methods; Prostate/pathology; Subtraction Technique; Adolescent; Adult; Aged; Aged, 80 and over; Algorithms; Biopsy/methods; Equipment Design; Equipment Failure Analysis; Humans; Image Enhancement/methods; Magnetic Resonance Imaging/methods; Male; Middle Aged; Reproducibility of Results; Sensitivity and Specificity; Young Adult
6.
Radiol Artif Intell ; 4(3): e210115, 2022 May.
Article in English | MEDLINE | ID: mdl-35652116

ABSTRACT

Purpose: To present a method that automatically detects, subtypes, and locates acute or subacute intracranial hemorrhage (ICH) on noncontrast CT (NCCT) head scans; generates detection confidence scores to identify high-confidence data subsets with higher accuracy; and improves radiology worklist prioritization. Such scores may enable clinicians to better use artificial intelligence (AI) tools. Materials and Methods: This retrospective study included 46 057 studies from seven "internal" centers for development (training, architecture selection, hyperparameter tuning, and operating-point calibration; n = 25 946) and evaluation (n = 2947) and three "external" centers for calibration (n = 400) and evaluation (n = 16 764). Internal centers contributed developmental data, whereas external centers did not. Deep neural networks predicted the presence of ICH and subtypes (intraparenchymal, intraventricular, subarachnoid, subdural, and/or epidural hemorrhage) and segmentations per case. Two ICH confidence scores are discussed: a calibrated classifier entropy score and a Dempster-Shafer score. Evaluation was completed by using receiver operating characteristic curve analysis and report turnaround time (RTAT) modeling on the evaluation set and on confidence score-defined subsets using bootstrapping. Results: The areas under the receiver operating characteristic curve for ICH were 0.97 (0.97, 0.98) and 0.95 (0.94, 0.95) on internal and external center data, respectively. On 80% of the data stratified by calibrated classifier and Dempster-Shafer scores, the system improved the Youden indexes, increasing them from 0.84 to 0.93 (calibrated classifier) and from 0.84 to 0.92 (Dempster-Shafer) for internal centers and increasing them from 0.78 to 0.88 (calibrated classifier) and from 0.78 to 0.89 (Dempster-Shafer) for external centers (P < .001). Models estimated shorter RTAT for AI-prioritized worklists with confidence measures than for AI-prioritized worklists without confidence measures, shortening RTAT by 27% (calibrated classifier) and 27% (Dempster-Shafer) for internal centers and shortening RTAT by 25% (calibrated classifier) and 27% (Dempster-Shafer) for external centers (P < .001). Conclusion: AI that provided statistical confidence measures for ICH detection on NCCT scans reliably detected and subtyped hemorrhages, identified high-confidence predictions, and improved worklist prioritization in simulation. Keywords: CT, Head/Neck, Hemorrhage, Convolutional Neural Network (CNN). Supplemental material is available for this article. © RSNA, 2022.
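
A toy sketch of the entropy-based confidence idea for a calibrated binary classifier: low prediction entropy means high confidence, and the most confident 80% of cases can be selected, mirroring the 80% stratification above. The calibration step itself and all numbers are omitted assumptions.

```python
# Sketch of an entropy-based confidence score for a calibrated binary
# classifier, used to keep the most confident 80% of cases. Illustrative only;
# the paper's calibration procedure and Dempster-Shafer score are not shown.
import numpy as np

def binary_entropy(p: np.ndarray) -> np.ndarray:
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

probs = np.array([0.98, 0.52, 0.91, 0.45, 0.03, 0.70, 0.99, 0.60])
confidence = 1 - binary_entropy(probs)             # low entropy -> high confidence
keep = confidence >= np.quantile(confidence, 0.2)  # retain the 80% most confident
print(keep)
```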

7.
IEEE Trans Med Imaging ; 40(1): 335-345, 2021 01.
Article in English | MEDLINE | ID: mdl-32966215

ABSTRACT

Detecting malignant pulmonary nodules at an early stage allows medical interventions that may increase the survival rate of lung cancer patients. Using computer vision techniques to detect nodules can improve the sensitivity and the speed of interpreting chest CT for lung cancer screening. Many studies have used CNNs to detect nodule candidates. Although such approaches have been shown to outperform conventional image-processing-based methods in detection accuracy, CNNs are also known to generalize poorly to under-represented samples in the training set and to be vulnerable to imperceptible noise perturbations. Such limitations cannot be easily addressed by scaling up the dataset or the models. In this work, we propose to add adversarial synthetic nodules and adversarial attack samples to the training data to improve the generalization and the robustness of lung nodule detection systems. To generate hard examples of nodules from a differentiable nodule synthesizer, we use projected gradient descent (PGD) to search the latent code within a bounded neighbourhood for nodules that decrease the detector response. To make the network more robust to unanticipated noise perturbations, we use PGD to search for noise patterns that can trigger the network into over-confident mistakes. Evaluating on two different benchmark datasets containing consensus annotations from three radiologists, we show that the proposed techniques can improve detection performance on real CT data. To understand the limitations of both the conventional networks and the proposed augmented networks, we also stress-test the false-positive-reduction networks by feeding them different types of artificially produced patches. We show that the augmented networks are more robust both to under-represented nodules and to noise perturbations.
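
A generic PGD sketch of the latent-space search described above: perturb a latent code within a bounded neighbourhood to suppress the detector's response. The toy synthesizer and detector are stand-ins for the paper's differentiable nodule synthesizer and detection network; step sizes and bounds are assumptions.

```python
# Generic PGD sketch: search an eps-ball around latent code z0 for a
# perturbation that lowers the detector's response on the synthetic nodule.
import torch

def pgd_latent(z0, synthesizer, detector, eps=0.1, step=0.02, iters=10):
    z = z0.clone().detach()
    for _ in range(iters):
        z.requires_grad_(True)
        response = detector(synthesizer(z)).sum()    # detector confidence
        response.backward()
        with torch.no_grad():
            z = z - step * z.grad.sign()             # descend to suppress detection
            z = z0 + torch.clamp(z - z0, -eps, eps)  # project back into the eps-ball
    return z.detach()

# Toy stand-ins so the sketch runs end to end.
synthesizer = torch.nn.Linear(8, 16)
detector = torch.nn.Sequential(torch.nn.Linear(16, 1), torch.nn.Sigmoid())
z_hard = pgd_latent(torch.randn(1, 8), synthesizer, detector)
print(z_hard.shape)
```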


Subject(s)
Lung Neoplasms; Solitary Pulmonary Nodule; Early Detection of Cancer; Humans; Image Processing, Computer-Assisted; Lung; Lung Neoplasms/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted; Solitary Pulmonary Nodule/diagnostic imaging; Tomography, X-Ray Computed
8.
J Med Imaging (Bellingham) ; 8(3): 037001, 2021 May.
Article in English | MEDLINE | ID: mdl-34041305

ABSTRACT

Purpose: We investigate the impact of various deep-learning-based methods for detecting and segmenting metastases of different lesion volumes on 3D brain MR images. Approach: A 2.5D U-Net and a 3D U-Net were selected. We also evaluated weak-learner fusion of the prediction features generated by the 2.5D and 3D networks. A 3D fully convolutional one-stage (FCOS) detector was selected as a representative of bounding-box regression-based detection methods. A total of 422 3D post-contrast T1-weighted scans from patients with brain metastases were used. Performance was analyzed based on lesion volume, total metastatic volume per patient, and number of lesions per patient. Results: For detection, the 2.5D and 3D U-Net methods had recall > 0.83 and precision > 0.44 for lesion volumes > 0.3 cm³, but performance deteriorated as metastasis size decreased below 0.3 cm³, to 0.58-0.74 in recall and 0.16-0.25 in precision. Comparing the detection capability of the two U-Nets, the 2.5D network achieved higher precision while the 3D network achieved higher recall for all lesion sizes. The weak-learner fusion achieved a balanced performance between the 2.5D and 3D U-Nets; in particular, it increased precision to 0.83 for lesion volumes of 0.1 to 0.3 cm³ but decreased recall to 0.59. The 3D FCOS detector did not outperform the U-Net methods in detecting either small or large metastases, presumably because of the limited data size. Conclusions: Our study reports the performance of four deep learning methods in relation to lesion size, total metastasis volume, and number of lesions per patient, providing insight into the further development of deep learning networks.

9.
Ann Biomed Eng ; 49(2): 573-584, 2021 Feb.
Article in English | MEDLINE | ID: mdl-32779056

ABSTRACT

Prostate cancer (PCa) is a common, serious form of cancer in men that remains prevalent despite ongoing developments in diagnostic oncology. Current detection methods lead to high rates of inaccurate diagnosis. We present a method to directly model and exploit temporal aspects of temporal enhanced ultrasound (TeUS) for tissue characterization, which improves malignancy prediction. We employ a probabilistic-temporal framework, namely hidden Markov models (HMMs), for modeling TeUS data obtained from PCa patients. We distinguish malignant from benign tissue by comparing the respective log-likelihood estimates generated by the HMMs. We analyze 1100 TeUS signals acquired from 12 patients. Our results show improved malignancy identification compared to previous results, demonstrating over 85% accuracy and an AUC of 0.95. Incorporating temporal information directly into the models leads to improved tissue differentiation in PCa. We expect our method to generalize and be applied to other types of cancer in which temporal ultrasound can be recorded.
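
A minimal sketch of the likelihood-comparison idea: fit one HMM per tissue class on time series, then label a new signal by whichever model assigns the larger log-likelihood. The data here is synthetic and hmmlearn is an assumed implementation, not the authors' code.

```python
# Sketch of HMM-based tissue classification by log-likelihood comparison,
# with synthetic stand-ins for TeUS signals.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(50, 100))     # 50 signals, 100 time points
malignant = rng.normal(0.5, 1.5, size=(50, 100))

def fit(signals):
    X = signals.reshape(-1, 1)                    # stack frames, 1 feature per frame
    lengths = [signals.shape[1]] * signals.shape[0]
    return GaussianHMM(n_components=3, n_iter=20, random_state=0).fit(X, lengths)

hmm_b, hmm_m = fit(benign), fit(malignant)

test = rng.normal(0.5, 1.5, size=100).reshape(-1, 1)
label = "malignant" if hmm_m.score(test) > hmm_b.score(test) else "benign"
print(label)
```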


Subject(s)
Models, Theoretical; Prostate/diagnostic imaging; Prostatic Neoplasms/diagnosis; Humans; Male; Markov Chains; Ultrasonography
10.
Med Image Anal ; 68: 101855, 2021 02.
Article in English | MEDLINE | ID: mdl-33260116

ABSTRACT

The interpretation of medical images is a challenging task, often complicated by the presence of artifacts, occlusions, limited contrast and more. Most notable is the case of chest radiography, where there is a high inter-rater variability in the detection and classification of abnormalities. This is largely due to inconclusive evidence in the data or subjective definitions of disease appearance. An additional example is the classification of anatomical views based on 2D Ultrasound images. Often, the anatomical context captured in a frame is not sufficient to recognize the underlying anatomy. Current machine learning solutions for these problems are typically limited to providing probabilistic predictions, relying on the capacity of underlying models to adapt to limited information and the high degree of label noise. In practice, however, this leads to overconfident systems with poor generalization on unseen data. To account for this, we propose a system that learns not only the probabilistic estimate for classification, but also an explicit uncertainty measure which captures the confidence of the system in the predicted output. We argue that this approach is essential to account for the inherent ambiguity characteristic of medical images from different radiologic exams including computed radiography, ultrasonography and magnetic resonance imaging. In our experiments we demonstrate that sample rejection based on the predicted uncertainty can significantly improve the ROC-AUC for various tasks, e.g., by 8% to 0.91 with an expected rejection rate of under 25% for the classification of different abnormalities in chest radiographs. In addition, we show that using uncertainty-driven bootstrapping to filter the training data, one can achieve a significant increase in robustness and accuracy. Finally, we present a multi-reader study showing that the predictive uncertainty is indicative of reader errors.
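
A toy sketch of the sample-rejection idea above: discard the least certain fraction of predictions and recompute ROC-AUC on the remainder. Labels, scores, the uncertainty proxy, and the 25% rejection rate are synthetic stand-ins, not the paper's model outputs.

```python
# Sketch of uncertainty-driven sample rejection: drop the ~25% least certain
# predictions and recompute ROC-AUC on the retained subset.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=1000)
scores = np.clip(y * 0.6 + rng.normal(0.2, 0.25, size=1000), 0, 1)
uncertainty = 0.5 - np.abs(scores - 0.5)              # most uncertain near 0.5

keep = uncertainty <= np.quantile(uncertainty, 0.75)  # reject the least certain 25%
print(f"all: {roc_auc_score(y, scores):.3f}  kept: {roc_auc_score(y[keep], scores[keep]):.3f}")
```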


Subject(s)
Artifacts; Magnetic Resonance Imaging; Humans; Machine Learning; Uncertainty
11.
Sci Rep ; 11(1): 6876, 2021 03 25.
Article in English | MEDLINE | ID: mdl-33767226

ABSTRACT

With the rapid growth and increasing use of brain MRI, there is interest in automated image classification to aid human interpretation and improve workflow. We aimed to train a deep convolutional neural network and assess its performance in identifying abnormal brain MRIs and critical intracranial findings including acute infarction, acute hemorrhage and mass effect. A total of 13,215 clinical brain MRI studies were categorized into training (74%), validation (9%), internal testing (8%) and external testing (8%) datasets. Up to eight contrasts were included from each brain MRI, and each image volume was reformatted to a common resolution to accommodate differences between scanners. After reviewing the radiology reports, three neuroradiologists assigned each study as abnormal vs normal and identified three critical findings: acute infarction, acute hemorrhage, and mass effect. A deep convolutional neural network was constructed from a combination of localization feature extraction (LFE) modules and global classifiers to identify the presence of 4 variables in brain MRIs: abnormal, acute infarction, acute hemorrhage and mass effect. Training, validation and testing sets were randomly defined on a patient basis. Training was performed on 9845 studies using balanced sampling to address class imbalance. Receiver operating characteristic (ROC) analysis was performed. The ROC analysis of our models for 1050 studies within our internal test data showed AUC/sensitivity/specificity of 0.91/83%/86% for normal versus abnormal brain MRI, 0.95/92%/88% for acute infarction, 0.90/89%/81% for acute hemorrhage, and 0.93/93%/85% for mass effect. For 1072 studies within our external test data, it showed AUC/sensitivity/specificity of 0.88/80%/80% for normal versus abnormal brain MRI, 0.97/90%/97% for acute infarction, 0.83/72%/88% for acute hemorrhage, and 0.87/79%/81% for mass effect. Our proposed deep convolutional network can accurately identify abnormal and critical intracranial findings on individual brain MRIs, while addressing the fact that some MR contrasts might not be available in individual studies.
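
Balanced sampling can be realized in several ways; one common PyTorch approach, sketched below with synthetic labels, draws each study with probability inversely proportional to its class frequency. This is an illustration of the general technique, not the authors' code.

```python
# Sketch of balanced sampling for class imbalance with WeightedRandomSampler:
# minority-class samples are drawn more often so batches are roughly balanced.
import torch
from torch.utils.data import WeightedRandomSampler, DataLoader, TensorDataset

labels = torch.cat([torch.zeros(900), torch.ones(100)]).long()  # 9:1 imbalance
class_counts = torch.bincount(labels).float()
weights = 1.0 / class_counts[labels]                 # per-sample weights
sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)

ds = TensorDataset(torch.randn(1000, 8), labels)
loader = DataLoader(ds, batch_size=100, sampler=sampler)
xb, yb = next(iter(loader))
print(yb.float().mean())  # ≈ 0.5 rather than 0.1
```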


Subject(s)
Brain/anatomy & histology; Deep Learning; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Multiparametric Magnetic Resonance Imaging/methods; Neural Networks, Computer; Neuroimaging/methods; Humans; ROC Curve
12.
Neuroimage Clin ; 29: 102522, 2021.
Article in English | MEDLINE | ID: mdl-33360973

ABSTRACT

INTRODUCTION: During the last decade, a multitude of novel quantitative and semiquantitative MRI techniques have provided new information about the pathophysiology of neurological diseases. Yet, selection of the most relevant contrasts for a given pathology remains challenging. In this work, we developed and validated a method, Gated-Attention MEchanism Ranking of multi-contrast MRI in brain pathology (GAMER MRI), to rank the relative importance of MR measures in the classification of well understood ischemic stroke lesions. Subsequently, we applied this method to the classification of multiple sclerosis (MS) lesions, where the relative importance of MR measures is less understood. METHODS: GAMER MRI was developed based on the gated attention mechanism, which computes attention weights (AWs) as proxies of the importance of hidden features in the classification. In the first two experiments, we used Trace-weighted (Trace), apparent diffusion coefficient (ADC), Fluid-Attenuated Inversion Recovery (FLAIR), and T1-weighted (T1w) images acquired in 904 acute/subacute ischemic stroke patients and in 6,230 healthy controls and patients with other brain pathologies to assess whether GAMER MRI could produce clinically meaningful importance orders in two different classification scenarios. In the first experiment, GAMER MRI with a pretrained convolutional neural network (CNN) was used in conjunction with Trace, ADC, and FLAIR to distinguish patients with ischemic stroke from those with other pathologies and healthy controls. In the second experiment, GAMER MRI with a patch-based CNN used Trace, ADC and T1w to differentiate acute ischemic stroke lesions from healthy tissue. The last experiment explored the performance of the patch-based CNN with GAMER MRI in ranking the importance of quantitative MRI measures to distinguish two groups of lesions with different pathological characteristics and unknown quantitative MR features. Specifically, GAMER MRI was applied to assess the relative importance of the myelin water fraction (MWF), quantitative susceptibility mapping (QSM), T1 relaxometry map (qT1), and neurite density index (NDI) in distinguishing 750 juxtacortical lesions from 242 periventricular lesions in 47 MS patients. Pairwise permutation t-tests were used to evaluate the differences between the AWs obtained for each quantitative measure. RESULTS: In the first experiment, we achieved a mean test AUC of 0.881, and the obtained AW of FLAIR and the summed AWs of Trace and ADC were 0.11 and 0.89, respectively, as expected based on previous knowledge. In the second experiment, we achieved a mean test F1 score of 0.895 and mean AWs of Trace = 0.49, ADC = 0.28, and T1w = 0.23, thereby confirming the findings of the first experiment. In the third experiment, MS lesion classification achieved test balanced accuracy = 0.777, sensitivity = 0.739, and specificity = 0.814. The mean AWs of qT1, MWF, NDI, and QSM were 0.29, 0.26, 0.24, and 0.22, respectively (p < 0.001). CONCLUSIONS: This work demonstrates that the proposed GAMER MRI might be a useful method to assess the relative importance of MRI measures in neurological diseases with focal pathology. Moreover, the obtained AWs may in fact help to choose the best combination of MR contrasts for a specific classification problem.
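
A minimal sketch of a gated attention module in the style the method builds on (after Ilse et al.), yielding one attention weight per MR contrast from its hidden feature vector; layer sizes and names are assumptions, and the surrounding CNN and training loop are omitted.

```python
# Sketch of a gated attention mechanism: a tanh "content" branch gated by a
# sigmoid branch, projected to one score per contrast and softmax-normalized,
# so the resulting attention weights (AWs) sum to 1.
import torch
import torch.nn as nn

class GatedAttention(nn.Module):
    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        self.V = nn.Linear(feat_dim, hidden)   # content branch
        self.U = nn.Linear(feat_dim, hidden)   # gate branch
        self.w = nn.Linear(hidden, 1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (n_contrasts, feat_dim) hidden features, one row per contrast
        gated = torch.tanh(self.V(feats)) * torch.sigmoid(self.U(feats))
        return torch.softmax(self.w(gated).squeeze(-1), dim=0)  # AWs summing to 1

aw = GatedAttention(feat_dim=128)(torch.randn(4, 128))  # e.g. Trace, ADC, FLAIR, T1w
print(aw, aw.sum())
```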


Subject(s)
Brain Ischemia; Stroke; Brain/diagnostic imaging; Diffusion Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging; Stroke/diagnostic imaging
13.
Opt Express ; 18(22): 23435-41, 2010 Oct 25.
Article in English | MEDLINE | ID: mdl-21164686

ABSTRACT

Rodent models of retinal degenerative diseases are used by vision scientists to develop therapies and to understand mechanisms of disease progression. Measurement of changes to the thickness of the various retinal layers provides an objective metric to evaluate the performance of a therapy. Because invasive histology is terminal and provides only a single data point, non-invasive imaging modalities are required to better study progression and to reduce the number of animals used in research. Optical Coherence Tomography (OCT) has emerged as a dominant imaging modality for human ophthalmic imaging, but has only recently gained significant attention for rodent retinal imaging. OCT provides cross-sectional images of the retina with micron-scale resolution, which permits measurement of retinal layer thickness. However, to be useful to vision scientists, a significant fraction of the retinal surface needs to be measured. In addition, because retinal thickness normally varies as a function of distance from the optic nerve head, it is critical to sample all regions of the retina in a systematic fashion. We present a longitudinal OCT study measuring retinal degeneration in rats that have undergone optic nerve axotomy, a well-characterized form of rapid retinal degeneration. Volumetric images of the retina acquired with OCT in a time-course study were segmented in 2D using a semi-automatic segmentation algorithm. Then, using a 3D algorithm, thickness measurements were quantified across the surface of the retina for all volume segmentations. The resulting maps of the changes to retinal thickness over time represent the progression of degeneration across the surface of the retina during injury. These computational tools complement OCT volumetric acquisition, resulting in a powerful tool for vision scientists working with rodents.


Subject(s)
Retinal Degeneration/pathology; Tomography, Optical Coherence/methods; Algorithms; Animals; Axotomy; Fundus Oculi; Humans; Image Processing, Computer-Assisted; Rats; Retina/pathology
14.
Eur J Radiol ; 126: 108918, 2020 May.
Article in English | MEDLINE | ID: mdl-32171914

ABSTRACT

PURPOSE: To evaluate the performance of an artificial intelligence (AI) based software solution for liver volumetric analyses and to compare the results to manual contour segmentation. MATERIALS AND METHODS: We retrospectively obtained 462 multiphasic CT datasets with six series for each patient: three different contrast phases and two slice-thickness reconstructions (1.5/5 mm), totaling 2772 series. AI-based liver volumes were determined using multi-scale deep-reinforcement learning for 3D body-marker detection and 3D structure segmentation. The algorithm was trained for liver volumetry on approximately 5000 datasets. We computed the absolute error of each automatically and manually derived volume relative to the mean manual volume. The mean processing time per dataset was recorded for each method. Variations of liver volumes were compared using univariate generalized linear model analyses. A subgroup of 60 datasets was manually segmented by three radiologists, with a further subgroup of 20 segmented three times by each, to compare the automatically derived results with the ground truth. RESULTS: The mean absolute error of the automatically derived measurement was 44.3 mL (representing 2.37% of the averaged liver volumes). The liver volume was dependent neither on the contrast phase (p = 0.697) nor on the slice thickness (p = 0.446). The mean processing time per dataset was 9.94 s with the algorithm, compared with 219.34 s for manual segmentation. We found excellent agreement between both approaches, with an ICC value of 0.996. CONCLUSION: The results of our study demonstrate that AI-powered, fully automated liver volumetric analyses can be performed with excellent accuracy, reproducibility, robustness, speed, and agreement with manual segmentation.


Subject(s)
Algorithms; Image Interpretation, Computer-Assisted/methods; Liver Diseases/diagnostic imaging; Tomography, X-Ray Computed/methods; Artificial Intelligence; Deep Learning; Humans; Liver/diagnostic imaging; Reproducibility of Results; Retrospective Studies
15.
Ann Biomed Eng ; 48(12): 3025, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32901381

ABSTRACT

The authors have noted an omission in the original acknowledgements. The correct acknowledgements are as follows: Acknowledgements: This work was partially supported by Grants from NSERC Discovery to Hagit Shatkay and Parvin Mousavi, NSERC and CIHR CHRP to Parvin Mousavi and NIH R01 LM012527, NIH U54 GM104941, NSF IIS EAGER #1650851 & NSF HDR #1940080 to Hagit Shatkay.

16.
J Med Imaging (Bellingham) ; 6(1): 011003, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30840715

ABSTRACT

Image guidance systems that register scans of the prostate obtained using transrectal ultrasound (TRUS) and magnetic resonance imaging are becoming increasingly popular as a means of enabling tumor-targeted prostate cancer biopsy and treatment. However, intraoperative segmentation of TRUS images to define the three-dimensional (3-D) geometry of the prostate remains a necessary task in existing guidance systems, one which often requires significant manual interaction and is subject to interoperator variability. Automating this step would therefore lead to more acceptable clinical workflows and greater standardization between different operators and hospitals. In this work, a convolutional neural network (CNN) for automatically segmenting the prostate in two-dimensional (2-D) TRUS slices of a 3-D TRUS volume was developed and tested. The network was designed to incorporate 3-D spatial information by taking as input, in addition to each slice to be segmented, one or more of its neighboring TRUS slices. The accuracy of the CNN was evaluated on data from a cohort of 109 patients who had undergone TRUS-guided targeted biopsy (a total of 4034 2-D slices). Segmentation accuracy was measured by calculating 2-D and 3-D Dice similarity coefficients on the 2-D images and corresponding 3-D volumes, respectively, as well as 2-D boundary distances, using a 10-fold patient-level cross-validation experiment. However, incorporating neighboring slices did not improve segmentation performance in five out of six experiments, which varied the number of neighboring slices from 1 to 3 on either side. The up-sampling shortcuts reduced the overall training time of the network to 161 min, compared with 253 min without this architectural addition.
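
A small sketch of the neighboring-slice input scheme described above: each 2-D slice is stacked with k neighbors on either side as extra input channels. The edge-padding rule (repeat the boundary slice) and the value of k are assumptions for illustration.

```python
# Sketch of a 2.5D input builder: stack a slice with its k neighbours on each
# side as channels; boundary slices are padded by repetition.
import numpy as np

def stack_neighbors(volume: np.ndarray, idx: int, k: int = 1) -> np.ndarray:
    # volume: (n_slices, H, W); returns a (2k+1, H, W) input for slice idx
    picks = np.clip(np.arange(idx - k, idx + k + 1), 0, volume.shape[0] - 1)
    return volume[picks]

vol = np.random.rand(30, 64, 64)
x = stack_neighbors(vol, idx=0, k=2)
print(x.shape)  # (5, 64, 64), first slice repeated at the boundary
```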

17.
Med Image Anal ; 58: 101558, 2019 12.
Article in English | MEDLINE | ID: mdl-31526965

ABSTRACT

Convolutional neural networks (CNNs) have recently led to significant advances in the automatic segmentation of anatomical structures in medical images, and a wide variety of network architectures are now available to the research community. For applications such as segmentation of the prostate in magnetic resonance images (MRI), the results of the PROMISE12 online algorithm evaluation platform have demonstrated differences between the best-performing segmentation algorithms in terms of numerical accuracy using standard metrics such as the Dice score and boundary distance. These small differences in the segmented regions/boundaries output by different algorithms may have an insubstantial impact on the results of downstream image analysis tasks, such as estimating organ volume and multimodal image registration, which inform clinical decisions. This impact has not been previously investigated. In this work, we quantified the accuracy of six different CNNs in segmenting the prostate in 3D patient T2-weighted MRI scans and compared the accuracy of organ volume estimation and MRI-ultrasound (US) registration errors using the prostate segmentations produced by the different networks. Networks were trained and tested using a set of 232 patient MRIs with labels provided by experienced clinicians. A statistically significant difference was found among the Dice scores and boundary distances produced by these networks in a non-parametric analysis of variance (p < 0.001 and p < 0.001, respectively), where subsequent multiple-comparison tests revealed that the statistically significant differences in segmentation errors were attributable to at least one tested network. Gland volume errors (GVEs) and target registration errors (TREs) were then estimated using the CNN-generated segmentations. Interestingly, no statistical difference was found in either GVEs or TREs among the different networks (p = 0.34 and p = 0.26, respectively). This result provides a real-world example in which networks with different segmentation performance may provide indistinguishably adequate registration accuracy for prostate cancer imaging applications. We conclude by recommending that, when selecting between different network architectures, differences in the accuracy of downstream image analysis tasks that consume automatic segmentations within a clinical pipeline should be taken into account, in addition to reporting the segmentation accuracy.
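
One common choice for the "non-parametric analysis of variance" mentioned above is the Kruskal-Wallis test; a toy sketch over per-case Dice scores from several networks follows. The scores are synthetic placeholders, and the abstract does not name the specific test used.

```python
# Sketch of a Kruskal-Wallis test (non-parametric one-way ANOVA) across
# per-case Dice scores from six networks; synthetic data for illustration.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(2)
dice_by_net = [np.clip(rng.normal(m, 0.05, size=40), 0, 1)
               for m in (0.86, 0.87, 0.88, 0.85, 0.87, 0.89)]  # six CNNs
stat, p = kruskal(*dice_by_net)
print(f"H={stat:.2f}, p={p:.4f}")
```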


Subject(s)
Magnetic Resonance Imaging; Neural Networks, Computer; Pattern Recognition, Automated/methods; Prostatic Neoplasms/diagnostic imaging; Ultrasonography; Humans; Male; Tumor Burden
18.
Med Image Anal ; 44: 54-71, 2018 02.
Article in English | MEDLINE | ID: mdl-29190576

ABSTRACT

As the interaction between clinicians and computational processes increases in complexity, more nuanced mechanisms are required to describe how their communication is mediated. Medical image segmentation in particular affords a large number of distinct loci for interaction which can act on a deep, knowledge-driven level which complicates the naive interpretation of the computer as a symbol processing machine. Using the perspective of the computer as dialogue partner, we can motivate the semiotic understanding of medical image segmentation. Taking advantage of Peircean semiotic traditions and new philosophical inquiry into the structure and quality of metaphors, we can construct a unified framework for the interpretation of medical image segmentation as a sign exchange in which each sign acts as an interface metaphor. This allows for a notion of finite semiosis, described through a schematic medium, that can rigorously describe how clinicians and computers interpret the signs mediating their interaction. Altogether, this framework provides a unified approach to the understanding and development of medical image segmentation interfaces.


Subject(s)
Algorithms; Image Processing, Computer-Assisted/methods; Pattern Recognition, Automated/methods; User-Computer Interface; Machine Learning; Models, Theoretical; Reproducibility of Results; Sensitivity and Specificity; Systems Integration
19.
IEEE Trans Biomed Eng ; 65(8): 1798-1809, 2018 08.
Article in English | MEDLINE | ID: mdl-29989922

ABSTRACT

OBJECTIVES: Temporal enhanced ultrasound (TeUS) is a new ultrasound-based imaging technique that provides tissue-specific information. Recent studies have shown the potential of TeUS for improving tissue characterization in prostate cancer diagnosis. We study the temporal properties of TeUS (temporal order and length) and present a new framework to assess their impact on tissue information. METHODS: We utilize a probabilistic modeling approach using hidden Markov models (HMMs) to capture the temporal signatures of malignant and benign tissues from TeUS signals of nine patients. We model signals of benign and malignant tissues (284 and 286 signals, respectively) in their original temporal order as well as under order permutations. We then compare the resulting models using the Kullback-Leibler divergence and assess their performance differences in characterization. Moreover, we train HMMs using TeUS signals of different durations and compare their performance when differentiating tissue types. RESULTS: Our findings demonstrate that models of order-preserved signals perform statistically significantly better (85% accuracy) in tissue characterization than models of order-altered signals (62% accuracy). The performance degrades as more changes in signal order are introduced. Additionally, models trained on shorter sequences perform as accurately as models of longer sequences. CONCLUSION: The work presented here strongly indicates that temporal order has a substantial impact on TeUS performance and thus plays a significant role in conveying tissue-specific information. Furthermore, shorter TeUS signals can relay sufficient information to accurately distinguish between tissue types. SIGNIFICANCE: Understanding the impact of TeUS properties facilitates its adoption in diagnostic procedures and provides insights for improving its acquisition.


Subject(s)
Image Interpretation, Computer-Assisted/methods; Prostate/diagnostic imaging; Prostatic Neoplasms/diagnostic imaging; Ultrasonography/methods; Humans; Male; Markov Chains; Sensitivity and Specificity; Stochastic Processes
20.
IEEE Trans Med Imaging ; 37(8): 1822-1834, 2018 08.
Article in English | MEDLINE | ID: mdl-29994628

ABSTRACT

Automatic segmentation of abdominal anatomy on computed tomography (CT) images can support diagnosis, treatment planning, and treatment delivery workflows. Segmentation methods using statistical models and multi-atlas label fusion (MALF) require inter-subject image registrations, which are challenging for abdominal images, but alternative methods without registration have not yet achieved higher accuracy for most abdominal organs. We present a registration-free deep-learning-based segmentation algorithm for eight organs that are relevant for navigation in endoscopic pancreatic and biliary procedures, including the pancreas, the gastrointestinal tract (esophagus, stomach, and duodenum) and surrounding organs (liver, spleen, left kidney, and gallbladder). We directly compared the segmentation accuracy of the proposed method to the existing deep learning and MALF methods in a cross-validation on a multi-centre data set with 90 subjects. The proposed method yielded significantly higher Dice scores for all organs and lower mean absolute distances for most organs, including Dice scores of 0.78 versus 0.71, 0.74, and 0.74 for the pancreas, 0.90 versus 0.85, 0.87, and 0.83 for the stomach, and 0.76 versus 0.68, 0.69, and 0.66 for the esophagus. We conclude that the deep-learning-based segmentation represents a registration-free method for multi-organ abdominal CT segmentation whose accuracy can surpass current methods, potentially supporting image-guided navigation in gastrointestinal endoscopy procedures.


Subject(s)
Radiographic Image Interpretation, Computer-Assisted/methods; Radiography, Abdominal/methods; Tomography, X-Ray Computed/methods; Algorithms; Digestive System/diagnostic imaging; Humans; Kidney/diagnostic imaging; Spleen/diagnostic imaging