Results 1 - 20 of 57
1.
Magn Reson Med ; 88(1): 464-475, 2022 07.
Article in English | MEDLINE | ID: mdl-35344602

ABSTRACT

PURPOSE: Parallel RF transmission (PTx) is one of the key technologies enabling high-quality imaging at ultra-high fields (≥7T). Compliance with regulatory limits on the local specific absorption rate (SAR) typically involves over-conservative safety margins to account for intersubject variability, which negatively affect the utilization of ultra-high field MR. In this work, we present a method to generate a subject-specific body model from a single T1-weighted dataset for personalized local SAR prediction in PTx neuroimaging at 7T. METHODS: Multi-contrast data were acquired at 7T (N = 10) to establish ground-truth segmentations in eight tissue types. A 2.5D convolutional neural network was trained using the T1-weighted data as input in a leave-one-out cross-validation study. The segmentation accuracy was evaluated through local SAR simulations in a quadrature birdcage as well as a PTx coil model. RESULTS: The network-generated segmentations reached Dice coefficients of 86.7% ± 6.7% (mean ± SD) and were shown to successfully address the severe intensity bias and contrast variations typical of 7T. Errors in the peak local SAR obtained were below 3.0% in the quadrature birdcage. Results obtained in the PTx configuration indicated that a safety margin of 6.3% ensures conservative local SAR estimates in 95% of the random RF shims, compared to an average overestimation of 34% in the generic "one-size-fits-all" approach. CONCLUSION: A subject-specific body model can be automatically generated from a single T1-weighted dataset by means of deep learning, providing the necessary inputs for accurate and personalized local SAR predictions in PTx neuroimaging at 7T.


Subject(s)
Magnetic Resonance Imaging , Neuroimaging , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Phantoms, Imaging
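The Dice coefficient used above to evaluate the network-generated segmentations can be computed directly from binary masks. A minimal sketch (the function name and mask representation are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

A per-tissue Dice can be obtained by applying this to each of the eight tissue labels in turn and averaging.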
2.
NMR Biomed ; 35(9): e4746, 2022 09.
Article in English | MEDLINE | ID: mdl-35466446

ABSTRACT

Background suppression (BGS) in arterial spin labeling (ASL) magnetic resonance imaging leads to a higher temporal signal-to-noise ratio (tSNR) of the perfusion images compared with ASL without BGS. The performance of the BGS, however, depends on the tissue relaxation times and on inhomogeneities of the scanner's magnetic fields, which differ between subjects and are unknown at the moment of scanning. Therefore, we developed a feedback loop (FBL) mechanism that optimizes the BGS for each subject in the scanner during acquisition. We implemented the FBL for 2D pseudo-continuous ASL scans with an echo-planar imaging readout. After each dynamic scan, the acquired ASL images were automatically sent to an external computer and processed with a Python processing tool. Inversion times were optimized on the fly using 80 iterations of the Nelder-Mead method, by minimizing the signal intensity in the label image while maximizing the signal intensity in the perfusion image. The performance of this method was first tested in a four-component phantom. The regularization parameter was then tuned in six healthy subjects (three males, three females, age 24-62 years) and set to λ = 4 for all other experiments. The resulting ASL images, perfusion images, and tSNR maps obtained from the last 20 iterations of the FBL scan were compared with those obtained without BGS and with standard BGS in 12 healthy volunteers (five males, seven females, age 24-62 years), including the six volunteers used for tuning of λ. The FBL resulted in perfusion images with a statistically significantly higher tSNR (2.20) compared with standard BGS (1.96) (p < 5 × 10⁻³, two-sided paired t-test). Minimizing signal in the label image furthermore resulted in control images from which approximate changes in perfusion signal can be directly appreciated. This could be relevant to ASL applications that require a high temporal resolution.
Future work is needed to minimize the number of initial acquisitions during which the performance of BGS is reduced compared with standard BGS, and to extend the technique to 3D ASL.


Subject(s)
Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Cerebrovascular Circulation , Feedback , Female , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Male , Signal-To-Noise Ratio , Spin Labels
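The on-the-fly inversion-time optimization described above can be sketched with SciPy's Nelder-Mead implementation. The objective below (suppress label-image signal while rewarding perfusion signal, weighted by λ) and the toy "scanner" function are illustrative assumptions; the authors' actual objective and scanner interface may differ:

```python
import numpy as np
from scipy.optimize import minimize

def make_objective(acquire, lam=4.0):
    """Objective balancing background suppression against perfusion signal.

    `acquire` is a stand-in for the scanner interface: given inversion
    times, it returns (label_image, perfusion_image) as arrays.
    """
    def objective(inversion_times):
        label, perfusion = acquire(inversion_times)
        # minimize residual background signal, reward perfusion signal
        return np.abs(label).mean() - lam * np.abs(perfusion).mean()
    return objective

def toy_acquire(tis):
    """Toy 'scanner': background signal vanishes as TIs approach 1200 ms."""
    background = np.abs(np.cos(tis / 1200.0 * np.pi / 2)).sum()
    return np.array([background]), np.array([1.0])

result = minimize(make_objective(toy_acquire, lam=4.0),
                  x0=np.array([800.0, 1600.0]),
                  method="Nelder-Mead",
                  options={"maxiter": 80})  # 80 iterations, as in the study
```

In the real feedback loop, `acquire` would be replaced by the round trip through the scanner and the external processing computer.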
3.
Neuroimage ; 178: 445-460, 2018 09.
Article in English | MEDLINE | ID: mdl-29802968

ABSTRACT

In recent years, machine learning approaches have been successfully applied to the field of neuroimaging for classification and regression tasks. However, many approaches do not give an intuitive relation between the raw features and the diagnosis, which makes them difficult for clinicians to interpret. Moreover, most approaches treat the features extracted from the brain (for example, voxelwise gray matter concentration maps from brain MRI) as independent variables and ignore their spatial and anatomical relations. In this paper, we present a new Support Vector Machine (SVM)-based learning method for the classification of Alzheimer's disease (AD), which integrates spatial-anatomical information so that spatially neighboring features in the same anatomical region are encouraged to have similar weights in the SVM model. Additionally, we introduce a group lasso penalty to induce structured sparsity, which may help clinicians to assess the key regions involved in the disease. For solving this learning problem, we use an accelerated proximal gradient descent approach. We tested our method on the subset of ADNI data selected by Cuingnet et al. (2011) for Alzheimer's disease classification, as well as on an independent larger dataset from ADNI. Good classification performance was obtained for distinguishing cognitively normal (CN) subjects vs. AD, as well as for distinguishing between various sub-types (e.g. CN vs. Mild Cognitive Impairment). The model trained on Cuingnet's dataset for AD vs. CN classification was directly applied, without re-training, to the independent larger dataset, where good performance was also achieved, demonstrating the generalizability of the proposed method. For all experiments, the classification results are comparable to or better than the state-of-the-art, while the weight map more clearly indicates the key regions related to Alzheimer's disease.


Subject(s)
Alzheimer Disease/classification , Alzheimer Disease/diagnostic imaging , Brain Mapping/methods , Image Interpretation, Computer-Assisted/methods , Support Vector Machine , Aged , Aged, 80 and over , Brain/diagnostic imaging , Female , Humans , Male , Middle Aged
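The group lasso penalty mentioned above is handled in proximal gradient methods through block soft-thresholding. A sketch of the proximal operator (the grouping structure shown is illustrative, not the paper's anatomical atlas):

```python
import numpy as np

def prox_group_lasso(w, groups, lam, step):
    """Block soft-thresholding: proximal operator of the group-lasso
    penalty lam * sum_g ||w_g||_2, applied with step size `step`.

    `groups` is a list of index sets, one per anatomical region, mapping
    each region to the indices of its voxel weights.
    """
    out = w.copy()
    for idx in groups:
        norm = np.linalg.norm(w[idx])
        # shrink the whole block toward zero; kill it if the norm is small
        scale = max(0.0, 1.0 - lam * step / norm) if norm > 0 else 0.0
        out[idx] = scale * w[idx]
    return out
```

In an accelerated proximal gradient scheme this operator is applied after each (extrapolated) gradient step, zeroing out entire regions and yielding the region-level sparsity the abstract describes.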
4.
Magn Reson Med ; 77(1): 422-433, 2017 01.
Article in English | MEDLINE | ID: mdl-26834001

ABSTRACT

PURPOSE: To develop and validate a method for performing inter-station intensity standardization in multispectral whole-body MR data. METHODS: Different approaches for mapping the intensity of each acquired image stack into the reference intensity space were developed and validated. The registration strategies included "direct" registration to the reference station (Strategy 1) and "progressive" registration to the neighboring stations, either without (Strategy 2) or with (Strategy 3) using information from the overlap regions of the neighboring stations. For Strategy 3, two regularized modifications were proposed and validated. All methods were tested on two multispectral whole-body MR data sets: a multiple myeloma patient data set (48 subjects) and a whole-body MR angiography data set (33 subjects). RESULTS: For both data sets, all strategies showed significant improvement of intensity homogeneity with respect to the vast majority of the validation measures (P < 0.005). Strategy 1 exhibited the best performance, closely followed by Strategy 2. Strategy 3 and its modifications performed worse, in the majority of cases significantly (P < 0.05). CONCLUSIONS: We propose several strategies for performing inter-station intensity standardization in multispectral whole-body MR data. All the strategies were successfully applied to two types of whole-body MR data, and the "direct" registration strategy was concluded to perform best. Magn Reson Med 77:422-433, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine.


Subject(s)
Magnetic Resonance Imaging/methods , Magnetic Resonance Imaging/standards , Whole Body Imaging/methods , Whole Body Imaging/standards , Humans , Imaging, Three-Dimensional , Magnetic Resonance Angiography , Multiple Myeloma/diagnostic imaging , Reproducibility of Results
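The core of inter-station intensity mapping can be illustrated with a simple least-squares linear map estimated from corresponding (e.g. overlapping) voxels; the registration-based strategies in the paper are considerably more involved, so this is only a conceptual sketch:

```python
import numpy as np

def estimate_linear_mapping(station, reference):
    """Least-squares fit of a*x + b mapping `station` intensities onto
    `reference` intensities over corresponding voxels. A simplified
    stand-in for inter-station intensity standardization."""
    a, b = np.polyfit(station.ravel(), reference.ravel(), deg=1)
    return a, b

def standardize(station, a, b):
    """Apply the estimated mapping to a whole image stack."""
    return a * station + b
```

In a "progressive" scheme, such mappings would be composed station by station toward the reference; in a "direct" scheme, each station is mapped straight to the reference space.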
5.
Neuroimage ; 125: 144-152, 2016 Jan 15.
Article in English | MEDLINE | ID: mdl-26458518

ABSTRACT

With wide access to studies of selected gene expression in transgenic animals, mice have become the dominant species for cerebral disease models. Many of these studies are performed on animals no more than eight weeks old, declared to be adults. Based on earlier reports that full brain maturation requires at least three months in rats, there is a clear need to determine the corresponding minimal age that provides an "adult brain" in mice, in order to avoid modulation of disease progression/therapy studies by ongoing developmental changes. For this purpose, we have studied anatomical brain alterations of mice during their first six months of age. Using T2-weighted and diffusion-weighted MRI, structural and volume changes of the brain were identified and compared with histological analysis of myelination. Mouse brain volume was found to be almost stable already at three weeks, but cortex thickness kept decreasing continuously, with maximal changes during the first three months. Myelination still increases between three and six months, although the most dramatic changes are over by three months. While our results emphasize that mice should be at least three months old when adult animals are needed for brain studies, the choice of one particular metric for a given investigation will result in a somewhat different age window of stabilization.


Subject(s)
Brain/growth & development , Mice/growth & development , Animals , Diffusion Magnetic Resonance Imaging , Image Processing, Computer-Assisted , Mice, Inbred C57BL , Neurogenesis/physiology
6.
Neuroimage ; 84: 35-44, 2014 Jan 01.
Article in English | MEDLINE | ID: mdl-23994458

ABSTRACT

Longitudinal studies on brain pathology and assessment of therapeutic strategies rely on a fully mature adult brain to exclude confounds of cerebral developmental changes. Thus, knowledge about the onset of adulthood is indispensable for discriminating between the developmental phase and adulthood. We have performed a high-resolution longitudinal MRI study at 11.7T of male Wistar rats between 21 days and six months of age, characterizing cerebral volume changes and tissue-specific myelination as a function of age. Cortical thickness reaches its final value at one month, while volume increases of cortex, striatum and whole brain end only after two months. Myelin accretion is pronounced until the end of the third postnatal month. After this time, continuing myelination increases in cortex are still seen on histological analysis but are no longer reliably detectable with diffusion-weighted MRI due to parallel tissue restructuring processes. In conclusion, cerebral development continues over the first three months of age. This is of relevance for future studies on brain disease models, which should not start before the end of month three to exclude serious confounds of continuing tissue development.


Subject(s)
Aging/pathology , Cerebral Cortex/anatomy & histology , Corpus Striatum/anatomy & histology , Nerve Fibers, Myelinated/ultrastructure , Aging/physiology , Animals , Cerebral Cortex/physiology , Corpus Striatum/physiology , Diffusion Tensor Imaging , Male , Nerve Fibers, Myelinated/physiology , Organ Size , Rats , Rats, Wistar
7.
Med Phys ; 51(5): 3555-3565, 2024 May.
Article in English | MEDLINE | ID: mdl-38167996

ABSTRACT

BACKGROUND: Magnetic resonance acquisition is a time-consuming process, making it susceptible to patient motion during scanning. Even motion on the order of a millimeter can introduce severe blurring and ghosting artifacts, potentially necessitating re-acquisition. Magnetic resonance imaging (MRI) can be accelerated by acquiring only a fraction of k-space, combined with advanced reconstruction techniques leveraging coil sensitivity profiles and prior knowledge. Artificial intelligence (AI)-based reconstruction techniques have recently been popularized, but generally assume an ideal setting without intra-scan motion. PURPOSE: To retrospectively detect and quantify the severity of motion artifacts in undersampled MRI data. This may prove valuable as a safety mechanism for AI-based approaches, provide useful information to the reconstruction method, or prompt re-acquisition while the patient is still in the scanner. METHODS: We developed a deep learning approach that detects and quantifies motion artifacts in undersampled brain MRI. We demonstrate that synthetically motion-corrupted data can be leveraged to train the convolutional neural network (CNN)-based motion artifact estimator, generalizing well to real-world data. Additionally, we leverage the motion artifact estimator by using it as a selector: a motion-robust reconstruction model is chosen when a considerable amount of motion is detected, and a high data-consistency model otherwise. RESULTS: Training and validation were performed on 4387 and 1304 synthetically motion-corrupted images and their uncorrupted counterparts, respectively. Testing was performed on undersampled in vivo motion-corrupted data from 28 volunteers, where our model distinguished head motion from motion-free scans with 91% and 96% accuracy when trained on synthetic and on real data, respectively. It predicted a manually defined quality label ('Good', 'Medium' or 'Bad' quality) correctly 76% and 85% of the time when trained on synthetic and real data, respectively. When used as a selector, it chose the appropriate reconstruction network 93% of the time, achieving near-optimal SSIM values. CONCLUSIONS: The proposed method quantified motion artifact severity in undersampled MRI data with high accuracy, enabling real-time motion artifact detection that can help improve the safety and quality of AI-based reconstructions.


Subject(s)
Artifacts , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Motion , Magnetic Resonance Imaging/methods , Image Processing, Computer-Assisted/methods , Humans , Artificial Intelligence , Brain/diagnostic imaging , Deep Learning
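The selector mechanism described above can be sketched as simple threshold logic on the estimator's output; the threshold values and model names below are assumptions, not from the paper:

```python
def quality_label(score, bins=(0.33, 0.66)):
    """Map a continuous artifact-severity score in [0, 1] to the manual
    quality labels used in the study (bin edges are assumptions)."""
    if score < bins[0]:
        return "Good"
    if score < bins[1]:
        return "Medium"
    return "Bad"

def select_reconstruction(score, threshold=0.5):
    """Use the artifact estimator as a selector: route to the
    motion-robust model when considerable motion is detected, and to the
    high data-consistency model otherwise."""
    return "motion_robust" if score >= threshold else "high_data_consistency"
```

In practice the score would come from the CNN-based estimator, and each returned name would dispatch to the corresponding reconstruction network.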
8.
Nat Rev Rheumatol ; 20(3): 182-195, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38332242

ABSTRACT

Artificial intelligence techniques, specifically deep learning, have already affected daily life in a wide range of areas. Likewise, initial applications have been explored in rheumatology. Deep learning might not easily surpass the accuracy of classic techniques when performing classification or regression on low-dimensional numerical data. With images as input, however, deep learning has become so successful that it has already outperformed the majority of conventional image-processing techniques developed during the past 50 years. As with any new imaging technology, rheumatologists and radiologists need to consider adapting their arsenal of diagnostic, prognostic and monitoring tools, and even their clinical role and collaborations. This adaptation requires a basic understanding of the technical background of deep learning, to efficiently utilize its benefits but also to recognize its drawbacks and pitfalls, as blindly relying on deep learning might be at odds with its capabilities. To facilitate such an understanding, it is necessary to provide an overview of deep-learning techniques for automatic image analysis in detecting, quantifying, predicting and monitoring rheumatic diseases, and of currently published deep-learning applications in radiological imaging for rheumatology, with critical assessment of possible limitations, errors and confounders, and conceivable consequences for rheumatologists and radiologists in clinical practice.


Subject(s)
Deep Learning , Rheumatic Diseases , Rheumatology , Humans , Artificial Intelligence , Diagnostic Imaging , Rheumatic Diseases/diagnostic imaging
9.
Article in English | MEDLINE | ID: mdl-38194372

ABSTRACT

Ensembles of contours arise in various applications such as simulation, computer-aided design, and semantic segmentation. Uncovering ensemble patterns and analyzing individual members is a challenging task that suffers from clutter. Ensemble statistical summarization can alleviate this issue by permitting analysis of an ensemble's distributional components, such as the mean and median, confidence intervals, and outliers. Contour boxplots, powered by Contour Band Depth (CBD), are a popular non-parametric ensemble summarization method that benefits from CBD's generality, robustness, and theoretical properties. In this work, we introduce Inclusion Depth (ID), a new notion of contour depth with three defining characteristics. First, ID is a generalization of functional Half-Region Depth, which offers several theoretical guarantees. Second, ID relies on a simple principle: the inside/outside relationships between contours. This makes ID easy to implement and its results easy to understand. Third, the computational complexity of ID scales quadratically in the number of ensemble members, improving on CBD's cubic complexity. In practice, this speeds up the computation, enabling the use of ID for exploring large contour ensembles or in contexts requiring multiple depth evaluations, such as clustering. In a series of experiments on synthetic data and case studies with meteorological and segmentation data, we evaluate ID's performance and demonstrate its capabilities for the visual analysis of contour ensembles.
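The inside/outside principle behind ID can be sketched on binary masks; strict voxelwise containment is an illustrative simplification of contour inclusion, and the pairwise loop mirrors the stated quadratic scaling in ensemble size:

```python
import numpy as np

def inclusion_depth(masks):
    """Simplified inclusion depth for an ensemble of binary masks.

    For each member, count how many other members contain it and how many
    it contains (inside/outside relations); the depth is the smaller of
    the two fractions. Quadratic in the number of ensemble members.
    """
    n = len(masks)
    depths = []
    for i in range(n):
        contained_in = sum(np.all(masks[i] <= m)
                           for j, m in enumerate(masks) if j != i)
        contains = sum(np.all(m <= masks[i])
                       for j, m in enumerate(masks) if j != i)
        depths.append(min(contained_in, contains) / (n - 1))
    return depths
```

For a strictly nested ensemble, the innermost and outermost members get depth 0 and the middle members score higher, matching the intuition that deep members sit centrally in the ensemble.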

10.
Phys Imaging Radiat Oncol ; 30: 100572, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38633281

ABSTRACT

Background and purpose: Retrospective dose evaluation for organ-at-risk auto-contours has previously used small cohorts due to the additional manual effort required for treatment planning on auto-contours. We aimed to do this at large scale, by a) proposing and assessing an automated plan optimization workflow that reuses existing clinical plan parameters and b) using it for head-and-neck auto-contour dose evaluation. Materials and methods: Our automated workflow emulated our clinic's treatment planning protocol and reused existing clinical plan optimization parameters. This workflow recreated the original clinical plan (POG) with manual contours (PMC) and evaluated the dose effect (POG-PMC) on 70 photon and 30 proton plans of head-and-neck patients. As a use-case, the same workflow (and parameters) created a plan using auto-contours (PAC) of eight head-and-neck organs-at-risk from a commercial tool and evaluated their dose effect (PMC-PAC). Results: For plan recreation (POG-PMC), our workflow had a median impact of 1.0% and 1.5% across dose metrics of auto-contours, for photon and proton plans respectively. The computation time of automated planning was 25% (photon) and 42% (proton) of the manual planning time. For auto-contour evaluation (PMC-PAC), we observed an impact of 2.0% and 2.6% for photon and proton radiotherapy. All evaluations had a median ΔNTCP (Normal Tissue Complication Probability) of less than 0.3%. Conclusions: The plan replication capability of our automated program provides a blueprint for other clinics to perform auto-contour dose evaluation with large patient cohorts. Finally, despite geometric differences, auto-contours had a minimal median dose impact, inspiring confidence in their utility and facilitating their clinical adoption.

11.
Otolaryngol Head Neck Surg ; 169(6): 1582-1589, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37555251

ABSTRACT

OBJECTIVE: Validation of automated 2-dimensional (2D) diameter measurements of vestibular schwannomas on magnetic resonance imaging (MRI). STUDY DESIGN: Retrospective validation study using 2 data sets containing MRIs of vestibular schwannoma patients. SETTING: University hospital in The Netherlands. METHODS: Two data sets were used, 1 containing 1 scan per patient (n = 134) and the other containing at least 3 consecutive MRIs of 51 patients, all with contrast-enhanced T1 or high-resolution T2 sequences. 2D measurements of the maximal extrameatal diameters in the axial plane were automatically derived from a 3D convolutional neural network and compared to manual measurements by 2 human observers. Intra- and interobserver variabilities were calculated using the intraclass correlation coefficient (ICC), and agreement on tumor progression using Cohen's kappa. RESULTS: The human intra- and interobserver variability showed a high correlation (ICC: 0.98-0.99) and limits of agreement of 1.7 to 2.1 mm. Comparing the automated to human measurements resulted in ICCs of 0.98 (95% confidence interval [CI]: 0.974; 0.987) and 0.97 (95% CI: 0.968; 0.984), with limits of agreement of 2.2 and 2.1 mm for diameters parallel and perpendicular to the posterior side of the temporal bone, respectively. There was satisfactory agreement on tumor progression between automated measurements and human observers (Cohen's κ = 0.77), better than the agreement between the human observers (Cohen's κ = 0.74). CONCLUSION: Automated 2D diameter measurements and growth detection of vestibular schwannomas are at least as accurate as human 2D measurements. In clinical practice, measurements of the maximal extrameatal tumor (2D) diameters of vestibular schwannomas provide important complementary information to total tumor volume (3D) measurements. Combining both in an automated measurement algorithm facilitates clinical adoption.


Subject(s)
Neuroma, Acoustic , Humans , Neuroma, Acoustic/diagnostic imaging , Neuroma, Acoustic/pathology , Artificial Intelligence , Retrospective Studies , Algorithms , Magnetic Resonance Imaging/methods , Reproducibility of Results
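Cohen's kappa, used above to assess agreement on tumor progression, corrects the observed agreement for the agreement expected by chance. A minimal implementation:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for categorical calls (e.g. progression yes/no)
    between two observers: (p_observed - p_expected) / (1 - p_expected)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    # chance agreement from each rater's marginal label frequencies
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)
```

A kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance; the reported κ = 0.77 between automated and human calls sits in the "substantial agreement" range.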
12.
Pulm Circ ; 13(2): e12223, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37128354

ABSTRACT

The shape and distribution of vascular lesions in pulmonary embolism (PE) and chronic thromboembolic pulmonary hypertension (CTEPH) are different. We investigated whether automated quantification of pulmonary vascular morphology and densitometry in arteries and veins imaged by computed tomographic pulmonary angiography (CTPA) could distinguish PE from CTEPH. We analyzed CTPA images from a cohort of 16 PE patients, 6 CTEPH patients, and 15 controls. Pulmonary vessels were extracted with a graph-cut method and separated into arteries and veins using deep-learning classification. Vascular morphology was quantified by the slope (α) and intercept (β) of the vessel radii distribution. To quantify lung perfusion defects, the median pulmonary vascular density was calculated. By combining these measurements with densities measured in parenchymal areas, the pulmonary trunk, and the descending aorta, a static perfusion curve was constructed. All separate quantifications were compared between the three groups. No vascular morphology differences were detected, in contrast to vascular density values. The median vascular density (interquartile range) was -567 (113), -452 (95), and -470 (323) HU for the control, PE, and CTEPH groups, respectively. The static perfusion curves showed different patterns between groups, with a statistically significant difference in the aorta-pulmonary trunk gradient between the PE and CTEPH groups (p = 0.008). In this proof-of-concept study, vascular densities, not vascular morphology, differentiated among the three groups. Further technical improvements are needed to allow for accurate differentiation between PE and CTEPH, which in this study was only possible statistically, by measuring the density gradient between the aorta and the pulmonary trunk.
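The slope (α) and intercept (β) of the vessel radii distribution can be estimated with a line fit; the binning and log-count convention below are assumptions for illustration, since the paper's exact fitting procedure is not described here:

```python
import numpy as np

def radius_distribution_fit(radii, bins=10):
    """Fit slope (alpha) and intercept (beta) of the vessel radius
    distribution, here as a straight line through log-counts per radius
    bin. Healthy vascular trees have many small and few large vessels,
    so alpha is expected to be negative."""
    counts, edges = np.histogram(radii, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    keep = counts > 0  # log is undefined for empty bins
    alpha, beta = np.polyfit(centers[keep], np.log(counts[keep]), deg=1)
    return alpha, beta
```

Changes in α or β between groups would then reflect a shift in the balance of small versus large vessels, which is the morphology signal the study tested.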

13.
Sci Rep ; 12(1): 1822, 2022 02 02.
Article in English | MEDLINE | ID: mdl-35110676

ABSTRACT

For image-guided small animal irradiations, the whole workflow of imaging, organ contouring, irradiation planning, and delivery is typically performed in a single session, requiring continuous administration of anaesthetic agents. Automating contouring leads to a faster workflow, which limits exposure to anaesthesia, thereby reducing its impact on experimental results and on animal wellbeing. Here, we trained the 2D and 3D U-Net architectures of no-new-Net (nnU-Net) for autocontouring of the thorax in mouse micro-CT images. We trained the models only on native CTs and evaluated their performance using an independent testing dataset (i.e., native CTs not included in the training and validation). Unlike previous studies, we also tested model performance on an external dataset (i.e., contrast-enhanced CTs) to see how well the models predict on CTs completely different from what they were trained on. We also assessed the interobserver variability using the generalized conformity index ([Formula: see text]) among three observers, providing a stronger human baseline for evaluating automated contours than previous studies. Lastly, we showed the benefit in contouring time compared to manual contouring. The results show that the 3D models of nnU-Net achieve superior segmentation accuracy and are more robust to unseen data than the 2D models. For all target organs, the mean surface distance (MSD) and the Hausdorff distance (95p HD) of the best performing model for this task (nnU-Net 3d_fullres) are within 0.16 mm and 0.60 mm, respectively. These values are below the minimum required contouring accuracy of 1 mm for small animal irradiations and improve significantly upon the state-of-the-art 2D U-Net-based AIMOS method. Moreover, the conformity indices of the 3d_fullres model also compare favourably to the interobserver variability for all target organs, whereas the 2D models perform poorly in this regard. Importantly, the 3d_fullres model offers a 98% reduction in contouring time.


Subject(s)
Deep Learning , Radiographic Image Interpretation, Computer-Assisted , Radiography, Thoracic , Thorax/diagnostic imaging , X-Ray Microtomography , Animals , Female , Mice, Inbred BALB C , Observer Variation , Predictive Value of Tests , Reproducibility of Results , Workflow
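The MSD and 95th-percentile Hausdorff distance used above can be computed from surface point sets; extracting surface voxels from the contour masks is omitted here for brevity:

```python
import numpy as np
from scipy.spatial.distance import cdist

def surface_distances(surface_a, surface_b):
    """Mean surface distance (MSD) and 95th-percentile Hausdorff distance
    (HD95) between two surfaces given as (N, 3) point sets, in mm.

    Distances are taken symmetrically: each point's nearest neighbour on
    the other surface, pooled over both directions.
    """
    d = cdist(surface_a, surface_b)
    a_to_b = d.min(axis=1)
    b_to_a = d.min(axis=0)
    both = np.concatenate([a_to_b, b_to_a])
    return both.mean(), np.percentile(both, 95)
```

Using the 95th percentile instead of the maximum makes the Hausdorff metric robust to a few outlier surface points, which is why 95p HD is the standard choice for contour evaluation.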
14.
Radiol Artif Intell ; 4(4): e210300, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35923375

ABSTRACT

Purpose: To develop automated vestibular schwannoma measurements on contrast-enhanced T1- and T2-weighted MRI scans. Materials and Methods: MRI data from 214 patients in 37 different centers were retrospectively analyzed between 2020 and 2021. Patients with hearing loss (134 positive for vestibular schwannoma [mean age ± SD, 54 years ± 12; 64 men] and 80 negative for vestibular schwannoma) were randomly assigned to a training and validation set and to an independent test set. A convolutional neural network (CNN) was trained using fivefold cross-validation for two models (T1 and T2). Quantitative analysis, including Dice index, Hausdorff distance, surface-to-surface distance (S2S), and relative volume error, was used to compare the computer and the human delineations. An observer study was performed in which two experienced physicians evaluated both delineations. Results: The T1-weighted model showed state-of-the-art performance, with a mean S2S distance of less than 0.6 mm for the whole tumor and the intrameatal and extrameatal tumor parts. The whole tumor Dice index and Hausdorff distance were 0.92 and 2.1 mm in the independent test set, respectively. T2-weighted images had a mean S2S distance of less than 0.6 mm for the whole tumor and the intrameatal and extrameatal tumor parts. The whole tumor Dice index and Hausdorff distance were 0.87 and 1.5 mm in the independent test set. The observer study indicated that the tool was similar to human delineations in 85%-92% of cases. Conclusion: The CNN model detected and delineated vestibular schwannomas accurately on contrast-enhanced T1- and T2-weighted MRI scans and distinguished the clinically relevant difference between intrameatal and extrameatal tumor parts. Keywords: MRI, Ear, Nose, and Throat, Skull Base, Segmentation, Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms. Supplemental material is available for this article. © RSNA, 2022.

15.
Med Phys ; 48(6): 2877-2890, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33656213

ABSTRACT

PURPOSE: Efficient compression of images while preserving image quality has the potential to be a major enabler of effective remote clinical diagnosis and treatment, since poor Internet connection conditions are often the primary constraint in such services. This paper presents a framework for organ-specific image compression for teleinterventions based on a deep learning approach and an anisotropic diffusion filter. METHODS: The proposed method, deep learning and anisotropic diffusion (DLAD), uses a convolutional neural network architecture to extract a probability map for the organ of interest; this probability map guides an anisotropic diffusion filter that smooths the image except at the location of the organ of interest. Subsequently, a compression method, such as BZ2 or HEVC-visually lossless, is applied to compress the image. We demonstrate the proposed method on three-dimensional (3D) CT images acquired for radio frequency ablation (RFA) of liver lesions. We quantitatively evaluate the proposed method on 151 CT images using peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and compression ratio (CR) metrics. Finally, we compare the assessments of two radiologists on liver lesion detection and liver lesion center annotation using 33 sets of the original and the compressed images. RESULTS: The results show that the method can significantly improve the CR of most well-known compression methods. DLAD combined with HEVC-visually lossless achieves the highest average CR of 6.45, which is 36% higher than that of the original HEVC and outperforms other state-of-the-art lossless medical image compression methods. The means of PSNR and SSIM are 70 dB and 0.95, respectively. In addition, the compression effects do not statistically significantly affect the assessments of the radiologists on liver lesion detection and lesion center annotation. CONCLUSIONS: We thus conclude that the method has high potential to be applied in teleintervention applications.


Subject(s)
Data Compression , Anisotropy , Image Processing, Computer-Assisted , Imaging, Three-Dimensional , Neural Networks, Computer , Signal-To-Noise Ratio
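The probability-map-guided anisotropic diffusion at the core of DLAD can be sketched as a Perona-Malik-style update attenuated where the organ probability is high. This 2D single-step sketch, including the parameter values, is an illustration under assumed conventions, not the paper's implementation:

```python
import numpy as np

def guided_diffusion_step(img, prob, kappa=30.0, dt=0.15):
    """One explicit step of Perona-Malik-style diffusion, attenuated by
    the organ-probability map so the organ of interest stays sharp while
    the background is smoothed (and thus compresses better).

    `img` and `prob` are 2D arrays of the same shape; the paper works on
    3D CT, to which the same update extends with a third axis.
    """
    grads = [np.roll(img, s, axis) - img
             for axis in (0, 1) for s in (1, -1)]
    out = img.astype(float).copy()
    for g in grads:
        c = np.exp(-(g / kappa) ** 2)     # edge-stopping function
        out += dt * (1.0 - prob) * c * g  # suppress smoothing inside organ
    return out
```

Iterating this step a few times before passing the volume to BZ2 or HEVC reproduces the idea of spending bits only where the diagnostic content is.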
16.
Med Phys ; 37(2): 714-23, 2010 Feb.
Article in English | MEDLINE | ID: mdl-20229881

ABSTRACT

PURPOSE: Thoracic computed tomography (CT) scans provide information about cardiovascular risk status. These scans are non-ECG synchronized, thus precise quantification of coronary calcifications is difficult. Aortic calcium scoring is less sensitive to cardiac motion, so it is an alternative to coronary calcium scoring as an indicator of cardiovascular risk. The authors developed and evaluated a computer-aided system for automatic detection and quantification of aortic calcifications in low-dose noncontrast-enhanced chest CT. METHODS: The system was trained and tested on scans from participants of a lung cancer screening trial. A total of 433 low-dose, non-ECG-synchronized, noncontrast-enhanced 16 detector row examinations of the chest was randomly divided into 340 training and 93 test data sets. A first observer manually identified aortic calcifications on training and test scans. A second observer did the same on the test scans only. First, a multiatlas-based segmentation method was developed to delineate the aorta. Segmented volume was thresholded and potential calcifications (candidate objects) were extracted by three-dimensional connected component labeling. Due to image resolution and noise, in rare cases extracted candidate objects were connected to the spine. They were separated into a part outside and parts inside the aorta, and only the latter was further analyzed. All candidate objects were represented by 63 features describing their size, position, and texture. Subsequently, a two-stage classification with a selection of features and k-nearest neighbor classifiers was performed. Based on the detected aortic calcifications, total calcium volume score was determined for each subject. RESULTS: The computer system correctly detected, on the average, 945 mm3 out of 965 mm3 (97.9%) calcified plaque volume in the aorta with an average of 64 mm3 of false positive volume per scan. 
The Spearman rank correlation coefficient was ρ = 0.960 between the system and the first observer, compared to ρ = 0.961 between the two observers. CONCLUSIONS: Automatic calcium scoring in the aorta thus appears feasible, with good correlation between manual and automatic scoring.


Subject(s)
Algorithms , Aortic Diseases/diagnostic imaging , Aortography/methods , Calcinosis/diagnostic imaging , Lung Neoplasms/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Radiography, Thoracic/methods , Tomography, X-Ray Computed/methods , Aortic Diseases/complications , Artificial Intelligence , Calcinosis/complications , Humans , Lung Neoplasms/complications , Radiographic Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
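The candidate-classification step described above (63 size, position, and texture features fed to k-nearest-neighbor classifiers) can be illustrated with a minimal k-NN sketch in plain NumPy. The two toy features (volume, mean intensity), the labels, and k = 3 below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def knn_predict(train_X, train_y, test_X, k=3):
    """Classify each test sample by majority vote among its k nearest
    training samples (Euclidean distance in feature space)."""
    preds = []
    for x in test_X:
        d = np.linalg.norm(train_X - x, axis=1)   # distance to every training sample
        nearest = train_y[np.argsort(d)[:k]]      # labels of the k closest
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

# Toy candidate objects: [volume_mm3, mean_intensity] -- stand-ins for
# the 63 features used in the paper.
train_X = np.array([[5., 400.], [8., 450.], [120., 900.],
                    [150., 950.], [6., 420.], [140., 880.]])
train_y = np.array([0, 0, 1, 1, 0, 1])            # 0 = noise, 1 = calcification

test_X = np.array([[130., 920.], [7., 410.]])
print(knn_predict(train_X, train_y, test_X))      # -> [1 0]
```

The paper cascades two such classification stages with feature selection between them; the sketch shows only a single stage.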
17.
IEEE Trans Med Imaging ; 38(10): 2314-2325, 2019 Oct.
Article in English | MEDLINE | ID: mdl-30762536

ABSTRACT

Stochastic gradient descent (SGD) is commonly used to solve (parametric) image registration problems. For badly scaled problems, however, SGD exhibits only sublinear convergence. In this paper, we propose an efficient preconditioner estimation method to improve the convergence rate of SGD. Based on the observed distribution of voxel displacements in the registration, we estimate the diagonal entries of a preconditioning matrix, thus rescaling the optimization cost function. The preconditioner is efficient to compute and employ, and can be used for mono-modal as well as multi-modal cost functions, in combination with different transformation models, such as the rigid, affine, and B-spline models. Experiments on different clinical datasets show that the proposed method indeed improves the convergence rate compared with plain SGD, with speed-ups of around 2-5× in all tested settings, while retaining the same level of registration accuracy.


Subject(s)
Image Processing, Computer-Assisted/methods , Algorithms , Humans , Lung/diagnostic imaging , Stochastic Processes , Tomography, X-Ray Computed
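The effect of a diagonal preconditioner on gradient descent, as described above, can be illustrated on a toy badly scaled quadratic cost. The cost function, learning rates, and iteration count below are illustrative assumptions, not the paper's registration setup:

```python
import numpy as np

def sgd(grad, x0, lr, precond=None, iters=200):
    """Gradient descent with an optional diagonal preconditioner P:
    x <- x - lr * P^{-1} grad(x)."""
    x = np.asarray(x0, float).copy()
    p = np.ones_like(x) if precond is None else np.asarray(precond, float)
    for _ in range(iters):
        x -= lr * grad(x) / p
    return x

# Badly scaled quadratic cost: f(x) = 0.5 * (100*x0^2 + x1^2)
scales = np.array([100.0, 1.0])
grad = lambda x: scales * x

# Plain descent: the step size is limited by the stiff axis, so the
# flat axis converges slowly. Preconditioning rescales each axis.
x_plain = sgd(grad, [1.0, 1.0], lr=0.009)
x_prec  = sgd(grad, [1.0, 1.0], lr=0.5, precond=scales)

print(np.linalg.norm(x_plain), np.linalg.norm(x_prec))
```

With the preconditioner, both axes contract at the same rate and the iterate reaches the optimum far sooner, which is the convergence-rate effect the paper exploits.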
18.
Front Oncol ; 9: 1297, 2019.
Article in English | MEDLINE | ID: mdl-31828037

ABSTRACT

Objective: Our goal was to investigate the performance of an open-source deformable image registration package, elastix, for fast and robust contour propagation in the context of online-adaptive intensity-modulated proton therapy (IMPT) for prostate cancer. Methods: A planning CT scan and 7-10 repeat CT scans were available for each of 18 prostate cancer patients. Automatic contour propagation of the repeat CT scans was performed using elastix and compared with manual delineations in terms of geometric accuracy and runtime. Dosimetric accuracy was quantified by generating IMPT plans using the propagated contours expanded with a 2 mm (prostate) or 3.5 mm margin (seminal vesicles and lymph nodes) and calculating dosimetric coverage based on the manual delineation. A coverage of V95% ≥ 98% (at least 98% of the target volume receives at least 95% of the prescribed dose) was considered clinically acceptable. Results: Contour propagation runtime varied between 3 and 30 s for different registration settings. For the fastest setting, 83 of 93 (89.2%), 73 of 93 (78.5%), and 91 of 93 (97.9%) registrations yielded clinically acceptable dosimetric coverage of the prostate, seminal vesicles, and lymph nodes, respectively. For the prostate, seminal vesicles, and lymph nodes, the Dice similarity coefficient (DSC) was 0.87 ± 0.05, 0.63 ± 0.18, and 0.89 ± 0.03, and the mean surface distance (MSD) was 1.4 ± 0.5 mm, 2.0 ± 1.2 mm, and 1.5 ± 0.4 mm, respectively. Conclusion: With a dosimetric success rate of 78.5-97.9%, this software may facilitate online-adaptive IMPT of prostate cancer using a fast, free and open implementation.
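The V95% ≥ 98% acceptance criterion used above is straightforward to compute from a dose distribution and a target mask. The toy 1D dose values and the 70 Gy prescription below are illustrative assumptions, not data from the study:

```python
import numpy as np

def v95_coverage(dose, target_mask, prescribed):
    """Fraction of target voxels receiving at least 95% of the
    prescribed dose (the V95% metric)."""
    target_dose = dose[target_mask]
    return float(np.mean(target_dose >= 0.95 * prescribed))

# Toy 1D "dose distribution" over 10 target voxels, prescription 70 Gy;
# the 95% threshold is 66.5 Gy.
dose = np.array([70.1, 69.5, 68.0, 70.0, 71.2, 66.4, 69.9, 70.5, 67.0, 70.2])
mask = np.ones(10, dtype=bool)

cov = v95_coverage(dose, mask, prescribed=70.0)
print(cov, cov >= 0.98)   # -> 0.9 False (9 of 10 voxels pass; below 98%)
```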

19.
Med Image Anal ; 56: 110-121, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31226661

ABSTRACT

Predicting registration error can be useful for the evaluation of registration procedures, which is important for the adoption of registration techniques in the clinic. In addition, quantitative error prediction can help improve registration quality. The task of predicting registration error is demanding due to the lack of a ground truth in medical images. This paper proposes a new automatic method to predict the registration error in a quantitative manner, applied to chest CT scans. A random regression forest is utilized to predict the registration error locally. The forest is built with features related to the transformation model and features related to the dissimilarity after registration. The forest is trained and tested using manually annotated corresponding points between pairs of chest CT scans in two experiments: SPREAD (trained and tested on SPREAD) and inter-database (including the three databases SPREAD, DIR-Lab-4DCT, and DIR-Lab-COPDgene). The results show that the mean absolute errors of regression are 1.07 ± 1.86 mm and 1.76 ± 2.59 mm for the SPREAD and inter-database experiments, respectively. The overall accuracy of classification into three classes (correct, poor, and wrong registration) is 90.7% and 75.4% for SPREAD and inter-database, respectively. The good performance of the proposed method enables important applications such as automatic quality control in large-scale image analysis.


Subject(s)
Radiographic Image Interpretation, Computer-Assisted/methods , Radiography, Thoracic , Regression Analysis , Tomography, X-Ray Computed , Algorithms , Automation , Humans , Uncertainty
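The ground-truth registration error that the regression forest above learns to predict is measured at manually annotated corresponding points: the distance between a mapped fixed-image landmark and its true counterpart in the moving image. A minimal sketch, where the landmark coordinates and the deliberately 1 mm-off translation transform are illustrative assumptions:

```python
import numpy as np

def landmark_registration_error(fixed_pts, moving_pts, transform):
    """Registration error (mm) at annotated corresponding landmarks:
    distance between each mapped fixed point and its true counterpart
    in the moving image."""
    mapped = np.array([transform(p) for p in fixed_pts])
    return np.linalg.norm(mapped - moving_pts, axis=1)

# Toy example: the true motion is a translation of (2, 0, 1) mm, but the
# estimated transform recovers only (2, 0, 0) -- a 1 mm residual error.
fixed  = np.array([[10., 20., 30.], [40., 50., 60.]])
moving = fixed + np.array([2., 0., 1.])
est    = lambda p: p + np.array([2., 0., 0.])

err = landmark_registration_error(fixed, moving, est)
print(err)   # -> [1. 1.]
```

These per-landmark distances are the regression targets the forest is trained on; the forest's features (transformation-model and post-registration dissimilarity descriptors) are beyond this sketch.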
20.
Med Image Anal ; 52: 128-143, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30579222

ABSTRACT

Image registration, the process of aligning two or more images, is the core technique of many (semi-)automatic medical image analysis tasks. Recent studies have shown that deep learning methods, notably convolutional neural networks (ConvNets), can be used for image registration. Thus far, training of ConvNets for registration has been supervised using predefined example registrations. However, obtaining example registrations is not trivial. To circumvent the need for predefined examples, and thereby increase the convenience of training ConvNets for image registration, we propose the Deep Learning Image Registration (DLIR) framework for unsupervised affine and deformable image registration. In the DLIR framework, ConvNets are trained for image registration by exploiting image similarity, analogous to conventional intensity-based image registration. After a ConvNet has been trained with the DLIR framework, it can be used to register pairs of unseen images in one shot. We propose flexible ConvNet designs for affine image registration and for deformable image registration. By stacking multiple of these ConvNets into a larger architecture, we are able to perform coarse-to-fine image registration. We show for registration of cardiac cine MRI and registration of chest CT that the performance of the DLIR framework is comparable to conventional image registration, while being several orders of magnitude faster.


Subject(s)
Deep Learning , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging, Cine/methods , Tomography, X-Ray Computed/methods , Unsupervised Machine Learning , Heart Diseases/diagnostic imaging , Humans , Imaging, Three-Dimensional , Neural Networks, Computer , Radiography, Thoracic/methods
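The image-similarity signal that drives unsupervised training in a framework like the one above can be a conventional intensity-based metric computed between the fixed image and the warped moving image. A minimal NumPy sketch of one common choice, normalized cross-correlation (NCC); using NCC here is an illustrative assumption, not necessarily the paper's metric:

```python
import numpy as np

def ncc(fixed, warped, eps=1e-8):
    """Normalized cross-correlation between two images. In unsupervised
    registration training, the network weights are updated to maximize
    such a similarity (i.e., minimize its negative as a loss)."""
    f = fixed - fixed.mean()
    w = warped - warped.mean()
    return float((f * w).sum() / (np.sqrt((f**2).sum() * (w**2).sum()) + eps))

rng = np.random.default_rng(0)
fixed = rng.random((8, 8))

print(round(ncc(fixed, fixed), 3))            # identical images -> 1.0
print(ncc(fixed, 2.0 * fixed + 1.0) > 0.99)   # robust to affine intensity change
```

Because the metric needs no ground-truth deformations, only image pairs, a ConvNet trained against it requires no predefined example registrations.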