ABSTRACT
High-resolution volume reconstruction from multiple motion-corrupted stacks of 2D slices plays an increasingly important role in fetal brain Magnetic Resonance Imaging (MRI) studies. Existing reconstruction methods are time-consuming and often require user interaction to localize and extract the brain from several stacks of 2D slices. We propose a fully automatic framework for fetal brain reconstruction that consists of four stages: 1) fetal brain localization based on a coarse segmentation by a Convolutional Neural Network (CNN), 2) fine segmentation by another CNN trained with a multi-scale loss function, 3) novel, single-parameter outlier-robust super-resolution reconstruction, and 4) fast and automatic high-resolution visualization in standard anatomical space suitable for pathological brains. We validated our framework with images from fetuses with normal brains and with variable degrees of ventriculomegaly associated with open spina bifida, a congenital malformation that also affects the brain. Experiments show that each step of the proposed pipeline outperforms state-of-the-art methods in both segmentation and reconstruction comparisons, including expert-reader quality assessments. The reconstruction results of our method compare favorably with those obtained by manual, labor-intensive brain segmentation, unlocking the potential for automatic fetal brain reconstruction in clinical practice.
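The abstract describes the outlier-robust super-resolution step only as having a single parameter. The sketch below illustrates the general idea with a Huber-style weight inside an iteratively reweighted average of redundant slice observations; the weight function, the parameter `c`, and all names are our illustrative choices, not the paper's actual method.

```python
def huber_weight(residual, c=1.0):
    """Huber-style robust weight: observations whose residual exceeds
    the single tuning parameter c are down-weighted in proportion to
    how far they deviate (hypothetical stand-in for the paper's
    outlier-robustness parameter)."""
    r = abs(residual)
    return 1.0 if r <= c else c / r

def robust_average(values, c=1.0, n_iter=10):
    """Iteratively reweighted mean of redundant voxel observations from
    overlapping slices: each iteration re-estimates the consensus value,
    then down-weights observations that disagree with it."""
    est = sum(values) / len(values)
    for _ in range(n_iter):
        w = [huber_weight(v - est, c) for v in values]
        est = sum(wi * vi for wi, vi in zip(w, values)) / sum(w)
    return est

# Three consistent observations of the same voxel plus one motion outlier;
# the robust estimate stays near 1.0 rather than being dragged to the
# plain mean of 2.0.
consensus = robust_average([1.0, 1.1, 0.9, 5.0], c=0.5)
```

A single parameter of this kind trades robustness against bias: a small `c` rejects motion-corrupted slices aggressively, while a large `c` recovers the ordinary least-squares average.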
Subjects
Brain/diagnostic imaging, Fetus/diagnostic imaging, Hydrocephalus/diagnostic imaging, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging, Spina Bifida Cystica/diagnostic imaging, Deep Learning, Female, Fetal Therapies, Gestational Age, Humans, Neural Networks, Computer, Pregnancy, Spina Bifida Cystica/surgery
ABSTRACT
Purpose To compare lobar ventilation and apparent diffusion coefficient (ADC) values obtained with hyperpolarized xenon 129 (129Xe) magnetic resonance (MR) imaging to quantitative computed tomography (CT) metrics on a lobar basis and pulmonary function test (PFT) results on a whole-lung basis in patients with chronic obstructive pulmonary disease (COPD). Materials and Methods The study was approved by the National Research Ethics Service Committee; written informed consent was obtained from all patients. Twenty-two patients with COPD (Global Initiative for Chronic Obstructive Lung Disease stage II-IV) underwent hyperpolarized 129Xe MR imaging at 1.5 T, quantitative CT, and PFTs. Whole-lung and lobar 129Xe MR imaging parameters were obtained by using automated segmentation of multisection hyperpolarized 129Xe MR ventilation images and hyperpolarized 129Xe MR diffusion-weighted images after coregistration to CT scans. Whole-lung and lobar quantitative CT-derived metrics for emphysema and bronchial wall thickness were calculated. Pearson correlation coefficients were used to evaluate the relationship between imaging measures and PFT results. Results Percentage ventilated volume and average ADC at lobar 129Xe MR imaging showed correlation with percentage emphysema at lobar quantitative CT (r = -0.32, P < .001 and r = 0.75, P < .0001, respectively). The average ADC at whole-lung 129Xe MR imaging showed moderate correlation with PFT results (percentage predicted transfer factor of the lung for carbon monoxide [Tlco]: r = -0.61, P < .005) and percentage predicted functional residual capacity (r = 0.47, P < .05). Whole-lung quantitative CT percentage emphysema also showed statistically significant correlation with percentage predicted Tlco (r = -0.65, P < .005). 
Conclusion Lobar ventilation and ADC values obtained from hyperpolarized 129Xe MR imaging demonstrated correlation with quantitative CT percentage emphysema on a lobar basis and with PFT results on a whole-lung basis. © RSNA, 2016.
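The study's statistical comparisons rest on Pearson correlation coefficients between imaging measures. A minimal self-contained implementation, with purely invented toy numbers standing in for the lobar ADC and quantitative CT emphysema values:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples,
    the measure used to relate lobar 129Xe MR metrics to quantitative
    CT metrics in the study."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy data (not from the study): ADC rising with percentage emphysema
# gives r near +1, mirroring the reported positive ADC correlation.
adc       = [2.1, 2.4, 2.9, 3.3, 3.8]
emphysema = [5.0, 9.0, 15.0, 22.0, 30.0]
r = pearson_r(adc, emphysema)
```

In practice one would also report a p-value alongside r, as the abstract does; `scipy.stats.pearsonr` provides both.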
Subjects
Magnetic Resonance Imaging/methods, Pulmonary Disease, Chronic Obstructive/diagnostic imaging, Xenon Isotopes, Aged, Female, Humans, Lung/diagnostic imaging, Lung/pathology, Male, Middle Aged, Prospective Studies, Pulmonary Disease, Chronic Obstructive/pathology, Reproducibility of Results
ABSTRACT
Asthma and chronic obstructive pulmonary disease (COPD) are characterized by airway obstruction and airflow limitation and impose a huge burden on society. These obstructive lung diseases affect lung physiology across multiple biological scales. Environmental stimuli are introduced via inhalation at the organ scale and consequently affect the tissue, cellular and sub-cellular scales by triggering signaling pathways. These changes are propagated upwards to the organ level again, and vice versa. To understand the pathophysiology behind these diseases, we need to integrate and understand changes occurring across these scales, and this is the driving force for multiscale computational modeling. There is an urgent need for improved diagnosis and assessment of obstructive lung diseases. Standard clinical measures are based on global function tests, which ignore the highly heterogeneous regional changes that are characteristic of obstructive lung disease pathophysiology. Advances in scanning technology such as hyperpolarized gas MRI have led to new regional measurements of ventilation, perfusion and gas diffusion in the lungs, while new image processing techniques allow these measures to be combined with information from structural imaging such as Computed Tomography (CT). However, it is not yet known how to derive clinical measures for obstructive diseases from this wealth of new data. Computational modeling offers a powerful approach for investigating the relationship between imaging measurements and disease severity, and for understanding the effects of different disease subtypes, which is key to developing improved diagnostic methods. Gaining an understanding of a system as complex as the respiratory system is difficult, if not impossible, via experimental methods alone. Computational models offer a complementary way to unravel the structure-function relationships at work within such a multiscale, multiphysics system.
Here we review the current state-of-the-art in techniques developed for pulmonary image analysis, the development of structural models of the respiratory system, and predictions of function within these models. We discuss the application of modeling techniques to obstructive lung diseases, namely asthma and emphysema, and the use of models to predict response to therapy. Finally, we introduce a large European project, AirPROM, that is developing multiscale models to investigate structure-function relationships in asthma and COPD.
Subjects
Asthma/physiopathology, Lung/physiopathology, Pulmonary Disease, Chronic Obstructive/physiopathology, Algorithms, Computer Simulation, Emphysema/pathology, Europe, Humans, Lung/physiology, Lung Diseases, Obstructive/physiopathology, Magnetic Resonance Imaging, Respiration, Signal Transduction, Tomography, X-Ray Computed
ABSTRACT
Accurate medical image segmentation is essential for diagnosis, surgical planning and many other applications. Convolutional Neural Networks (CNNs) have become the state-of-the-art automatic segmentation methods. However, fully automatic results may still need to be refined to become accurate and robust enough for clinical use. We propose a deep learning-based interactive segmentation method to improve the results obtained by an automatic CNN and to reduce user interactions during refinement for higher accuracy. We use one CNN to obtain an initial automatic segmentation, on which user interactions are added to indicate mis-segmentations. Another CNN takes as input the user interactions with the initial segmentation and gives a refined result. We propose to combine user interactions with CNNs through geodesic distance transforms, and propose a resolution-preserving network that gives a better dense prediction. In addition, we integrate user interactions as hard constraints into a back-propagatable Conditional Random Field. We validated the proposed framework in the context of 2D placenta segmentation from fetal MRI and 3D brain tumor segmentation from FLAIR images. Experimental results show that our method achieves a large improvement over automatic CNNs, and obtains comparable or even higher accuracy with fewer user interactions and less time than traditional interactive methods.
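A geodesic distance transform turns user scribbles into a map that is small inside the scribbled structure and large across intensity boundaries, which is what makes it useful as an extra CNN input channel. A minimal Dijkstra-based sketch on a 2D grid; the edge-cost formula and the weight `lam` are our illustrative choices, not necessarily those of the paper:

```python
import heapq

def geodesic_distance(image, seeds, lam=1.0):
    """Geodesic distance transform on a 2D intensity grid from a set of
    seed (scribble) pixels. Each 4-neighbour step costs one unit of
    spatial distance plus lam times the intensity change, so distances
    grow quickly across object boundaries."""
    h, w = len(image), len(image[0])
    dist = [[float("inf")] * w for _ in range(h)]
    heap = []
    for r, c in seeds:                       # user-scribbled pixels
        dist[r][c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r][c]:
            continue                         # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                step = 1.0 + lam * abs(image[nr][nc] - image[r][c])
                if d + step < dist[nr][nc]:
                    dist[nr][nc] = d + step
                    heapq.heappush(heap, (d + step, nr, nc))
    return dist

# A bright object (1.0) on a dark background (0.0): a scribble inside
# the object stays geodesically close within it and far outside it.
img = [[0.0, 0.0, 0.0, 0.0],
       [0.0, 1.0, 1.0, 0.0],
       [0.0, 1.0, 1.0, 0.0],
       [0.0, 0.0, 0.0, 0.0]]
d = geodesic_distance(img, seeds=[(1, 1)], lam=5.0)
```

Here `d[1][2]` (inside the bright object) is 1.0, while reaching the background corner costs a full boundary crossing, so the transform encodes the user's intent spatially.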
Subjects
Image Interpretation, Computer-Assisted/methods, Imaging, Three-Dimensional/methods, Neural Networks, Computer, Brain Neoplasms/diagnostic imaging, Female, Humans, Magnetic Resonance Imaging, Placenta/diagnostic imaging, Pregnancy
ABSTRACT
PURPOSE: Recent improvements in lung cancer survival have spurred an interest in understanding and minimizing long-term radiation-induced lung damage (RILD). However, there are still no objective criteria to quantify RILD, leading to variable reporting across centers and trials. We propose a set of objective imaging biomarkers for quantifying common radiologic findings observed 12 months after lung cancer radiation therapy. METHODS AND MATERIALS: Baseline and 12-month computed tomography (CT) scans of 27 patients from a phase 1/2 clinical trial of isotoxic chemoradiation were included in this study. To detect and measure the severity of RILD, 12 quantitative imaging biomarkers were developed. The biomarkers describe basic CT findings, including parenchymal change, volume reduction, and pleural change. The imaging biomarkers were implemented as semiautomated image analysis pipelines and were assessed against visual assessment of the occurrence of each change. RESULTS: Most of the biomarkers were measurable in each patient. The continuous nature of the biomarkers allows objective scoring of severity for each patient. For each imaging biomarker, the cohort was split into 2 groups according to the presence or absence of the biomarker by visual assessment, testing the hypothesis that the imaging biomarkers were different in the 2 groups. All features were statistically significant except for rotation of the main bronchus and diaphragmatic curvature. Most of the biomarkers were not strongly correlated with each other, suggesting that each of the biomarkers is measuring a separate element of RILD pathology. CONCLUSIONS: We developed objective CT-based imaging biomarkers that quantify the severity of radiologic lung damage after radiation therapy. These biomarkers are representative of typical radiologic findings of RILD.
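The abstract splits the cohort by the visual presence or absence of each change and tests whether the biomarker differs between the two groups, without naming the statistical test. A permutation test on the difference of means is one nonparametric way to perform such a comparison; the sketch below, with purely invented biomarker values, illustrates the idea and is not the paper's analysis:

```python
import random

def permutation_p_value(group_a, group_b, n_perm=2000, seed=0):
    """Two-sided permutation test on the absolute difference of group
    means: repeatedly reshuffle the pooled values into two groups of
    the original sizes and count how often the shuffled difference is
    at least as large as the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical biomarker values for patients with/without a visually
# assessed RILD change (illustrative numbers only):
with_change    = [0.8, 0.9, 1.1, 1.2, 1.0]
without_change = [0.2, 0.3, 0.1, 0.4, 0.2]
p = permutation_p_value(with_change, without_change)
```

With clearly separated groups like these, the permuted differences almost never reach the observed one, so the p-value comes out well below 0.05.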
Subjects
Chemoradiotherapy/adverse effects, Lung Neoplasms/therapy, Lung/radiation effects, Radiation Injuries/diagnostic imaging, Tomography, X-Ray Computed/methods, Aged, Aged, 80 and over, Biomarkers, Female, Humans, Male, Middle Aged
ABSTRACT
Convolutional neural networks (CNNs) have achieved state-of-the-art performance for automatic medical image segmentation. However, they have not demonstrated sufficiently accurate and robust results for clinical use. In addition, they are limited by the lack of image-specific adaptation and the lack of generalizability to previously unseen object classes (a.k.a. zero-shot learning). To address these problems, we propose a novel deep learning-based interactive segmentation framework by incorporating CNNs into a bounding box and scribble-based segmentation pipeline. We propose image-specific fine tuning to make a CNN model adaptive to a specific test image, which can be either unsupervised (without additional user interactions) or supervised (with additional scribbles). We also propose a weighted loss function considering network and interaction-based uncertainty for the fine tuning. We applied this framework to two applications: 2-D segmentation of multiple organs from fetal magnetic resonance (MR) slices, where only two types of these organs were annotated for training; and 3-D segmentation of the brain tumor core (excluding edema) and the whole brain tumor (including edema) from different MR sequences, where only the tumor core in one MR sequence was annotated for training. Experimental results show that: 1) our model is more robust to segment previously unseen objects than state-of-the-art CNNs; 2) image-specific fine tuning with the proposed weighted loss function significantly improves segmentation accuracy; and 3) our method leads to accurate results with fewer user interactions and less user time than traditional interactive segmentation methods.
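The key idea behind the weighted fine-tuning loss is that user-scribbled pixels are trusted fully, while unannotated pixels contribute in proportion to how confident the network already is about them. The sketch below shows that idea as a pixel-wise weighted binary cross-entropy; the exact weighting scheme `|2p - 1|` is our illustrative choice, not the paper's formula:

```python
import math

def weighted_cross_entropy(probs, labels, weights):
    """Weighted binary cross-entropy over flattened pixels: each pixel's
    loss term is scaled by its weight, and the total is normalised by
    the weight sum so the loss stays comparable across images."""
    eps = 1e-7
    total, norm = 0.0, 0.0
    for p, y, w in zip(probs, labels, weights):
        p = min(max(p, eps), 1 - eps)
        total += w * -(y * math.log(p) + (1 - y) * math.log(1 - p))
        norm += w
    return total / norm

def interaction_weights(probs, scribbled):
    """Scribbled pixels get full weight 1; unannotated pixels are
    weighted by network certainty |2p - 1|, so ambiguous predictions
    contribute less to the image-specific fine-tuning update."""
    return [1.0 if s else abs(2 * p - 1) for p, s in zip(probs, scribbled)]

probs     = [0.9, 0.2, 0.55, 0.7]    # network foreground probabilities
labels    = [1,   0,   1,    1]      # current (pseudo-)labels
scribbled = [True, True, False, False]
w = interaction_weights(probs, scribbled)
loss = weighted_cross_entropy(probs, labels, w)
```

In a real fine-tuning loop this loss would be back-propagated through the CNN for a few iterations on the test image alone, which is what makes the adaptation image-specific.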
Subjects
Deep Learning, Image Interpretation, Computer-Assisted/methods, Brain/diagnostic imaging, Brain Neoplasms/diagnostic imaging, Female, Fetus/diagnostic imaging, Humans, Magnetic Resonance Imaging/methods, Pregnancy, Prenatal Diagnosis/methods
ABSTRACT
BACKGROUND AND OBJECTIVES: Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis, and adapting them for this domain of application requires substantial implementation effort. Consequently, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon. METHODS: The NiftyNet infrastructure provides a modular deep-learning pipeline for a range of medical imaging applications including segmentation, regression, image generation and representation learning. Components of the NiftyNet pipeline, including data loading, data augmentation, network architectures, loss functions and evaluation metrics, are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on the TensorFlow framework and supports features such as TensorBoard visualization of 2D and 3D images and computational graphs by default. RESULTS: We present three illustrative medical image analysis applications built using NiftyNet infrastructure: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses.
CONCLUSIONS: The NiftyNet infrastructure enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or extend the platform to new applications.
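Among the segmentation-tailored loss functions and evaluation metrics that pipelines like NiftyNet provide, overlap measures in the Dice family are a staple. A minimal sketch of a soft Dice score on flattened probability maps; this is illustrative code, not NiftyNet's implementation:

```python
def soft_dice(pred, target, eps=1e-6):
    """Soft Dice score between a predicted probability map and a binary
    target mask, both flattened to 1D. The eps term keeps the score
    defined when both prediction and target are empty. A Dice loss is
    simply 1 minus this score."""
    inter = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    return (2 * inter + eps) / (denom + eps)

perfect = soft_dice([1.0, 0.0, 1.0], [1, 0, 1])
miss    = soft_dice([1.0, 1.0, 1.0], [0, 0, 0])
```

Overlap-based losses of this kind cope better with the extreme foreground/background imbalance typical of medical images than plain per-pixel cross-entropy, which is one reason such components are worth building into a shared platform.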
Subjects
Diagnostic Imaging/methods, Machine Learning, Abdomen/diagnostic imaging, Brain/diagnostic imaging, Computer Simulation, Databases, Factual, Diagnostic Imaging/instrumentation, Humans, Image Processing, Computer-Assisted/instrumentation, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging, Neural Networks, Computer, Ultrasonography
ABSTRACT
OBJECTIVES: Clinical imaging data are essential for developing research software for computer-aided diagnosis, treatment planning and image-guided surgery, yet existing systems are poorly suited for data sharing between healthcare and academia: research systems rarely provide an integrated approach for data exchange with clinicians; hospital systems are focused on clinical patient care with limited access for external researchers; and safe haven environments are not well suited to algorithm development. We have established GIFT-Cloud, a data and medical image sharing platform, to meet the needs of GIFT-Surg, an international research collaboration that is developing novel imaging methods for fetal surgery. GIFT-Cloud also has general applicability to other areas of imaging research. METHODS: GIFT-Cloud builds upon well-established cross-platform technologies. The Server provides secure anonymised data storage, direct web-based data access and a REST API for integrating external software. The Uploader provides automated on-site anonymisation, encryption and data upload. Gateways provide a seamless process for uploading medical data from clinical systems to the research server. RESULTS: GIFT-Cloud has been implemented in a multi-centre study for fetal medicine research. We present a case study of placental segmentation for pre-operative surgical planning, showing how GIFT-Cloud underpins the research and integrates with the clinical workflow. CONCLUSIONS: GIFT-Cloud simplifies the transfer of imaging data from clinical to research institutions, facilitating the development and validation of medical research software and the sharing of results back to the clinical partners. GIFT-Cloud supports collaboration between multiple healthcare and research institutions while satisfying the demands of patient confidentiality, data security and data ownership.
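On-site anonymisation before upload is central to the Uploader's role. The sketch below shows the general pattern of stripping identifying fields and substituting a study pseudonym; the field names and function are illustrative only, not GIFT-Cloud's actual schema or API (real DICOM de-identification involves many more tags and rules):

```python
# Hypothetical identifying fields; real DICOM de-identification
# profiles cover a much larger tag set.
IDENTIFYING_TAGS = {"PatientName", "PatientID", "PatientBirthDate"}

def anonymise(metadata, pseudonym):
    """Remove identifying fields from an image metadata record and
    attach a study pseudonym, so the record can leave the hospital
    without exposing patient identity."""
    clean = {k: v for k, v in metadata.items() if k not in IDENTIFYING_TAGS}
    clean["SubjectPseudonym"] = pseudonym
    return clean

record = {"PatientName": "DOE^JANE", "PatientID": "12345",
          "Modality": "MR", "StudyDate": "20160101"}
safe = anonymise(record, pseudonym="GIFT-0007")
```

Performing this step on-site, before any network transfer, is the design choice that lets the research server store only pseudonymised data while clinicians retain the key linking pseudonyms to patients.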
Subjects
Cloud Computing, Cooperative Behavior, Diagnostic Imaging, Information Dissemination, Computer Security, Hospital Administration, Universities/organization & administration
ABSTRACT
Segmentation of the placenta from fetal MRI is challenging due to sparse acquisition, inter-slice motion, and the widely varying position and shape of the placenta between pregnant women. We propose a minimally interactive framework that combines multiple volumes acquired in different views to obtain an accurate segmentation of the placenta. In the first phase, a minimally interactive slice-by-slice propagation method called Slic-Seg is used to obtain an initial segmentation from a single motion-corrupted sparse volume image. It combines high-level features, online Random Forests and Conditional Random Fields, and needs user interactions in only a single slice. In the second phase, to take advantage of the complementary resolution in multiple volumes acquired in different views, we further propose a probability-based 4D Graph Cuts method to refine the initial segmentations using inter-slice and inter-image consistency. We used our minimally interactive framework to examine the placentas of 16 mid-gestation patients from MR images acquired in axial and sagittal views. The results show that the proposed method achieves 1) good performance even in cases where the sparse scribbles provided by the user lead to poor results with competing propagation approaches; 2) good interactivity with low intra- and inter-operator variability; 3) higher accuracy than state-of-the-art interactive segmentation methods; and 4) improved accuracy due to the co-segmentation-based refinement, which outperforms single-volume or intensity-based Graph Cuts.
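The inter-slice consistency exploited by the Graph Cuts refinement can be seen as minimising an energy with per-slice unary terms plus a pairwise penalty on label changes between adjacent slices. The toy energy below illustrates that trade-off on a single line of voxels across slices; the smoothness weight `beta` and the exact terms are our illustrative choices, not the paper's 4D formulation:

```python
import math

def labeling_energy(probs, labels, beta=1.0):
    """Energy of a binary labeling of corresponding voxels across
    adjacent slices: negative log-likelihood unary terms from per-slice
    foreground probabilities, plus beta for every label change between
    neighbouring slices (the inter-slice consistency penalty)."""
    eps = 1e-7
    unary = sum(-math.log((p if y else 1 - p) + eps)
                for p, y in zip(probs, labels))
    pairwise = beta * sum(1 for a, b in zip(labels, labels[1:]) if a != b)
    return unary + pairwise

# One slice (p = 0.4) weakly disagrees with its neighbours; with the
# consistency penalty the spatially consistent labeling wins anyway.
probs = [0.9, 0.6, 0.4, 0.8]
e_consistent = labeling_energy(probs, [1, 1, 1, 1])
e_flicker    = labeling_energy(probs, [1, 1, 0, 1])
```

Graph Cuts finds the global minimum of energies of this form exactly, which is why such a refinement can correct isolated mis-segmented slices that per-slice classification leaves behind.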
Subjects
Algorithms, Fetus/diagnostic imaging, Magnetic Resonance Imaging/methods, Placenta/diagnostic imaging, Female, Humans, Pregnancy, Reproducibility of Results, Sensitivity and Specificity
ABSTRACT
The computational detection of pulmonary lobes from CT images is a challenging segmentation problem with important respiratory health care applications, including surgical planning and regional image analysis. Several authors have proposed automated algorithms, and we present a methodological review. These algorithms share a number of common stages; we consider each stage in turn, comparing the methods applied by each author and discussing their relative strengths. No standard method has yet emerged, and none of the published methods has been demonstrated across a full range of clinical pathologies and imaging protocols. We discuss how improved methods could be developed by combining different approaches, and we use this discussion to propose a workflow for the development of new algorithms.