Results 1 - 19 of 19
1.
IEEE Trans Med Imaging ; 42(11): 3323-3335, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37276115

ABSTRACT

This paper presents an effective and general data augmentation framework for medical image segmentation. We adopt a computationally efficient and data-efficient gradient-based meta-learning scheme to explicitly align the distribution of training and validation data, which is used as a proxy for unseen test data. We improve the current data augmentation strategies with two core designs. First, we learn class-specific training-time data augmentation (TRA), effectively increasing the heterogeneity within the training subsets and tackling the class imbalance common in segmentation. Second, we jointly optimize TRA and test-time data augmentation (TEA), which are closely connected as both aim to align the training and test data distribution but have so far been considered separately in previous work. We demonstrate the effectiveness of our method on four medical image segmentation tasks across different scenarios with two state-of-the-art segmentation models, DeepMedic and nnU-Net. Extensive experimentation shows that the proposed data augmentation framework can significantly and consistently improve segmentation performance compared to existing solutions. Code is publicly available at https://github.com/ZerojumpLine/JCSAugment.
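As a rough illustration of the test-time augmentation (TEA) component mentioned above, the sketch below averages a model's softmax predictions over axis-flipped copies of an input volume. The model, tensor shapes, and the fixed flip set are illustrative assumptions; the paper learns its augmentation policies rather than using a fixed one.

```python
import torch

def tta_predict(model, volume, flip_dims=((2,), (3,), (4,))):
    """Average softmax predictions over the identity and a set of axis flips.

    volume: tensor of shape (1, C_in, D, H, W); returns (1, C_out, D, H, W).
    """
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(volume), dim=1)
        for dims in flip_dims:
            flipped = torch.flip(volume, dims=dims)
            pred = torch.softmax(model(flipped), dim=1)
            probs = probs + torch.flip(pred, dims=dims)  # undo the flip before accumulating
    return probs / (1 + len(flip_dims))

if __name__ == "__main__":
    toy_model = torch.nn.Conv3d(1, 2, kernel_size=3, padding=1)  # stand-in "segmenter"
    x = torch.randn(1, 1, 16, 16, 16)
    print(tta_predict(toy_model, x).shape)  # torch.Size([1, 2, 16, 16, 16])
```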

2.
IEEE Trans Med Imaging ; 42(6): 1885-1896, 2023 06.
Article in English | MEDLINE | ID: mdl-37022408

ABSTRACT

Background samples provide key contextual information for segmenting regions of interest (ROIs). However, they often cover a diverse set of structures, which makes it difficult for the segmentation model to learn good decision boundaries with both high sensitivity and precision. The issue stems from the highly heterogeneous nature of the background class, which results in multi-modal feature distributions. Empirically, we find that neural networks trained with a heterogeneous background class struggle to map the corresponding contextual samples to compact clusters in feature space. As a result, the distribution over background logit activations may shift across the decision boundary, leading to systematic over-segmentation across different datasets and tasks. In this study, we propose context label learning (CoLab) to improve the context representations by decomposing the background class into several subclasses. Specifically, we train an auxiliary network as a task generator, along with the primary segmentation model, to automatically generate context labels that positively affect the ROI segmentation accuracy. Extensive experiments are conducted on several challenging segmentation tasks and datasets. The results demonstrate that CoLab can guide the segmentation model to map the logits of background samples away from the decision boundary, resulting in significantly improved segmentation accuracy. Code is available at https://github.com/ZerojumpLine/CoLab.
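The following sketch illustrates, under simplifying assumptions, the core CoLab idea of splitting the background class into several context subclasses during training and merging their probabilities back into a single background class at inference. The class counts and shapes are hypothetical, and the auxiliary task generator that produces the context labels is not shown.

```python
import torch

NUM_FG = 1   # foreground (ROI) classes
NUM_CTX = 3  # context subclasses that replace the single background class during training

def merge_context_probs(logits):
    """logits: (N, NUM_FG + NUM_CTX, ...) -> probabilities over (background, foreground...)."""
    probs = torch.softmax(logits, dim=1)
    fg = probs[:, :NUM_FG]                           # foreground classes kept as-is
    bg = probs[:, NUM_FG:].sum(dim=1, keepdim=True)  # context subclasses collapse to background
    return torch.cat([bg, fg], dim=1)

if __name__ == "__main__":
    logits = torch.randn(2, NUM_FG + NUM_CTX, 8, 8, 8)
    merged = merge_context_probs(logits)
    print(merged.shape)                     # (2, 2, 8, 8, 8)
    print(float(merged.sum(dim=1).mean()))  # ~1.0: still a valid per-voxel distribution
```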


Subject(s)
Neural Networks, Computer; Semantics; Image Processing, Computer-Assisted
3.
EBioMedicine ; 75: 103777, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34959133

ABSTRACT

BACKGROUND: We aimed to understand the relationship between serum biomarker concentration and lesion type and volume found on computed tomography (CT) following all severities of TBI. METHODS: Concentrations of six serum biomarkers (GFAP, NFL, NSE, S100B, t-tau and UCH-L1) were measured in samples obtained <24 hours post-injury from 2869 patients with all severities of TBI, enrolled in the CENTER-TBI prospective cohort study (NCT02210221). Imaging phenotypes were defined as intraparenchymal haemorrhage (IPH), oedema, subdural haematoma (SDH), extradural haematoma (EDH), traumatic subarachnoid haemorrhage (tSAH), diffuse axonal injury (DAI), and intraventricular haemorrhage (IVH). Multivariable polynomial regression was performed to examine the association between biomarker levels and both distinct lesion types and lesion volumes. Hierarchical clustering was used to explore imaging phenotypes, and principal component analysis and k-means clustering of acute biomarker concentrations were used to explore patterns of biomarker clustering. FINDINGS: 2869 patients were included, 68% (n=1946) male, with a median age of 49 years (range 2-96). All severities of TBI (mild, moderate and severe) were included for analysis, with the majority (n=1946, 68%) having a mild injury (GCS 13-15). Patients with severe diffuse injury (Marshall III/IV) showed significantly higher levels of all measured biomarkers, with the exception of NFL, than patients with focal mass lesions (Marshall grades V/VI). Patients with either DAI+IVH or SDH+IPH+tSAH had significantly higher biomarker concentrations than patients with EDH. Higher biomarker concentrations were associated with greater volume of IPH (GFAP, S100B, t-tau; adj. r2 range: 0.48-0.49; p<0.05), oedema (GFAP, NFL, NSE, t-tau, UCH-L1; adj. r2 range: 0.44; p<0.01), and IVH (S100B; adj. r2 range: 0.48-0.49; p<0.05). Unsupervised k-means biomarker clustering revealed two clusters explaining 83.9% of variance, with phenotyping characteristics related to clinical injury severity. INTERPRETATION: Biomarker concentration within 24 hours of TBI is primarily related to severity of injury and intracranial disease burden, rather than pathoanatomical type of injury. FUNDING: CENTER-TBI is funded by the European Union 7th Framework programme (EC grant 602150).
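A minimal sketch of the unsupervised biomarker clustering step described above (standardise the log-transformed concentrations, reduce with PCA, partition with k-means, k=2) follows. The data here are synthetic placeholders; the study used the measured CENTER-TBI concentrations and reported two clusters explaining about 84% of variance.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
biomarkers = ["GFAP", "NFL", "NSE", "S100B", "t-tau", "UCH-L1"]
X = rng.lognormal(mean=0.0, sigma=1.0, size=(200, len(biomarkers)))  # synthetic concentrations

X_std = StandardScaler().fit_transform(np.log(X))  # log-transform, then z-score each biomarker
pca = PCA(n_components=2).fit(X_std)
scores = pca.transform(X_std)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
print("variance explained by 2 components:", pca.explained_variance_ratio_.sum())
print("cluster sizes:", np.bincount(labels))
```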


Subject(s)
Brain Injuries, Traumatic; Proteomics; Biomarkers; Brain Injuries, Traumatic/diagnosis; Humans; Male; Prospective Studies; Tomography, X-Ray Computed/methods
4.
IEEE Trans Med Imaging ; 40(3): 1065-1077, 2021 03.
Article in English | MEDLINE | ID: mdl-33351758

ABSTRACT

Class imbalance poses a challenge for developing unbiased, accurate predictive models. In particular, in image segmentation, neural networks may overfit to foreground samples from small structures, which are often heavily under-represented in the training set, leading to poor generalization. In this study, we provide new insights on the problem of overfitting under class imbalance by inspecting the network behavior. We find empirically that when training with limited data and strong class imbalance, at test time the distribution of logit activations may shift across the decision boundary, while samples of the well-represented class seem unaffected. This bias leads to a systematic under-segmentation of small structures. This phenomenon is consistently observed across different databases, tasks, and network architectures. To tackle this problem, we introduce new asymmetric variants of popular loss functions and regularization techniques, including a large margin loss, focal loss, adversarial training, mixup, and data augmentation, which are explicitly designed to counter logit shift of the under-represented classes. Extensive experiments are conducted on several challenging segmentation tasks. Our results demonstrate that the proposed modifications to the objective function can lead to significantly improved segmentation accuracy compared to baselines and alternative approaches.
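As a hedged illustration of an asymmetric modification to a popular loss, the sketch below applies the focal down-weighting term only to background voxels, so that gradients from the under-represented foreground class are not suppressed. This is a simplified reading of the idea; the exact asymmetric formulations and hyper-parameters in the paper may differ.

```python
import torch

def asymmetric_focal_loss(logits, target, gamma=2.0, eps=1e-7):
    """logits and target have the same shape; target is 0 (background) or 1 (foreground)."""
    p = torch.sigmoid(logits).clamp(eps, 1 - eps)
    fg_term = -target * torch.log(p)                           # rare class: plain cross-entropy
    bg_term = -(1 - target) * (p ** gamma) * torch.log(1 - p)  # background: focal down-weighting
    return (fg_term + bg_term).mean()

if __name__ == "__main__":
    logits = torch.randn(4, 1, 32, 32, requires_grad=True)
    target = (torch.rand(4, 1, 32, 32) > 0.95).float()  # heavily imbalanced toy labels
    loss = asymmetric_focal_loss(logits, target)
    loss.backward()
    print(float(loss))
```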


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Databases, Factual
5.
Lancet Digit Health ; 2(6): e314-e322, 2020 06.
Article in English | MEDLINE | ID: mdl-33328125

ABSTRACT

BACKGROUND: CT is the most common imaging modality in traumatic brain injury (TBI). However, its conventional use requires expert clinical interpretation and does not provide detailed quantitative outputs, which may have prognostic importance. We aimed to use deep learning to reliably and efficiently quantify and detect different lesion types. METHODS: Patients were recruited between Dec 9, 2014, and Dec 17, 2017, in 60 centres across Europe. We trained and validated an initial convolutional neural network (CNN) on expert manual segmentations (dataset 1). This CNN was used to automatically segment a new dataset of scans, which we then corrected manually (dataset 2). From this dataset, we used a subset of scans to train a final CNN for multiclass, voxel-wise segmentation of lesion types. The performance of this CNN was evaluated on a test subset. Performance was measured for lesion volume quantification, lesion progression, lesion detection, and lesion volume classification. For lesion detection, external validation was done on an independent set of 500 patients from India. FINDINGS: 98 scans from one centre were included in dataset 1. Dataset 2 comprised 839 scans from 38 centres: 184 scans were used in the training subset and 655 in the test subset. Compared with the manual reference, CNN-derived lesion volumes showed a mean difference of 0·86 mL (95% CI -5·23 to 6·94) for intraparenchymal haemorrhage, 1·83 mL (-12·01 to 15·66) for extra-axial haemorrhage, 2·09 mL (-9·38 to 13·56) for perilesional oedema, and 0·07 mL (-1·00 to 1·13) for intraventricular haemorrhage. INTERPRETATION: We show the ability of a CNN to separately segment, quantify, and detect multiclass haemorrhagic lesions and perilesional oedema. These volumetric lesion estimates allow clinically relevant quantification of lesion burden and progression, with potential applications for personalised treatment strategies and clinical research in TBI. FUNDING: European Union 7th Framework Programme, Hannelore Kohl Stiftung, OneMind, NeuroTrauma Sciences, Integra Neurosciences, European Research Council Horizon 2020.
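A small sketch of the volume-agreement statistic quoted above follows: the mean difference between automated and manual volumes with a normal-approximation 95% interval (mean ± 1.96 SD of the differences, i.e. Bland-Altman-style limits). The volumes are made-up placeholders and the paper's exact interval definition may differ.

```python
import numpy as np

cnn_ml = np.array([10.2, 3.5, 25.1, 0.8, 7.9])     # hypothetical automated volumes (mL)
manual_ml = np.array([9.6, 4.1, 23.8, 1.0, 8.4])   # hypothetical manual reference volumes (mL)

diff = cnn_ml - manual_ml
mean_diff, sd = diff.mean(), diff.std(ddof=1)
low, high = mean_diff - 1.96 * sd, mean_diff + 1.96 * sd
print(f"mean difference {mean_diff:.2f} mL (95% limits {low:.2f} to {high:.2f})")
```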


Subject(s)
Brain Injuries, Traumatic/diagnostic imaging; Deep Learning; Radiographic Image Interpretation, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Adolescent; Adult; Aged; Aged, 80 and over; Brain/diagnostic imaging; Child; Europe; Female; Humans; Male; Middle Aged; Reproducibility of Results; Semantics; Young Adult
6.
J Med Imaging (Bellingham) ; 7(5): 055501, 2020 Sep.
Article in English | MEDLINE | ID: mdl-33102623

ABSTRACT

Purpose: Deep learning (DL) algorithms have shown promising results for brain tumor segmentation in MRI. However, validation is required prior to routine clinical use. We report the first randomized and blinded comparison of DL and trained technician segmentations. Approach: We compiled a multi-institutional database of 741 pretreatment MRI exams. Each contained a postcontrast T1-weighted exam, a T2-weighted fluid-attenuated inversion recovery exam, and at least one technician-derived tumor segmentation. The database included 729 unique patients (470 males and 259 females). Of these exams, 641 were used for training the DL system, and 100 were reserved for testing. We developed a platform to enable qualitative, blinded, controlled assessment of lesion segmentations made by technicians and the DL method. On this platform, 20 neuroradiologists performed 400 side-by-side comparisons of segmentations on 100 test cases. They scored each segmentation between 0 (poor) and 10 (perfect). Agreement between segmentations from technicians and the DL method was also evaluated quantitatively using the Dice coefficient, which produces values between 0 (no overlap) and 1 (perfect overlap). Results: The neuroradiologists gave technician and DL segmentations mean scores of 6.97 and 7.31, respectively (p < 0.00007). The DL method achieved a mean Dice coefficient of 0.87 on the test cases. Conclusions: This was the first objective comparison of automated and human segmentation using a blinded controlled assessment study. Our DL system learned to outperform its "human teachers" and produced output that was better, on average, than its training data.
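For reference, a minimal sketch of the Dice coefficient used above to quantify agreement between two binary segmentations, 2·|A∩B| / (|A|+|B|), ranging from 0 (no overlap) to 1 (perfect overlap); the masks here are toy arrays.

```python
import numpy as np

def dice(a, b, eps=1e-8):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

seg_dl = np.zeros((64, 64), dtype=np.uint8); seg_dl[10:40, 10:40] = 1      # toy DL mask
seg_tech = np.zeros((64, 64), dtype=np.uint8); seg_tech[15:45, 12:42] = 1  # toy technician mask
print(f"Dice = {dice(seg_dl, seg_tech):.3f}")
```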

7.
J Neurotrauma ; 37(13): 1556-1565, 2020 07 01.
Article in English | MEDLINE | ID: mdl-31928143

ABSTRACT

Failure of cerebral autoregulation has been linked to unfavourable outcome after traumatic brain injury (TBI). Preliminary evidence from a small, retrospective, single-center analysis suggests that autoregulatory dysfunction may be associated with traumatic lesion expansion, particularly for pericontusional edema. The goal of this study was to further explore these associations using prospective, multi-center data from the Collaborative European Neurotrauma Effectiveness Research in TBI (CENTER-TBI) and to further explore the relationship between autoregulatory failure, lesion progression, and patient outcome. A total of 88 subjects from the CENTER-TBI High Resolution ICU Sub-Study cohort were included. All patients had an admission computed tomography (CT) scan and early repeat scan available, as well as high-frequency neurophysiological recordings covering the between-scan interval. Using a novel, semiautomated approach to lesion segmentation, we calculated absolute changes in volume of contusion core, pericontusional edema, and extra-axial hemorrhage between the imaging studies. We then evaluated associations between cerebrovascular reactivity metrics and radiological lesion progression using mixed-model regression. Analyses were adjusted for baseline covariates and non-neurophysiological factors associated with lesion growth using multi-variate methods. Impairment in cerebrovascular reactivity was significantly associated with progression of pericontusional edema and, to a lesser degree, intraparenchymal hemorrhage. In contrast, there were no significant associations with extra-axial hemorrhage. The strongest relationships were observed between RAC-based metrics and edema formation. Pulse amplitude index showed weaker, but consistent, associations with contusion growth. Cerebrovascular reactivity metrics remained strongly associated with lesion progression after taking into account contributions from non-neurophysiological factors and mean cerebral perfusion pressure. Total hemorrhagic core and edema volumes on repeat CT were significantly larger in patients who were deceased at 6 months, and the amount of edema was greater in patients with an unfavourable outcome (Glasgow Outcome Scale-Extended 1-4). Our study suggests associations between autoregulatory failure, traumatic edema progression, and poor outcome. This is in keeping with findings from a single-center retrospective analysis, providing multi-center prospective data to support those results.


Subject(s)
Brain Injuries, Traumatic/diagnostic imaging; Brain Injuries, Traumatic/physiopathology; Cerebrovascular Circulation/physiology; Disease Progression; Intensive Care Units; Intersectoral Collaboration; Adult; Brain Injuries/diagnostic imaging; Brain Injuries/epidemiology; Brain Injuries/physiopathology; Brain Injuries, Traumatic/epidemiology; Cohort Studies; Europe/epidemiology; Female; Humans; Intensive Care Units/standards; Male; Middle Aged; Tomography, X-Ray Computed/methods; Tomography, X-Ray Computed/standards; Treatment Outcome
8.
IEEE Trans Med Imaging ; 39(6): 2088-2099, 2020 06.
Article in English | MEDLINE | ID: mdl-31944949

ABSTRACT

Quantification of anatomical shape changes currently relies on scalar global indexes which are largely insensitive to regional or asymmetric modifications. Accurate assessment of pathology-driven anatomical remodeling is a crucial step for the diagnosis and treatment of many conditions. Deep learning approaches have recently achieved wide success in the analysis of medical images, but they lack interpretability in the feature extraction and decision processes. In this work, we propose a new interpretable deep learning model for shape analysis. In particular, we exploit deep generative networks to model a population of anatomical segmentations through a hierarchy of conditional latent variables. At the highest level of this hierarchy, a two-dimensional latent space is simultaneously optimised to discriminate distinct clinical conditions, enabling the direct visualisation of the classification space. Moreover, the anatomical variability encoded by this discriminative latent space can be visualised in the segmentation space thanks to the generative properties of the model, making the classification task transparent. This approach yielded high accuracy in the categorisation of healthy and remodelled left ventricles when tested on unseen segmentations from our own multi-centre dataset as well as in an external validation set, and on hippocampi from healthy controls and patients with Alzheimer's disease when tested on ADNI data. More importantly, it enabled the visualisation in three-dimensions of both global and regional anatomical features which better discriminate between the conditions under exam. The proposed approach scales effectively to large populations, facilitating high-throughput analysis of normal anatomy and pathology in large-scale studies of volumetric imaging.


Subject(s)
Alzheimer Disease; Magnetic Resonance Imaging; Alzheimer Disease/diagnostic imaging; Hippocampus; Humans
9.
Stroke ; 50(7): 1734-1741, 2019 07.
Article in English | MEDLINE | ID: mdl-31177973

ABSTRACT

Background and Purpose- We evaluated deep learning algorithms' segmentation of acute ischemic lesions on heterogeneous multi-center clinical diffusion-weighted magnetic resonance imaging (MRI) data sets and explored the potential role of this tool for phenotyping acute ischemic stroke. Methods- Ischemic stroke data sets from the MRI-GENIE (MRI-Genetics Interface Exploration) repository consisting of 12 international genetic research centers were retrospectively analyzed using an automated deep learning segmentation algorithm consisting of an ensemble of 3-dimensional convolutional neural networks. Three ensembles were trained using data from the following: (1) 267 patients from an independent single-center cohort, (2) 267 patients from MRI-GENIE, and (3) mixture of (1) and (2). The algorithms' performances were compared against manual outlines from a separate 383 patient subset from MRI-GENIE. Univariable and multivariable logistic regression with respect to demographics, stroke subtypes, and vascular risk factors were performed to identify phenotypes associated with large acute diffusion-weighted MRI volumes and greater stroke severity in 2770 MRI-GENIE patients. Stroke topography was investigated. Results- The ensemble consisting of a mixture of MRI-GENIE and single-center convolutional neural networks performed best. Subset analysis comparing automated and manual lesion volumes in 383 patients found excellent correlation (ρ=0.92; P<0.0001). Median (interquartile range) diffusion-weighted MRI lesion volumes from 2770 patients were 3.7 cm3 (0.9-16.6 cm3). Patients with small artery occlusion stroke subtype had smaller lesion volumes (P<0.0001) and different topography compared with other stroke subtypes. Conclusions- Automated accurate clinical diffusion-weighted MRI lesion segmentation using deep learning algorithms trained with multi-center and diverse data is feasible. Both lesion volume and topography can provide insight into stroke subtypes with sufficient sample size from big heterogeneous multi-center clinical imaging phenotype data sets.
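A short sketch of the volume-agreement check described above, computing a Spearman rank correlation between automated and manual lesion volumes; the values below are synthetic stand-ins for the 383-patient comparison subset.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
manual_cm3 = rng.lognormal(mean=1.3, sigma=1.2, size=100)          # hypothetical manual volumes
auto_cm3 = manual_cm3 * rng.normal(1.0, 0.15, size=100).clip(0.5)  # hypothetical automated volumes

rho, p = spearmanr(auto_cm3, manual_cm3)
print(f"Spearman rho = {rho:.2f}, p = {p:.1e}")
```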


Subject(s)
Brain Ischemia/diagnostic imaging; Diffusion Magnetic Resonance Imaging/methods; Stroke/diagnostic imaging; Adult; Aged; Aged, 80 and over; Algorithms; Big Data; Brain Ischemia/epidemiology; Female; Humans; Image Processing, Computer-Assisted; Machine Learning; Male; Middle Aged; Neural Networks, Computer; Observer Variation; Phenotype; Retrospective Studies; Risk Factors; Socioeconomic Factors; Stroke/epidemiology
10.
Med Image Anal ; 53: 156-164, 2019 04.
Article in English | MEDLINE | ID: mdl-30784956

ABSTRACT

Automatic detection of anatomical landmarks is an important step for a wide range of applications in medical image analysis. Manual annotation of landmarks is a tedious task and prone to observer errors. In this paper, we evaluate novel deep reinforcement learning (RL) strategies to train agents that can precisely and robustly localize target landmarks in medical scans. An artificial RL agent learns to identify the optimal path to the landmark by interacting with an environment, in our case 3D images. Furthermore, we investigate the use of fixed- and multi-scale search strategies with novel hierarchical action steps in a coarse-to-fine manner. Several deep Q-network (DQN) architectures are evaluated for detecting multiple landmarks using three different medical imaging datasets: fetal head ultrasound (US), adult brain and cardiac magnetic resonance imaging (MRI). The performance of our agents surpasses state-of-the-art supervised and RL methods. Our experiments also show that multi-scale search strategies perform significantly better than fixed-scale agents in images with a large field of view and noisy background, such as in cardiac MRI. Moreover, the novel hierarchical steps can significantly speed up the search process by a factor of 4-5.
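The toy sketch below illustrates the landmark-localisation setting: an agent at a voxel position takes discrete moves along the axes and is rewarded by the reduction in distance to the target. A trained DQN would select actions from image appearance; here a greedy stand-in policy is used purely to show the environment dynamics, and the coordinates are made up.

```python
import numpy as np

ACTIONS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def step(pos, action, target):
    """Move one voxel; reward is the reduction in Euclidean distance to the target."""
    new_pos = tuple(np.add(pos, ACTIONS[action]))
    reward = np.linalg.norm(np.subtract(pos, target)) - np.linalg.norm(np.subtract(new_pos, target))
    return new_pos, reward

pos, target = (0, 0, 0), (5, 3, 7)
for _ in range(40):
    # greedy stand-in for a learned Q-function: pick the action with the best immediate reward
    action = max(range(len(ACTIONS)), key=lambda a: step(pos, a, target)[1])
    pos, _reward = step(pos, action, target)
    if pos == target:
        break
print("reached:", pos)
```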


Subject(s)
Anatomic Landmarks; Brain/diagnostic imaging; Deep Learning; Head/diagnostic imaging; Heart/diagnostic imaging; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging/methods; Adult; Female; Head/embryology; Humans; Pregnancy
11.
IEEE Trans Med Imaging ; 37(2): 384-395, 2018 02.
Article in English | MEDLINE | ID: mdl-28961105

ABSTRACT

Incorporation of prior knowledge about organ shape and location is key to improving the performance of image analysis approaches. In particular, priors can be useful in cases where images are corrupted and contain artefacts due to limitations in image acquisition. The highly constrained nature of anatomical objects can be well captured with learning-based techniques. However, in the most recent and promising techniques, such as CNN-based segmentation, it is not obvious how to incorporate such prior knowledge. State-of-the-art methods operate as pixel-wise classifiers where the training objectives do not incorporate the structure and inter-dependencies of the output. To overcome this limitation, we propose a generic training strategy that incorporates anatomical prior knowledge into CNNs through a new regularisation model, which is trained end-to-end. The new framework encourages models to follow the global anatomical properties of the underlying anatomy (e.g. shape, label structure) via learnt non-linear representations of the shape. We show that the proposed approach can be easily adapted to different analysis tasks (e.g. image enhancement, segmentation) and improve the prediction accuracy of the state-of-the-art models. The applicability of our approach is shown on multi-modal cardiac data sets and public benchmarks. In addition, we demonstrate how the learnt deep models of 3-D shapes can be interpreted and used as biomarkers for classification of cardiac pathologies.
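One way such a learnt shape prior can be imposed is sketched below, under stated assumptions: an encoder (pre-trained on label maps in the real setting) maps segmentations to a low-dimensional code, and an L2 penalty between the codes of the predicted and reference segmentations is added to the pixel-wise loss. The network sizes and weighting are illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

shape_encoder = nn.Sequential(              # pre-trained on label maps in the real setting
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 32),
)

def anatomically_regularised_loss(pred_logits, target, lam=0.1):
    """pred_logits, target: (N, 1, H, W); target is a binary label map."""
    ce = F.binary_cross_entropy_with_logits(pred_logits, target)
    with torch.no_grad():
        z_ref = shape_encoder(target)                   # shape code of the reference labels
    z_pred = shape_encoder(torch.sigmoid(pred_logits))  # shape code of the soft prediction
    return ce + lam * F.mse_loss(z_pred, z_ref)

if __name__ == "__main__":
    logits = torch.randn(2, 1, 64, 64, requires_grad=True)
    target = (torch.rand(2, 1, 64, 64) > 0.7).float()
    loss = anatomically_regularised_loss(logits, target)
    loss.backward()
    print(float(loss))
```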


Subject(s)
Cardiac Imaging Techniques/methods; Imaging, Three-Dimensional/methods; Neural Networks, Computer; Algorithms; Cardiomyopathies/diagnostic imaging; Databases, Factual; Heart/diagnostic imaging; Humans; Magnetic Resonance Imaging
12.
PLoS One ; 12(11): e0188152, 2017.
Article in English | MEDLINE | ID: mdl-29182625

ABSTRACT

Traumatic brain injury (TBI) is caused by a sudden external force and can be very heterogeneous in its manifestation. In this work, we analyse T1-weighted magnetic resonance (MR) brain images that were prospectively acquired from patients who sustained mild to severe TBI. We investigate the potential of a recently proposed automatic segmentation method to support the outcome prediction of TBI. Specifically, we extract meaningful cross-sectional and longitudinal measurements from acute- and chronic-phase MR images. We calculate regional volume and asymmetry features at the acute/subacute stage of the injury (median: 19 days after injury), to predict the disability outcome of 67 patients at the chronic disease stage (median: 229 days after injury). Our results indicate that small structural volumes in the acute stage (e.g. of the hippocampus, accumbens, amygdala) can be strong predictors for unfavourable disease outcome. Further, group differences in atrophy are investigated. We find that patients with unfavourable outcome show increased atrophy. Among patients with severe disability outcome we observed a significantly higher mean reduction of cerebral white matter (3.1%) as compared to patients with low disability outcome (0.7%).
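A small sketch of a volumetric asymmetry feature of the kind referred to above, expressed as a signed, normalised left-right difference for a paired structure; the example volumes are made-up numbers, not patient measurements.

```python
def asymmetry_index(left_ml, right_ml):
    """Signed asymmetry in [-1, 1]; 0 means perfectly symmetric volumes."""
    return (left_ml - right_ml) / (left_ml + right_ml)

# Hypothetical hippocampal volumes in mL, left smaller than right:
print(round(asymmetry_index(3.1, 3.6), 3))
```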


Subject(s)
Brain Injuries, Traumatic/diagnostic imaging; Brain Injuries, Traumatic/pathology; Magnetic Resonance Imaging/methods; Adolescent; Adult; Aged; Aged, 80 and over; Cross-Sectional Studies; Female; Humans; Longitudinal Studies; Male; Middle Aged; Young Adult
13.
Med Phys ; 44(10): 5210-5220, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28756622

ABSTRACT

PURPOSE: As part of a program to implement automatic lesion detection methods for whole body magnetic resonance imaging (MRI) in oncology, we have developed, evaluated, and compared three algorithms for fully automatic, multiorgan segmentation in healthy volunteers. METHODS: The first algorithm is based on classification forests (CFs), the second on 3D convolutional neural networks (CNNs), and the third on a multi-atlas (MA) approach. We examined data from 51 healthy volunteers, scanned prospectively with a standardized, multiparametric whole body MRI protocol at 1.5 T. The study was approved by the local ethics committee and written consent was obtained from the participants. MRI data were used as input data to the algorithms, while training was based on manual annotation of the anatomies of interest by clinical MRI experts. Fivefold cross-validation experiments were run on 34 artifact-free subjects. We report three overlap and three surface distance metrics to evaluate the agreement between the automatic and manual segmentations, namely the Dice similarity coefficient (DSC), recall (RE), precision (PR), average surface distance (ASD), root-mean-square surface distance (RMSSD), and Hausdorff distance (HD). Analysis of variance was used to compare pooled label metrics between the three algorithms and the DSC on a 'per-organ' basis. A Mann-Whitney U test was used to compare the pooled metrics between CFs and CNNs and the DSC on a 'per-organ' basis, when using different imaging combinations as input for training. RESULTS: All three algorithms resulted in robust segmenters that were effectively trained using a relatively small number of datasets, an important consideration in the clinical setting. Mean overlap metrics for all the segmented structures were: CFs: DSC = 0.70 ± 0.18, RE = 0.73 ± 0.18, PR = 0.71 ± 0.14; CNNs: DSC = 0.81 ± 0.13, RE = 0.83 ± 0.14, PR = 0.82 ± 0.10; MA: DSC = 0.71 ± 0.22, RE = 0.70 ± 0.34, PR = 0.77 ± 0.15. Mean surface distance metrics for all the segmented structures were: CFs: ASD = 13.5 ± 11.3 mm, RMSSD = 34.6 ± 37.6 mm, and HD = 185.7 ± 194.0 mm; CNNs: ASD = 5.48 ± 4.84 mm, RMSSD = 17.0 ± 13.3 mm, and HD = 199.0 ± 101.2 mm; MA: ASD = 4.22 ± 2.42 mm, RMSSD = 6.13 ± 2.55 mm, and HD = 38.9 ± 28.9 mm. The pooled performance of CFs improved when all imaging combinations (T2w + T1w + DWI) were used as input, while the performance of CNNs deteriorated, but in neither case significantly. CNNs with T2w images as input performed significantly better than CFs with all imaging combinations as input for all anatomical labels, except for the bladder. CONCLUSIONS: Three state-of-the-art algorithms were developed and used to automatically segment major organs and bones in whole body MRI; good agreement to manual segmentations performed by clinical MRI experts was observed. CNNs perform favorably, when using T2w volumes as input. Using multimodal MRI data as input to CNNs did not improve the segmentation performance.
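A minimal sketch of the non-parametric comparison mentioned above, running a Mann-Whitney U test on pooled Dice scores from two algorithms; the score vectors are synthetic placeholders, not the study's measurements.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
dsc_cnn = np.clip(rng.normal(0.81, 0.13, size=60), 0, 1)  # hypothetical pooled DSC, CNN
dsc_cf = np.clip(rng.normal(0.70, 0.18, size=60), 0, 1)   # hypothetical pooled DSC, classification forest

stat, p = mannwhitneyu(dsc_cnn, dsc_cf, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")
```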


Subject(s)
Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging; Neural Networks, Computer; Whole Body Imaging; Adult; Aged; Automation; Female; Humans; Male; Middle Aged; Young Adult
14.
IEEE Trans Med Imaging ; 36(11): 2204-2215, 2017 11.
Article in English | MEDLINE | ID: mdl-28708546

ABSTRACT

Identifying and interpreting fetal standard scan planes during 2-D ultrasound mid-pregnancy examinations are highly complex tasks, which require years of training. Apart from guiding the probe to the correct location, it can be equally difficult for a non-expert to identify relevant structures within the image. Automatic image processing can provide tools to help experienced as well as inexperienced operators with these tasks. In this paper, we propose a novel method based on convolutional neural networks, which can automatically detect 13 fetal standard views in freehand 2-D ultrasound data as well as provide a localization of the fetal structures via a bounding box. An important contribution is that the network learns to localize the target anatomy using weak supervision based on image-level labels only. The network architecture is designed to operate in real-time while providing optimal output for the localization task. We present results for real-time annotation, retrospective frame retrieval from saved videos, and localization on a very large and challenging dataset consisting of images and video recordings of full clinical anomaly screenings. We found that the proposed method achieved an average F1-score of 0.798 in a realistic classification experiment modeling real-time detection, and obtained a 90.09% accuracy for retrospective frame retrieval. Moreover, an accuracy of 77.8% was achieved on the localization task.


Subject(s)
Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Ultrasonography, Prenatal/methods; Algorithms; Female; Humans; Pregnancy; Video Recording
15.
IEEE Trans Med Imaging ; 36(8): 1597-1606, 2017 08.
Article in English | MEDLINE | ID: mdl-28436849

ABSTRACT

When integrating computational tools, such as automatic segmentation, into clinical practice, it is of utmost importance to be able to assess the level of accuracy on new data and, in particular, to detect when an automatic method fails. However, this is difficult to achieve due to the absence of ground truth. Segmentation accuracy on clinical data might be different from what is found through cross validation, because validation data are often used during incremental method development, which can lead to overfitting and unrealistic performance expectations. Before deployment, performance is quantified using different metrics, for which the predicted segmentation is compared with a reference segmentation, often obtained manually by an expert. But little is known about the real performance after deployment when a reference is unavailable. In this paper, we introduce the concept of reverse classification accuracy (RCA) as a framework for predicting the performance of a segmentation method on new data. In RCA, we take the predicted segmentation from a new image to train a reverse classifier, which is evaluated on a set of reference images with available ground truth. The hypothesis is that if the predicted segmentation is of good quality, then the reverse classifier will perform well on at least some of the reference images. We validate our approach on multi-organ segmentation with different classifiers and segmentation methods. Our results indicate that it is indeed possible to predict the quality of individual segmentations, in the absence of ground truth. Thus, RCA is ideal for integration into automatic processing pipelines in clinical routine and as a part of large-scale image analysis studies.
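A condensed sketch of the RCA idea under toy assumptions follows: the predicted segmentation of a new image serves as pseudo-ground-truth to train a reverse classifier, which is then evaluated on reference images with trusted labels, and the best agreement over that set acts as a proxy for the quality of the original prediction. The per-voxel features, classifier choice, and data below are simplified stand-ins, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / max(a.sum() + b.sum(), 1)

def rca_score(new_img, new_pred, reference_imgs, reference_gts):
    # Train the "reverse" classifier on the new image, using its *predicted* labels as targets.
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(new_img.reshape(-1, 1), new_pred.reshape(-1))
    # Evaluate it against every reference image that has a trusted ground truth.
    scores = [dice(clf.predict(img.reshape(-1, 1)).reshape(img.shape), gt)
              for img, gt in zip(reference_imgs, reference_gts)]
    return max(scores)  # proxy for the quality of new_pred

rng = np.random.default_rng(3)

def make_case():
    gt = np.zeros((32, 32)); gt[8:24, 8:24] = 1
    img = gt + rng.normal(0, 0.3, gt.shape)  # intensity roughly follows the label
    return img, gt

refs = [make_case() for _ in range(3)]
new_img, new_gt = make_case()
new_pred = new_gt.copy()  # pretend the segmenter did well on the new image
score = rca_score(new_img, new_pred, [i for i, _ in refs], [g for _, g in refs])
print("RCA proxy Dice:", round(score, 3))
```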


Subject(s)
Reproducibility of Results; Algorithms
16.
PLoS One ; 12(3): e0173900, 2017.
Article in English | MEDLINE | ID: mdl-28350816

ABSTRACT

Retinoblastoma and uveal melanoma are fast-spreading eye tumors usually diagnosed by using 2D Fundus Image Photography (Fundus) and 2D Ultrasound (US). Diagnosis and treatment planning of such diseases often require additional complementary imaging to confirm the tumor extent via 3D Magnetic Resonance Imaging (MRI). In this context, having automatic segmentations to estimate the size and the distribution of the pathological tissue would be advantageous for tumor characterization. Until now, the alternative has been the manual delineation of eye structures, a rather time-consuming and error-prone task, to be conducted in multiple MRI sequences simultaneously. This situation, and the lack of tools for accurate eye MRI analysis, reduces the interest in MRI beyond the qualitative evaluation of the optic nerve invasion and the confirmation of recurrent malignancies below calcified tumors. In this manuscript, we propose a new framework for the automatic segmentation of eye structures and ocular tumors in multi-sequence MRI. Our key contribution is the introduction of a pathological eye model from which Eye Patient-Specific Features (EPSF) can be computed. These features combine intensity and shape information of pathological tissue while embedded in healthy structures of the eye. We assess our work on a dataset of pathological patient eyes by computing the Dice Similarity Coefficient (DSC) of the sclera, the cornea, the vitreous humor, the lens and the tumor. In addition, we quantitatively show the superior performance of our pathological eye model as compared to the segmentation obtained by using a healthy model (over 4% DSC) and demonstrate the relevance of our EPSF, which improve the final segmentation regardless of the classifier employed.


Subject(s)
Eye Neoplasms/diagnostic imaging; Eye/diagnostic imaging; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging/methods; Algorithms; Cornea/anatomy & histology; Cornea/diagnostic imaging; Eye/anatomy & histology; Eye Neoplasms/pathology; Humans; Lens, Crystalline/diagnostic imaging; Models, Anatomic; Sclera/anatomy & histology; Sclera/diagnostic imaging; Vitreous Body/anatomy & histology; Vitreous Body/diagnostic imaging
17.
IEEE Trans Med Imaging ; 36(2): 674-683, 2017 02.
Article in English | MEDLINE | ID: mdl-27845654

ABSTRACT

In this paper, we propose DeepCut, a method to obtain pixelwise object segmentations given an image dataset labelled with weak annotations, in our case bounding boxes. It extends the approach of the well-known GrabCut [1] method to include machine learning by training a neural network classifier from bounding box annotations. We formulate the problem as an energy minimisation problem over a densely-connected conditional random field and iteratively update the training targets to obtain pixelwise object segmentations. Additionally, we propose variants of the DeepCut method and compare those to a naïve approach to CNN training under weak supervision. We test its applicability to solve brain and lung segmentation problems on a challenging fetal magnetic resonance dataset and obtain encouraging results in terms of accuracy.


Subject(s)
Neural Networks, Computer; Algorithms; Brain; Humans; Image Enhancement; Image Interpretation, Computer-Assisted; Machine Learning; Magnetic Resonance Imaging; Monte Carlo Method
18.
Med Image Anal ; 36: 61-78, 2017 02.
Article in English | MEDLINE | ID: mdl-27865153

ABSTRACT

We propose a dual pathway, 11-layers deep, three-dimensional Convolutional Neural Network for the challenging task of brain lesion segmentation. The devised architecture is the result of an in-depth analysis of the limitations of current networks proposed for similar applications. To overcome the computational burden of processing 3D medical scans, we have devised an efficient and effective dense training scheme which joins the processing of adjacent image patches into one pass through the network while automatically adapting to the inherent class imbalance present in the data. Further, we analyze the development of deeper, thus more discriminative 3D CNNs. In order to incorporate both local and larger contextual information, we employ a dual pathway architecture that processes the input images at multiple scales simultaneously. For post-processing of the network's soft segmentation, we use a 3D fully connected Conditional Random Field which effectively removes false positives. Our pipeline is extensively evaluated on three challenging tasks of lesion segmentation in multi-channel MRI patient data with traumatic brain injuries, brain tumours, and ischemic stroke. We improve on the state-of-the-art for all three applications, with top ranking performance on the public benchmarks BRATS 2015 and ISLES 2015. Our method is computationally efficient, which allows its adoption in a variety of research and clinical settings. The source code of our implementation is made publicly available.
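A highly simplified skeleton of a dual-pathway 3D CNN in the spirit of the architecture above is sketched below: one pathway sees the input at full resolution, a second sees a downsampled larger-context version, their features are fused, and a 1×1×1 convolution yields per-voxel class scores. Layer counts, channel widths, and the use of padded convolutions are simplifications of the real design, not a reimplementation of it.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualPathway3D(nn.Module):
    def __init__(self, in_ch=1, n_classes=2, width=8):
        super().__init__()
        def pathway():
            return nn.Sequential(
                nn.Conv3d(in_ch, width, 3, padding=1), nn.ReLU(),
                nn.Conv3d(width, width, 3, padding=1), nn.ReLU(),
            )
        self.normal = pathway()    # full-resolution local appearance
        self.context = pathway()   # downsampled input for larger spatial context
        self.classifier = nn.Conv3d(2 * width, n_classes, kernel_size=1)

    def forward(self, x):
        f_normal = self.normal(x)
        x_low = F.interpolate(x, scale_factor=0.5, mode="trilinear", align_corners=False)
        f_context = self.context(x_low)
        # Bring the context features back onto the full-resolution grid before fusing.
        f_context = F.interpolate(f_context, size=x.shape[2:], mode="trilinear", align_corners=False)
        return self.classifier(torch.cat([f_normal, f_context], dim=1))

if __name__ == "__main__":
    net = DualPathway3D()
    logits = net(torch.randn(1, 1, 32, 32, 32))
    print(logits.shape)  # torch.Size([1, 2, 32, 32, 32])
```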


Subject(s)
Brain Injuries, Traumatic/diagnostic imaging; Brain Ischemia/diagnostic imaging; Brain Neoplasms/diagnostic imaging; Brain/diagnostic imaging; Brain/pathology; Neural Networks, Computer; Brain Injuries, Traumatic/pathology; Brain Ischemia/pathology; Brain Neoplasms/pathology; Humans; Reproducibility of Results; Sensitivity and Specificity
19.
Med Image Anal ; 35: 250-269, 2017 01.
Article in English | MEDLINE | ID: mdl-27475911

ABSTRACT

Ischemic stroke is the most common cerebrovascular disease, and its diagnosis, treatment, and study rely on non-invasive imaging. Algorithms for stroke lesion segmentation from magnetic resonance imaging (MRI) volumes are intensely researched, but the reported results are largely incomparable due to different datasets and evaluation schemes. We approached this urgent problem of comparability with the Ischemic Stroke Lesion Segmentation (ISLES) challenge organized in conjunction with the MICCAI 2015 conference. In this paper, we propose a common evaluation framework, describe the publicly available datasets, and present the results of the two sub-challenges: Sub-Acute Stroke Lesion Segmentation (SISS) and Stroke Perfusion Estimation (SPES). A total of 16 research groups participated with a wide range of state-of-the-art automatic segmentation algorithms. A thorough analysis of the obtained data enables a critical evaluation of the current state of the art, recommendations for further developments, and the identification of remaining challenges. The segmentation of acute perfusion lesions addressed in SPES was found to be feasible. However, algorithms applied to sub-acute lesion segmentation in SISS still lack accuracy. Overall, no algorithmic characteristic of any method was found to be superior to the others. Instead, the characteristics of stroke lesion appearances, their evolution, and the observed challenges should be studied in detail. The annotated ISLES image datasets continue to be publicly available through an online evaluation system to serve as an ongoing benchmarking resource (www.isles-challenge.org).


Subject(s)
Algorithms; Benchmarking; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Stroke/diagnostic imaging; Humans