Results 1 - 20 of 63
1.
J Magn Reson Imaging ; 2023 Oct 06.
Article in English | MEDLINE | ID: mdl-37803817

ABSTRACT

BACKGROUND: The combination of anatomical MRI and deep learning-based methods such as convolutional neural networks (CNNs) is a promising strategy to build predictive models of multiple sclerosis (MS) prognosis. However, studies assessing the effect of different input strategies on model performance are lacking. PURPOSE: To compare whole-brain input sampling strategies with regional/specific-tissue strategies, which focus on areas known a priori to be relevant for disability accrual, to stratify MS patients based on their disability level. STUDY TYPE: Retrospective. SUBJECTS: Three hundred nineteen MS patients (382 brain MRI scans) with clinical assessment of disability level performed within the following 6 months (~70% training/~15% validation/~15% inference in-house dataset) and 440 MS patients from multiple centers (independent external validation cohort). FIELD STRENGTH/SEQUENCE: Single vendor 1.5 T or 3.0 T. Magnetization-Prepared Rapid Gradient-Echo and Fluid-Attenuated Inversion Recovery sequences. ASSESSMENT: A 7-fold patient cross-validation strategy was used to train a 3D-CNN to classify patients into two groups, Expanded Disability Status Scale score (EDSS) ≥ 3.0 or EDSS < 3.0. Two strategies were investigated: 1) a global approach, taking the whole brain volume as input, and 2) regional approaches using five different regions-of-interest: white matter, gray matter, subcortical gray matter, ventricles, and brainstem structures. The performance of the models was assessed in the in-house and the independent external cohorts. STATISTICAL TESTS: Balanced accuracy, sensitivity, specificity, area under the receiver operating characteristic (ROC) curve (AUC). RESULTS: With the in-house dataset, the gray matter regional model showed the highest stratification accuracy (81%), followed by the global approach (79%). In the external dataset, without any further retraining, an accuracy of 72% was achieved for the white matter model and 71% for the global approach.
DATA CONCLUSION: The global approach offered the best trade-off between internal performance and external validation to stratify MS patients based on accumulated disability. EVIDENCE LEVEL: 4 TECHNICAL EFFICACY: Stage 2.
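As an illustrative aside to the abstract above: the global versus regional input strategies it compares amount to either feeding the whole brain volume to the 3D-CNN or zeroing out everything outside a region-of-interest first. The sketch below (hypothetical helper `make_cnn_input` and toy volume, not the authors' code) shows the masking step under that assumption:

```python
import numpy as np

def make_cnn_input(volume, roi_mask=None):
    """Build a model input following either the global strategy (whole-brain
    volume) or a regional strategy (voxels outside the region of interest
    are zeroed out before feeding the 3D-CNN)."""
    vol = volume.astype(np.float32)
    if roi_mask is not None:
        vol = vol * roi_mask.astype(np.float32)  # keep only ROI voxels
    # simple intensity standardisation over non-zero voxels
    nz = vol[vol != 0]
    if nz.size:
        vol = np.where(vol != 0, (vol - nz.mean()) / (nz.std() + 1e-8), 0.0)
    return vol

# toy example: a 4x4x4 "brain" with a 2-voxel "gray matter" ROI
brain = np.arange(1, 65, dtype=np.float32).reshape(4, 4, 4)
gm_mask = np.zeros_like(brain)
gm_mask[0, 0, :2] = 1
global_in = make_cnn_input(brain)
regional_in = make_cnn_input(brain, gm_mask)
print(int((regional_in != 0).sum()))  # only the ROI voxels survive masking
```

Both inputs keep the same spatial shape, so the same network architecture can be trained under either strategy.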

2.
Mult Scler ; 28(8): 1209-1218, 2022 07.
Article in English | MEDLINE | ID: mdl-34859704

ABSTRACT

BACKGROUND: Active (new/enlarging) T2 lesion counts are routinely used in the clinical management of multiple sclerosis. Thus, automated tools able to accurately identify active T2 lesions would be of high interest to neuroradiologists for assisting in their clinical activity. OBJECTIVE: To compare the accuracy of different visual and automated methods in detecting active T2 lesions and in identifying radiologically active patients. METHODS: One hundred multiple sclerosis patients underwent two magnetic resonance imaging examinations within 12 months. Four approaches were assessed for detecting active T2 lesions: (1) conventional neuroradiological reports; (2) prospective visual analyses performed by an expert; (3) an automated unsupervised tool; and (4) a supervised convolutional neural network. As a gold standard, a reference outcome was created by the consensus of two observers. RESULTS: The automated methods detected a higher number of active T2 lesions and a higher number of active patients, but also a higher number of false-positive active patients, than the visual methods. The convolutional neural network model was more sensitive in detecting active T2 lesions and active patients than the other automated method. CONCLUSION: Automated convolutional neural network models show potential as an aid to neuroradiological assessment in clinical practice, although visual supervision of the outcomes is still required.


Subjects
Multiple Sclerosis , Humans , Magnetic Resonance Imaging/methods , Multiple Sclerosis/pathology , Prospective Studies
3.
J Magn Reson Imaging ; 2021 Jun 16.
Article in English | MEDLINE | ID: mdl-34137113

ABSTRACT

BACKGROUND: Manual brain extraction from magnetic resonance (MR) images is time-consuming and prone to intra- and inter-rater variability. Several automated approaches have been developed to alleviate these constraints, including deep learning pipelines. However, these methods tend to perform worse on unseen magnetic resonance imaging (MRI) scanner vendors and different imaging protocols. PURPOSE: To present and evaluate for clinical use PARIETAL, a pre-trained deep learning brain extraction method. We compare its reproducibility in a scan/rescan analysis and its robustness among scanners of different manufacturers. STUDY TYPE: Retrospective. POPULATION: Twenty-one subjects (12 women, age range 22-48 years), each scanned and rescanned on three different MRI scanners. FIELD STRENGTH/SEQUENCE: T1-weighted images acquired on a 3-T Siemens scanner with a magnetization-prepared rapid gradient-echo sequence and on two 1.5-T scanners, Philips and GE, with spin-echo and spoiled gradient-recalled (SPGR) sequences, respectively. ASSESSMENT: Analysis of the intracranial cavity volumes obtained for each subject on the three different scanners and the scan/rescan acquisitions. STATISTICAL TESTS: Parametric permutation tests of the differences in volumes to rank and statistically evaluate the performance of PARIETAL compared to state-of-the-art methods. RESULTS: The mean absolute intracranial volume differences obtained by PARIETAL in the scan/rescan analysis were 1.88 mL, 3.91 mL, and 4.71 mL for the Siemens, GE, and Philips scanners, respectively. PARIETAL was the best-ranked method on the Siemens and GE scanners, dropping to Rank 2 on the Philips images. Intracranial volume differences for the same subject between scanners were 5.46 mL, 27.16 mL, and 30.44 mL for the GE/Philips, Siemens/Philips, and Siemens/GE comparisons, respectively.
The permutation tests revealed that PARIETAL was always in Rank 1, obtaining the most similar volumetric results between scanners. DATA CONCLUSION: PARIETAL accurately segments the brain and generalizes to images acquired at different sites without the need for retraining or fine-tuning. PARIETAL is publicly available. LEVEL OF EVIDENCE: 2 TECHNICAL EFFICACY STAGE: 2.
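A paired permutation test of the kind used above to rank methods by their per-subject volume errors can be sketched generically: under the null hypothesis the two method labels are exchangeable within each subject, so the sign of each per-subject difference may be flipped. This is a minimal sign-flip sketch on synthetic numbers, not the authors' statistical code:

```python
import numpy as np

def paired_permutation_test(errors_a, errors_b, n_perm=10000, seed=0):
    """Two-sided paired permutation test on per-subject absolute volume
    errors of two methods: under the null, the per-subject difference may
    have its sign flipped, and the observed mean difference is compared
    with the permutation distribution."""
    rng = np.random.default_rng(seed)
    d = np.asarray(errors_a, float) - np.asarray(errors_b, float)
    observed = abs(d.mean())
    signs = rng.choice([-1.0, 1.0], size=(n_perm, d.size))
    perm_means = np.abs((signs * d).mean(axis=1))
    # p-value: fraction of permuted statistics at least as extreme
    return (1 + (perm_means >= observed).sum()) / (n_perm + 1)

# toy scan/rescan errors (mL): method A is systematically smaller
a = [1.9, 2.1, 1.7, 2.0, 1.8, 2.2, 1.9, 2.0]
b = [4.5, 4.9, 4.2, 5.1, 4.4, 4.8, 5.0, 4.6]
p = paired_permutation_test(a, b)
print(p < 0.05)
```

With all eight differences pointing the same way, only the two all-same-sign permutations reach the observed statistic, so the test rejects the null.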

4.
Neuroimage ; 155: 159-168, 2017 07 15.
Article in English | MEDLINE | ID: mdl-28435096

ABSTRACT

In this paper, we present a novel automated method for white matter (WM) lesion segmentation in multiple sclerosis (MS) patient images. Our approach is based on a cascade of two 3D patch-wise convolutional neural networks (CNNs). The first network is trained to be highly sensitive, revealing possible candidate lesion voxels, while the second network is trained to reduce the number of misclassified voxels coming from the first network. This cascaded CNN architecture tends to learn well from a small (n≤35) set of labeled data of the same MRI contrast, which can be very interesting in practice, given the difficulty of obtaining manual label annotations and the large amount of available unlabeled magnetic resonance imaging (MRI) data. We evaluate the accuracy of the proposed method on the public MICCAI2008 MS lesion segmentation challenge dataset, comparing it with other state-of-the-art MS lesion segmentation tools. Furthermore, the proposed method is also evaluated on two private MS clinical datasets, where its performance is compared with that of recent publicly available state-of-the-art MS lesion segmentation methods. At the time of writing, our method is the best-ranked approach on the MICCAI2008 challenge, outperforming the other 60 participating methods when using all the available input modalities (T1-w, T2-w, and FLAIR), while still ranking near the top (3rd position) when using only the T1-w and FLAIR modalities. On clinical MS data, our approach exhibits a significant increase in WM lesion segmentation accuracy compared with the other evaluated methods, while also correlating highly (r≥0.97) with the expected lesion volume.
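The cascade logic described above (a sensitive first network proposing candidates, a second network pruning them) can be sketched at the voxel level. The two "networks" here are stand-in probability maps and the thresholds `t1`/`t2` are hypothetical, chosen only to illustrate the sensitivity/precision split:

```python
import numpy as np

def cascade_segment(prob_net1, prob_net2, t1=0.3, t2=0.7):
    """Two-stage cascade: the first (sensitive) network proposes candidate
    lesion voxels with a low threshold; the second network is applied only
    to those candidates and removes false positives with a stricter one."""
    candidates = prob_net1 >= t1               # stage 1: high sensitivity
    refined = candidates & (prob_net2 >= t2)   # stage 2: high precision
    return refined

# toy voxel-wise probabilities from the two (hypothetical) networks
p1 = np.array([0.9, 0.4, 0.2, 0.8, 0.35])
p2 = np.array([0.95, 0.1, 0.9, 0.8, 0.2])
print(cascade_segment(p1, p2).astype(int).tolist())  # → [1, 0, 0, 1, 0]
```

Note that the third voxel is rejected at stage 1 despite its high stage-2 score: the second network only ever sees candidates from the first.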


Subjects
Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Multiple Sclerosis/diagnostic imaging , Neural Networks, Computer , Neuroimaging/methods , Brain/diagnostic imaging , Brain/pathology , Humans , Multiple Sclerosis/pathology , White Matter/diagnostic imaging , White Matter/pathology
5.
J Magn Reson Imaging ; 41(1): 93-101, 2015 Jan.
Article in English | MEDLINE | ID: mdl-24459099

ABSTRACT

PURPOSE: Ground-truth annotations from the well-known Internet Brain Segmentation Repository (IBSR) datasets consider sulcal cerebrospinal fluid (SCSF) voxels as gray matter. This can lead to bias when evaluating the performance of tissue segmentation methods. In this work we compare the accuracy of 10 brain tissue segmentation methods, analyzing the effects of SCSF ground-truth voxels on accuracy estimations. MATERIALS AND METHODS: The set of methods comprises FAST, SPM5, SPM8, GAMIXTURE, ANN, FCM, KNN, SVPASEG, FANTASM, and PVC. Methods are evaluated using the original IBSR ground truth and ranked by means of their performance on pairwise comparisons using permutation tests. Afterward, the evaluation is repeated using the IBSR ground truth without considering SCSF. RESULTS: The Dice coefficient of all methods is affected by changes in SCSF annotations, especially for SPM5, SPM8, and FAST. When not considering SCSF voxels, SVPASEG (0.90 ± 0.01) and SPM8 (0.91 ± 0.01) are the methods from our study that appear most suitable for gray matter tissue segmentation, while FAST (0.89 ± 0.02) is the best tool for segmenting white matter tissue. CONCLUSION: The performance and accuracy of methods on IBSR images vary notably when SCSF voxels are not considered. The fact that three of the most common methods (FAST, SPM5, and SPM8) show an important change in accuracy suggests that these labeling differences should be taken into account in new comparative studies.
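The Dice coefficient underlying the scores reported above is simple to compute, and a toy example (hypothetical voxels, not IBSR data) shows the abstract's point: relabelling even one boundary SCSF-like voxel in the ground truth shifts the score of an otherwise unchanged segmentation.

```python
import numpy as np

def dice_coefficient(seg, gt):
    """Dice similarity coefficient between a binary segmentation and the
    ground truth: 2|A∩B| / (|A| + |B|)."""
    seg, gt = np.asarray(seg, bool), np.asarray(gt, bool)
    denom = seg.sum() + gt.sum()
    return 2.0 * np.logical_and(seg, gt).sum() / denom if denom else 1.0

# toy example: re-labelling one boundary (SCSF-like) voxel changes the score
gt_with_scsf    = np.array([1, 1, 1, 1, 0, 0])
gt_without_scsf = np.array([1, 1, 1, 0, 0, 0])  # 4th voxel relabelled
seg             = np.array([1, 1, 1, 0, 0, 0])
print(round(dice_coefficient(seg, gt_with_scsf), 3),
      round(dice_coefficient(seg, gt_without_scsf), 3))  # → 0.857 1.0
```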


Subjects
Algorithms , Brain/anatomy & histology , Datasets as Topic/statistics & numerical data , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Pattern Recognition, Automated/methods , Brain Mapping/methods , Humans , Reproducibility of Results
6.
Neuroradiology ; 57(10): 1031-43, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26227167

ABSTRACT

INTRODUCTION: Lesion segmentation plays an important role in the diagnosis and follow-up of multiple sclerosis (MS). This task is very time-consuming and subject to intra- and inter-rater variability. In this paper, we present a new tool for automated MS lesion segmentation using T1w and fluid-attenuated inversion recovery (FLAIR) images. METHODS: Our approach is based on two main steps: an initial brain tissue segmentation into gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) performed on the T1w images, followed by a second step in which lesions are segmented as outliers to the normal-appearing GM brain tissue on the FLAIR image. RESULTS: The tool has been validated using data from more than 100 MS patients acquired with different scanners and at different magnetic field strengths. Quantitative evaluation provided better performance in terms of precision while maintaining similar results for sensitivity and Dice similarity measures compared with those of other approaches. CONCLUSION: Our tool is implemented as a publicly available SPM8/12 extension that can be used by both the medical and research communities.
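One common way to formalize "outliers to normal-appearing GM on FLAIR" is a mean + k·std intensity threshold estimated from the GM distribution; the tool's exact outlier criterion may differ, so the sketch below (hypothetical function and synthetic intensities) is only illustrative:

```python
import numpy as np

def flair_outlier_lesions(flair, gm_mask, alpha=3.0):
    """Flag candidate lesions as hyperintense outliers to the GM intensity
    distribution on FLAIR: voxels brighter than mean + alpha * std of the
    GM intensities are selected."""
    gm_vals = flair[gm_mask.astype(bool)]
    threshold = gm_vals.mean() + alpha * gm_vals.std()
    return flair > threshold

rng = np.random.default_rng(42)
flair = rng.normal(100.0, 5.0, size=1000)  # normal-appearing tissue
flair[:5] = 180.0                          # a few lesion-like outliers
gm = np.ones_like(flair)
lesions = flair_outlier_lesions(flair, gm)
print(int(lesions.sum()))
```

With the outliers far above the threshold, exactly the five injected voxels are flagged in this synthetic example.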


Subjects
Algorithms , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Multiple Sclerosis/pathology , Pattern Recognition, Automated/methods , Software , Humans , Image Enhancement/methods , Machine Learning , Reproducibility of Results , Sensitivity and Specificity , Software Validation
7.
J Digit Imaging ; 28(5): 604-12, 2015 Oct.
Article in English | MEDLINE | ID: mdl-25720749

ABSTRACT

Breast density is a strong risk factor for breast cancer. In this paper, we present an automated approach for breast density segmentation in mammographic images based on supervised pixel-based classification using textural and morphological features. The objective of the paper is not only to show the feasibility of an automatic algorithm for breast density segmentation but also to prove its potential application to the study of breast density evolution in longitudinal studies. The database used here contains three complete screening examinations, acquired 2 years apart, of 130 different patients. The approach was validated by comparing manual expert annotations with automatically obtained estimations. Transversal analysis of breast density in craniocaudal (CC) and mediolateral oblique (MLO) views of both breasts acquired in the same study showed a correlation coefficient of ρ = 0.96 between the mammographic density percentages of the left and right breasts, whereas a comparison of the two mammographic views showed a correlation of ρ = 0.95. A longitudinal study of breast density confirmed the trend that the dense tissue percentage decreases over time, although we noticed that the rate of decrease depends on the initial amount of breast density.
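The transversal analysis above boils down to a Pearson correlation between per-patient density percentages from two views or breasts. A minimal sketch with synthetic numbers (the data and the noise level are invented, chosen only to mimic a strong left/right relationship):

```python
import numpy as np

rng = np.random.default_rng(7)
# hypothetical per-patient density percentages for 130 patients
left = rng.uniform(5, 80, size=130)
right = left + rng.normal(0, 4, size=130)  # strongly related measurements
rho = np.corrcoef(left, right)[0, 1]       # Pearson correlation coefficient
print(rho > 0.9)
```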


Subjects
Algorithms , Breast Neoplasms/diagnostic imaging , Breast/pathology , Image Processing, Computer-Assisted , Mammary Glands, Human/abnormalities , Radiographic Image Interpretation, Computer-Assisted , Aged , Breast Density , Breast Neoplasms/pathology , Feasibility Studies , Female , Humans , Longitudinal Studies , Middle Aged , Reproducibility of Results
8.
Neuroradiology ; 56(5): 363-74, 2014 May.
Article in English | MEDLINE | ID: mdl-24590302

ABSTRACT

INTRODUCTION: Time-series analysis of magnetic resonance images (MRI) is of great value for multiple sclerosis (MS) diagnosis and follow-up. In this paper, we present an unsupervised subtraction approach which incorporates multisequence information to deal with the detection of new MS lesions in longitudinal studies. METHODS: The proposed pipeline for detecting new lesions consists of the following steps: skull stripping, bias field correction, histogram matching, registration, white matter masking, image subtraction, automated thresholding, and postprocessing. We also combine the results of PD-w and T2-w images to reduce false-positive detections. RESULTS: Experimental tests are performed on 20 MS patients with two temporal studies separated by 12 (12M) or 48 (48M) months. The pipeline achieves very good performance, obtaining an overall sensitivity of 0.83 and 0.77 with a false discovery rate (FDR) of 0.14 and 0.18 for the 12M and 48M datasets, respectively. The most difficult situation for the pipeline is the detection of very small lesions, where the obtained sensitivity is lower and the FDR higher. CONCLUSION: Our fully automated approach is robust and accurate, allowing detection of newly appearing MS lesions. We believe that the pipeline can be applied to large collections of images and also be easily adapted to monitor other brain pathologies.
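The core subtraction-and-threshold step of a pipeline like the one above can be sketched on synthetic 2D arrays. The function names, the mean + k·std threshold, and the agreement rule for combining sequences are assumptions for illustration, not the authors' exact implementation:

```python
import numpy as np

def detect_new_lesions(baseline, followup, wm_mask, k=2.0):
    """Unsupervised subtraction step: after registration, positive intensity
    changes inside the white-matter mask that exceed mean + k*std of the
    subtraction image are flagged as new-lesion candidates."""
    diff = (followup - baseline) * wm_mask
    t = diff.mean() + k * diff.std()
    return (diff > t) & wm_mask.astype(bool)

def combine_sequences(det_t2, det_pd):
    """Combine T2-w and PD-w detections; requiring agreement between the
    two sequences reduces false positives."""
    return det_t2 & det_pd

base = np.zeros((8, 8))
follow = base.copy()
follow[2, 2] = follow[2, 3] = 50.0  # a new lesion appears at follow-up
wm = np.ones((8, 8))
det = detect_new_lesions(base, follow, wm)
print(int(det.sum()))  # → 2
```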


Subjects
Image Interpretation, Computer-Assisted , Magnetic Resonance Imaging/methods , Multiple Sclerosis/diagnosis , Humans , Longitudinal Studies
9.
Comput Biol Med ; 179: 108811, 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38991315

ABSTRACT

Brain atrophy measurements derived from magnetic resonance imaging (MRI) are a promising marker for the diagnosis and prognosis of neurodegenerative pathologies such as Alzheimer's disease or multiple sclerosis. However, their use in individualized assessments is currently discouraged due to a series of technical and biological issues. In this work, we present a deep learning pipeline for segmentation-based brain atrophy quantification that improves upon the automated labels of the reference method from which it learns. This goal is achieved through tissue similarity regularization that exploits the a priori knowledge that scans from the same subject made within a short interval must have similar tissue volumes. To train the presented pipeline, we use unlabeled pairs of T1-weighted MRI scans having a tissue similarity prior, and generate the target brain tissue segmentations in a fully automated manner using the fsl_anat pipeline implemented in the FMRIB Software Library (FSL). Tissue similarity regularization is enforced during training through a weighted loss term that penalizes tissue volume differences between short-interval scan pairs from the same subject. At inference time, the pipeline performs end-to-end skull stripping and brain tissue segmentation from a single T1-weighted MRI scan in its native space, i.e., without performing image interpolation. For longitudinal evaluation, each image is first segmented independently, and then measures of change are computed. We evaluate the presented pipeline on two different MRI datasets, MIRIAD and ADNI1, which have longitudinal and short-interval imaging from healthy controls (HC) and Alzheimer's disease (AD) subjects. In short-interval scan pairs, tissue similarity regularization reduces the quantification error and improves the consistency of measured tissue volumes.
In the longitudinal case, the proposed pipeline shows reduced variability of atrophy measures and higher effect sizes of differences in annualized rates between HC and AD subjects. Our pipeline obtains a Cohen's d effect size of d=2.07 on the MIRIAD dataset, an increase from the reference pipeline used to train it (d=1.01), and higher than that of SIENA (d=1.73), a well-known state-of-the-art approach. In the ADNI1 dataset, the proposed pipeline improves its effect size (d=1.37) with respect to the reference pipeline (d=0.80) and surpasses SIENA (d=1.33). The proposed data-driven deep learning regularization reduces the biases and systematic errors learned from the reference segmentation method, which is used to generate the training targets. Improving the accuracy and reliability of atrophy quantification methods is essential to unlock brain atrophy as a diagnostic and prognostic marker in neurodegenerative pathologies.
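The tissue-similarity loss term described above (penalizing volume differences between short-interval scans of the same subject) can be sketched on soft segmentations, where a class "volume" is the sum of its voxel probabilities. The function name and the absolute-difference form are assumptions for illustration; the paper specifies only that the loss term is weighted and penalizes tissue volume differences:

```python
import numpy as np

def tissue_similarity_loss(soft_seg_a, soft_seg_b, weight=1.0):
    """Sketch of a tissue-similarity regularisation term: for a pair of
    short-interval scans of the same subject, penalise differences between
    the tissue volumes implied by the soft segmentations (sum of class
    probabilities over voxels), averaged over tissue classes."""
    vols_a = soft_seg_a.reshape(soft_seg_a.shape[0], -1).sum(axis=1)
    vols_b = soft_seg_b.reshape(soft_seg_b.shape[0], -1).sum(axis=1)
    return weight * np.abs(vols_a - vols_b).mean()

# two toy 3-class (CSF/GM/WM) soft segmentations of a 4-voxel "scan"
seg_a = np.array([[0.2, 0.2, 0.2, 0.2],   # CSF probabilities
                  [0.5, 0.5, 0.5, 0.5],   # GM probabilities
                  [0.3, 0.3, 0.3, 0.3]])  # WM probabilities
seg_b = np.array([[0.2, 0.2, 0.2, 0.2],
                  [0.4, 0.4, 0.4, 0.4],
                  [0.4, 0.4, 0.4, 0.4]])
print(round(tissue_similarity_loss(seg_a, seg_b), 3))  # → 0.267
```

Because the term is differentiable in the soft segmentations, it can be added to a segmentation loss and minimized jointly during training.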

10.
iScience ; 27(2): 108881, 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38318348

ABSTRACT

Automated tools to detect large vessel occlusion (LVO) in acute ischemic stroke patients using brain computed tomography angiography (CTA) have been shown to reduce the time to treatment, leading to better clinical outcomes. A single CTA contains a large amount of information, and deep learning models have no obvious way of being conditioned on the areas most relevant for LVO detection, i.e., the vasculature structure. In this work, we compare and contrast strategies to make convolutional neural networks focus on the vasculature without discarding context information from the brain parenchyma, and propose an attention-inspired strategy to encourage this. We use brain CTAs from which we obtain 3D vasculature images. Then, we compare ways of combining the vasculature and the CTA images using a general-purpose network trained to detect LVO. The results show that the proposed strategies improve LVO detection and could potentially help to learn other cerebrovascular-related tasks.
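One simple attention-inspired way to combine a vasculature image with the raw CTA, in the spirit of the strategies compared above, is to gate the CTA with the vessel map and keep the raw CTA as a second channel so parenchyma context is preserved. The exact combination used in the paper may differ; this is a hedged sketch:

```python
import numpy as np

def attention_combine(cta, vessel_prob):
    """Attention-inspired combination sketch: the vasculature probability
    map gates the CTA voxel-wise, emphasising vessel voxels, while the raw
    CTA is kept as a second channel so parenchyma context is not discarded."""
    attended = cta * vessel_prob
    return np.stack([cta, attended], axis=0)  # 2-channel network input

cta = np.array([[10.0, 20.0], [30.0, 40.0]])
vessels = np.array([[1.0, 0.0], [0.5, 0.0]])
x = attention_combine(cta, vessels)
print(x.shape, x[1].tolist())  # → (2, 2, 2) [[10.0, 0.0], [15.0, 0.0]]
```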

11.
Comput Med Imaging Graph ; 103: 102157, 2023 01.
Article in English | MEDLINE | ID: mdl-36535217

ABSTRACT

Automated methods for segmentation-based brain volumetry may be confounded by the presence of white matter (WM) lesions, which introduce abnormal intensities that can alter the classification of not only neighboring but also distant brain tissue. These lesions are common in pathologies where brain volumetry is also an important prognostic marker, such as multiple sclerosis (MS), and thus reducing their effects is critical for improving volumetric accuracy and reliability. In this work, we analyze the effect of WM lesions on deep learning based brain tissue segmentation methods for brain volumetry and introduce techniques to reduce the error these lesions produce on the measured volumes. We propose a 3D patch-based deep learning framework for brain tissue segmentation which is trained on the outputs of a reference classical method. To deal more robustly with pathological cases having WM lesions, we use a combination of small patches and a percentile-based input normalization. To minimize the effect of WM lesions, we also propose a multi-task double U-Net architecture performing end-to-end inpainting and segmentation, along with a training data generation procedure. In the evaluation, we first analyze the error introduced by artificial WM lesions on our framework as well as in the reference segmentation method without the use of lesion inpainting techniques. To the best of our knowledge, this is the first analysis of the effect of WM lesions on a deep learning based tissue segmentation approach for brain volumetry. The proposed framework shows a significantly smaller and more localized error introduced by WM lesions than the reference segmentation method, which displays much larger global differences. We also evaluated the proposed lesion effect minimization technique by comparing the measured volumes before and after introducing artificial WM lesions into healthy images.
The proposed approach performing end-to-end inpainting and segmentation effectively reduces the error introduced by small and large WM lesions in the resulting volumetry, obtaining absolute volume differences of 0.01 ± 0.03% for GM and 0.02 ± 0.04% for WM. Increasing the accuracy and reliability of automated brain volumetry methods will reduce the sample size needed to establish meaningful correlations in clinical studies and allow its use in individualized assessments as a diagnostic and prognostic marker for neurodegenerative pathologies.
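The training-data generation idea above (corrupting healthy images with artificial WM lesions so the network can learn to inpaint and segment as if no lesion were present) can be sketched minimally. The darkening factor and helper name are hypothetical; the paper's actual lesion simulation is more elaborate:

```python
import numpy as np

def add_artificial_wm_lesions(t1, wm_mask, lesion_mask, hypo_factor=0.6):
    """Sketch of training-pair generation: artificial WM lesions are
    simulated by darkening T1 intensities inside a lesion mask restricted
    to white matter; the network input is the corrupted image and the
    inpainting target is the original image."""
    corrupted = t1.copy()
    lesion = lesion_mask.astype(bool) & wm_mask.astype(bool)
    corrupted[lesion] = t1[lesion] * hypo_factor  # lesion-like hypointensity
    return corrupted, t1  # (network input, inpainting target)

t1 = np.full((4, 4), 100.0)
wm = np.ones((4, 4))
lesion = np.zeros((4, 4))
lesion[1, 1] = 1
x, y = add_artificial_wm_lesions(t1, wm, lesion)
print(x[1, 1], y[1, 1])
```

Because the original healthy image is kept as the target, the same pair also lets one measure how much the simulated lesion perturbs a volumetry method's output.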


Subjects
Deep Learning , Multiple Sclerosis , White Matter , Humans , White Matter/diagnostic imaging , White Matter/pathology , Reproducibility of Results , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Brain/pathology , Multiple Sclerosis/diagnostic imaging , Multiple Sclerosis/pathology , Image Processing, Computer-Assisted/methods
12.
Neuroimage Clin ; 38: 103376, 2023.
Article in English | MEDLINE | ID: mdl-36940621

ABSTRACT

The application of convolutional neural networks (CNNs) to MRI data has emerged as a promising approach to achieving unprecedented levels of accuracy when predicting the course of neurological conditions, including multiple sclerosis, by means of extracting image features not detectable through conventional methods. Additionally, the study of CNN-derived attention maps, which indicate the most relevant anatomical features for CNN-based decisions, has the potential to uncover key disease mechanisms leading to disability accumulation. From a cohort of patients prospectively followed up after a first demyelinating attack, we selected those with T1-weighted and T2-FLAIR brain MRI sequences available for image analysis and a clinical assessment performed within the following six months (N = 319). Patients were divided into two groups according to expanded disability status scale (EDSS) score: ≥3.0 and < 3.0. A 3D-CNN model predicted the class using whole-brain MRI scans as input. A comparison with a logistic regression (LR) model using volumetric measurements as explanatory variables and a validation of the CNN model on an independent dataset with similar characteristics (N = 440) were also performed. The layer-wise relevance propagation method was used to obtain individual attention maps. The CNN model achieved a mean accuracy of 79% and proved to be superior to the equivalent LR-model (77%). Additionally, the model was successfully validated in the independent external cohort without any re-training (accuracy = 71%). Attention-map analyses revealed the predominant role of frontotemporal cortex and cerebellum for CNN decisions, suggesting that the mechanisms leading to disability accrual exceed the mere presence of brain lesions or atrophy and probably involve how damage is distributed in the central nervous system.


Subjects
Deep Learning , Multiple Sclerosis , Humans , Multiple Sclerosis/diagnostic imaging , Multiple Sclerosis/pathology , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Brain/pathology , Attention , Blindness/pathology
13.
Neuroradiology ; 54(8): 787-807, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22179659

ABSTRACT

INTRODUCTION: Multiple sclerosis (MS) is a serious disease, typically affecting the brain, for which diagnosis and monitoring of treatment efficacy are vital. Magnetic resonance imaging (MRI) is frequently used in serial brain imaging due to the rich and detailed information provided. METHODS: Time-series analysis of images is widely used for MS diagnosis and patient follow-up. However, conventional manual methods are time-consuming, subjective, and error-prone. Thus, the development of automated techniques for the detection and quantification of MS lesions is a major challenge. RESULTS: This paper presents an up-to-date review of the approaches which deal with time-series analysis of brain MRI for detecting active MS lesions and quantifying lesion load change. We provide a comprehensive reference source for researchers in which several approaches to change detection and quantification of MS lesions are investigated and classified. We also analyze the results provided by the approaches, discuss open problems, and point out possible future trends. CONCLUSION: Lesion detection approaches are required for the detection of static lesions and for diagnostic purposes, while either quantification of detected lesions or change detection algorithms are needed to follow up MS patients. However, no single approach has yet emerged as a standard for clinical practice that automatically provides an accurate quantification of MS lesion evolution. Future trends will focus on combining lesion detection in single studies with the analysis of change detection in serial MRI.


Subjects
Brain/pathology , Magnetic Resonance Imaging/methods , Multiple Sclerosis/pathology , Pattern Recognition, Automated , Contrast Media , Disease Progression , Humans
14.
Clin Pract ; 12(3): 350-362, 2022 May 20.
Article in English | MEDLINE | ID: mdl-35645317

ABSTRACT

The aim of this study is to show the usefulness of collaborative work in the evaluation of prostate cancer from T2-weighted MRI using a dedicated software tool. The variability of annotations on images of the prostate gland (central and peripheral zones as well as tumour) by two independent experts was first evaluated, and then compared with a consensus between these two experts. Using a prostate MRI database, experts drew regions of interest (ROIs) corresponding to healthy prostate (peripheral and central zones) and cancer. One of the experts then drew the ROI with knowledge of the other expert's ROI. The surface area of each ROI was used to measure the Hausdorff distance, and the Dice coefficient was measured from the respective contours. They were evaluated between the different experiments, taking the annotations of the second expert as the reference. The results showed that the significant differences between the two experts disappeared with collaborative work. To conclude, this study shows that collaborative work with a dedicated tool allows consensus between experts in the evaluation of prostate cancer from T2-weighted MRI.
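The symmetric Hausdorff distance used above to compare two experts' ROIs can be computed directly from contour point sets. This is a generic NumPy sketch on toy contours, not the study's software tool:

```python
import numpy as np

def hausdorff_distance(pts_a, pts_b):
    """Symmetric Hausdorff distance between two ROI contours given as
    (N, 2) arrays of points: the largest distance from any point on one
    contour to its nearest point on the other."""
    a, b = np.asarray(pts_a, float), np.asarray(pts_b, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# two toy square contours, the second shifted by 3 pixels along x
sq_a = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
sq_b = sq_a + np.array([3, 0])
print(hausdorff_distance(sq_a, sq_b))  # → 3.0
```

Unlike the Dice coefficient, which measures overlap, the Hausdorff distance captures the worst-case boundary disagreement between two annotations.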

15.
Front Neurosci ; 16: 954662, 2022.
Article in English | MEDLINE | ID: mdl-36248650

ABSTRACT

The assessment of disease activity using serial brain MRI scans is one of the most valuable strategies for monitoring treatment response in patients with multiple sclerosis (MS) receiving disease-modifying treatments. Recently, several deep learning approaches have been proposed to improve this analysis, obtaining a good trade-off between sensitivity and specificity, especially when using T1-w and T2-FLAIR images as inputs. However, the need to acquire two different types of images is time-consuming, costly, and not always possible in clinical practice. In this paper, we investigate an approach to generate synthetic T1-w images from T2-FLAIR images and subsequently analyse the impact of using original and synthetic T1-w images on the performance of a state-of-the-art approach for longitudinal MS lesion detection. We evaluate our approach on a dataset containing 136 images from MS patients, 73 of them with lesion activity (the appearance of new T2 lesions in follow-up scans). To evaluate the synthesis of the images, we analyse the structural similarity index metric and the median absolute error and obtain consistent results. To study the impact of synthetic T1-w images, we evaluate the performance of the new lesion detection approach when using (1) both T2-FLAIR and original T1-w images, (2) only T2-FLAIR images, and (3) both T2-FLAIR and synthetic T1-w images. Sensitivities of 0.75, 0.63, and 0.81, respectively, were obtained at the same false-positive rate (0.14) for all experiments. In addition, we also present the results obtained when using the data from the international MSSEG-2 challenge, which also show an improvement when including synthetic T1-w images. In conclusion, we show that the use of synthetic images can compensate for the lack of data or even be used instead of the original images to homogenize the contrast of different acquisitions in new T2 lesion detection algorithms.

16.
Front Neurosci ; 16: 1007619, 2022.
Article in English | MEDLINE | ID: mdl-36507318

ABSTRACT

Longitudinal magnetic resonance imaging (MRI) has an important role in multiple sclerosis (MS) diagnosis and follow-up. Specifically, the presence of new lesions on brain MRI scans is considered a robust predictive biomarker for the disease progression. New lesions are a high-impact prognostic factor to predict evolution to MS or risk of disability accumulation over time. However, the detection of this disease activity is performed visually by comparing the follow-up and baseline scans. Due to the presence of small lesions, misregistration, and high inter-/intra-observer variability, this detection of new lesions is prone to errors. In this direction, one of the last Medical Image Computing and Computer Assisted Intervention (MICCAI) challenges was dealing with this automatic new lesion quantification. The MSSEG-2: MS new lesions segmentation challenge offers an evaluation framework for this new lesion segmentation task with a large database (100 patients, each with two-time points) compiled from the OFSEP (Observatoire français de la sclérose en plaques) cohort, the French MS registry, including 3D T2-w fluid-attenuated inversion recovery (T2-FLAIR) images from different centers and scanners. Apart from a change in centers, MRI scanners, and acquisition protocols, there are more challenges that hinder the automated detection process of new lesions such as the need for large annotated datasets, which may be not easily available, or the fact that new lesions are small areas producing a class imbalance problem that could bias trained models toward the non-lesion class. In this article, we present a novel automated method for new lesion detection of MS patient images. Our approach is based on a cascade of two 3D patch-wise fully convolutional neural networks (FCNNs). The first FCNN is trained to be more sensitive revealing possible candidate new lesion voxels, while the second FCNN is trained to reduce the number of misclassified voxels coming from the first network. 
3D T2-FLAIR images from the two time points were pre-processed and linearly co-registered. Afterward, a fully convolutional neural network, whose only inputs were the baseline and follow-up images, was trained to detect new MS lesions. Our approach obtained a mean segmentation Dice similarity coefficient of 0.42 with a detection F1-score of 0.5. Compared to the challenge participants, we obtained one of the highest lesion-wise precision scores (PPVL = 0.52) together with a lesion-wise detection sensitivity (SensL) of 0.53.
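The two-stage cascade described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: `cascade_mask`, `dice`, the thresholds `t1`/`t2`, and the toy probability arrays are all assumed names and values chosen for demonstration; a permissive first-stage threshold keeps candidate voxels, and a stricter second stage prunes false positives among them.

```python
import numpy as np

def cascade_mask(prob_stage1, prob_stage2, t1=0.1, t2=0.5):
    """Two-stage cascade: stage 1 uses a permissive threshold to keep
    high-sensitivity candidate voxels; stage 2 re-scores only those
    candidates with a stricter threshold to prune false positives."""
    candidates = prob_stage1 > t1
    return candidates & (prob_stage2 > t2)

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

# Toy example: stage 1 flags many voxels, stage 2 keeps only the real ones.
p1 = np.array([0.9, 0.4, 0.2, 0.05])   # permissive first-stage scores
p2 = np.array([0.8, 0.1, 0.6, 0.9])    # refined second-stage scores
gt = np.array([1, 0, 1, 0])            # ground-truth new-lesion voxels
mask = cascade_mask(p1, p2)            # → [True, False, True, False]
```

In practice both stages would be 3D patch-wise FCNNs; only the thresholding logic that combines their outputs is shown here.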

17.
Neuroinformatics ; 19(3): 477-492, 2021 07.
Article in English | MEDLINE | ID: mdl-33389607

ABSTRACT

Brain atrophy quantification plays a fundamental role in neuroinformatics, since it permits studying brain development and neurological disorders. However, the lack of a ground truth prevents testing the accuracy of longitudinal atrophy quantification methods. We propose a deep learning framework to generate longitudinal datasets by deforming T1-w brain magnetic resonance imaging scans as guided by segmentation maps. Our proposal incorporates a cascaded multi-path U-Net optimised with a multi-objective loss, which allows its paths to generate different brain regions accurately. We provided our model with baseline scans and real follow-up segmentation maps from two longitudinal datasets, ADNI and OASIS, and observed that our framework could produce synthetic follow-up scans that matched the real ones (total scans = 584; median absolute error: 0.03 ± 0.02; structural similarity index: 0.98 ± 0.02; Dice similarity coefficient: 0.95 ± 0.02; percentage of brain volume change: 0.24 ± 0.16; Jacobian integration: 1.13 ± 0.05). Compared to two relevant works generating brain lesions using U-Nets and conditional generative adversarial networks (CGAN), our proposal outperformed them significantly in most cases (p < 0.01), except in the delineation of brain edges, where the CGAN took the lead (Jacobian integration: ours 1.13 ± 0.05 vs CGAN 1.00 ± 0.02; p < 0.01). We examined whether changes induced with our framework were detected by FAST, SPM, SIENA, SIENAX, and the Jacobian integration method, and observed that induced and detected changes were highly correlated (Adj. R2 > 0.86). Our preliminary results on harmonised datasets showed the potential of our framework to be applied to various data collections without further adjustment.
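Two of the quantities evaluated above can be illustrated with a short sketch. This is not the paper's code: `pbvc` and `multi_region_loss` are hypothetical names, and the per-region weighted loss is only a simplified stand-in for the multi-objective loss that lets each U-Net path specialise on one brain region.

```python
import numpy as np

def pbvc(baseline_vol, followup_vol):
    """Percentage of brain volume change between two time points."""
    return 100.0 * (followup_vol - baseline_vol) / baseline_vol

def multi_region_loss(pred, target, region_map, weights):
    """Weighted sum of per-region mean absolute errors, mimicking a
    multi-objective loss where each term covers one labelled region."""
    total = 0.0
    for label, w in weights.items():
        m = region_map == label
        total += w * np.abs(pred[m] - target[m]).mean()
    return total

# Toy example: two regions, with tissue (label 1) weighted more heavily.
pred    = np.array([0.2, 0.8, 0.5, 0.5])
target  = np.array([0.0, 1.0, 0.5, 0.5])
regions = np.array([1, 1, 2, 2])        # 1 = tissue, 2 = background
loss = multi_region_loss(pred, target, regions, {1: 2.0, 2: 1.0})
```

A real implementation would apply such a loss per path of the cascaded U-Net on full 3D volumes; the region-masked weighting is the point being illustrated.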


Subjects
Magnetic Resonance Imaging , Neural Networks, Computer , Atrophy , Brain/diagnostic imaging , Brain/pathology , Humans , Image Processing, Computer-Assisted
18.
Front Neurosci ; 15: 608808, 2021.
Article in English | MEDLINE | ID: mdl-33994917

ABSTRACT

Segmentation of brain structures from magnetic resonance imaging (MRI) is an indispensable step in clinical practice. Morphological changes of sub-cortical brain structures and quantification of brain lesions are considered biomarkers of neurological and neurodegenerative disorders and are used for diagnosis, treatment planning, and monitoring disease progression. In recent years, deep learning methods have shown outstanding performance in medical image segmentation. However, these methods suffer from a generalisability problem due to inter-centre and inter-scanner variabilities in MRI images. The main objective of this study is to develop an automated deep learning segmentation approach that is accurate and robust to variabilities in scanner and acquisition protocols. In this paper, we propose a transductive transfer learning approach for domain adaptation to reduce the domain-shift effect in brain MRI segmentation. The transductive scenario assumes that there are sets of images from two different domains: (1) source images with manually annotated labels; and (2) target images without expert annotations. The network is then jointly optimised, integrating both source and target images into the transductive training process, to segment the regions of interest and to minimise the domain-shift effect. We propose a histogram loss at the feature level to carry out this optimisation. To demonstrate the benefit of the proposed approach, the method has been tested on two different brain MRI segmentation problems using multi-centre and multi-scanner databases: (1) sub-cortical brain structure segmentation; and (2) white matter hyperintensities segmentation. The experiments showed that the segmentation performance of a pre-trained model could be significantly improved, by up to 10%.
For the first segmentation problem, the average Dice similarity coefficient (DSC) improved from 0.680 to 0.799, and for the second, from 0.504 to 0.602. Moreover, the improvements after domain adaptation were on par with or better than those of commonly used traditional unsupervised segmentation methods (FIRST and LST), while also achieving faster execution times. Taking this into account, this work presents one more step toward the practical implementation of deep learning algorithms in the clinical routine.
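The feature-level histogram loss can be sketched as follows. This is a simplified, non-differentiable stand-in for the paper's loss, meant only to show the idea of penalising the distance between the source- and target-domain feature distributions; the function name `histogram_loss` and the bin settings are assumptions.

```python
import numpy as np

def histogram_loss(feat_source, feat_target, bins=32, value_range=(0.0, 1.0)):
    """L1 distance between normalised feature histograms of the source
    and target domains; minimising it pulls the two feature
    distributions together."""
    h_s, _ = np.histogram(feat_source, bins=bins, range=value_range)
    h_t, _ = np.histogram(feat_target, bins=bins, range=value_range)
    h_s = h_s / max(h_s.sum(), 1)
    h_t = h_t / max(h_t.sum(), 1)
    return float(np.abs(h_s - h_t).sum())

rng = np.random.default_rng(0)
src       = rng.uniform(0.0, 1.0, 10_000)   # source-domain features
tgt_same  = rng.uniform(0.0, 1.0, 10_000)   # same distribution
tgt_shift = rng.uniform(0.5, 1.0, 10_000)   # domain-shifted features
```

A domain-shifted feature set yields a much larger loss than one drawn from the source distribution, which is what the joint transductive optimisation exploits; a trainable version would use a soft (differentiable) histogram instead of `np.histogram`.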

19.
Comput Med Imaging Graph ; 90: 101908, 2021 06.
Article in English | MEDLINE | ID: mdl-33901919

ABSTRACT

Hemorrhagic stroke involves the rupture of a vessel inside the brain and is characterized by high mortality rates. Even if the patient survives, stroke can cause temporary or permanent disability depending on how long blood flow has been interrupted, so it is crucial to act fast to prevent irreversible damage. In this work, a deep learning-based approach to automatically segment hemorrhagic stroke lesions in CT scans is proposed. Our approach is based on a 3D U-Net architecture that incorporates the recently proposed squeeze-and-excitation blocks. A restrictive patch sampling is proposed to alleviate the class imbalance problem and to deal with intra-ventricular hemorrhage, which has not been considered a stroke lesion in our study. We also analyzed the effect of patch size, the use of different modalities, data augmentation, and the incorporation of different loss functions on the segmentation results. All analyses were performed using a five-fold cross-validation strategy on a clinical dataset of 76 cases. The obtained results demonstrate that the introduction of squeeze-and-excitation blocks, together with the restrictive patch sampling and symmetric modality augmentation, significantly improved the results, achieving a mean DSC of 0.86 ± 0.074 and showing promising automated segmentation performance.
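A squeeze-and-excitation block, as used in the architecture above, can be sketched in a few lines. This is a generic NumPy illustration of the mechanism, not the paper's code: `se_block` and the random weights are assumptions, and a real network would learn `w1`/`w2` and add biases.

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-excitation on a (C, D, H, W) feature map:
    global-average-pool each channel ("squeeze"), pass the result
    through a small two-layer bottleneck ("excitation"), then rescale
    every channel by its learned attention weight in (0, 1)."""
    z = x.mean(axis=(1, 2, 3))                                  # squeeze: (C,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ z, 0.0))))   # sigmoid(fc2(relu(fc1(z))))
    return x * s[:, None, None, None]                           # channel-wise rescale

C, r = 8, 2                           # channels and reduction ratio
rng = np.random.default_rng(1)
x  = rng.normal(size=(C, 4, 4, 4))    # toy 3D feature map
w1 = rng.normal(size=(C // r, C))     # reduction layer weights
w2 = rng.normal(size=(C, C // r))     # expansion layer weights
y  = se_block(x, w1, w2)
```

Because the attention weights are sigmoid outputs, the block can only attenuate channels, never amplify them, which is the recalibration effect that improved the segmentation results.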


Subjects
Hemorrhagic Stroke , Stroke , Humans , Image Processing, Computer-Assisted , Stroke/diagnostic imaging , Tomography, X-Ray Computed
20.
J Digit Imaging ; 23(5): 527-37, 2010 Oct.
Article in English | MEDLINE | ID: mdl-19506953

ABSTRACT

Studies reported in the literature indicate that increased breast density is one of the strongest indicators of the risk of developing breast cancer. In this paper, we present an approach to automatically evaluate breast density by segmenting the internal parenchyma into either the fatty or the dense class. Our approach is based on a statistical analysis of each pixel's neighbourhood to model both tissue types, thereby providing connected density clusters that take the spatial information of the breast into account. To show the robustness of our approach, experiments were performed on two different databases: the well-known Mammographic Image Analysis Society digitised database and a new full-field digital database of mammograms for which annotations were provided by radiologists. Quantitative and qualitative results show that our approach correctly detects dense breasts, segmenting the tissue types accordingly.
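The pixel-neighbourhood classification idea can be sketched with two Gaussian tissue models. This is an illustrative simplification, not the paper's method: `local_mean`, `classify_density`, and the toy means/standard deviations are assumptions; each pixel's neighbourhood mean is compared under the fatty and dense intensity models and assigned the more likely class.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_mean(img, k=3):
    """Mean intensity of each pixel's k x k neighbourhood (reflect-padded)."""
    p = k // 2
    padded = np.pad(img, p, mode="reflect")
    return sliding_window_view(padded, (k, k)).mean(axis=(-1, -2))

def classify_density(img, mu_fatty, sd_fatty, mu_dense, sd_dense, k=3):
    """Label each pixel fatty (0) or dense (1) by comparing Gaussian
    log-likelihoods of its neighbourhood mean under the two tissue models."""
    m = local_mean(img, k)
    ll_f = -0.5 * ((m - mu_fatty) / sd_fatty) ** 2 - np.log(sd_fatty)
    ll_d = -0.5 * ((m - mu_dense) / sd_dense) ** 2 - np.log(sd_dense)
    return (ll_d > ll_f).astype(np.uint8)

# Toy mammogram patch: left half dark (fatty), right half bright (dense).
img = np.zeros((8, 8))
img[:, 4:] = 200.0
labels = classify_density(img, mu_fatty=10.0, sd_fatty=30.0,
                          mu_dense=190.0, sd_dense=30.0)
```

Using the neighbourhood mean rather than the raw pixel value is what yields spatially connected clusters instead of isolated misclassified pixels.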


Subjects
Breast Neoplasms/diagnostic imaging , Densitometry/methods , Models, Statistical , Pattern Recognition, Automated/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Algorithms , Breast/pathology , Breast Neoplasms/pathology , Female , Humans , Radiographic Image Enhancement/methods