Results 1 - 20 of 38
1.
Med Image Anal ; 93: 103085, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38219499

ABSTRACT

Recently, deep reinforcement learning (RL) has been proposed to learn the tractography procedure and train agents to reconstruct the structure of the white matter without manually curated reference streamlines. While the reported performance was competitive, the proposed framework is complex, and little is known about the role and impact of its multiple parts. In this work, we thoroughly explore the different components of the framework, such as the choice of RL algorithm, seeding strategy, input signal, and reward function, and shed light on their impact. Approximately 7,400 models were trained for this work, totalling nearly 41,000 h of GPU time. Our goal is to guide researchers eager to explore the possibilities of deep RL for tractography by exposing what works and what does not with this category of approach. As such, we ultimately propose a series of recommendations concerning the choice of RL algorithm, the input to the agents, the reward function, and more, to help future work using reinforcement learning for tractography. We also release the open-source codebase, trained models, and datasets for users and researchers wanting to explore reinforcement learning for tractography.
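To make the "agents learn the tractography procedure" idea concrete, the sketch below shows the kind of agent-environment rollout such frameworks rely on: the state is built from the local diffusion signal, the action is a unit step direction, and the episode stops when the streamline leaves the tracking mask. This is a minimal, hypothetical illustration; names such as `signal_volume`, `policy`, and `step_size` are stand-ins and are not taken from the released codebase.

```python
# Minimal, hypothetical sketch of an RL tractography rollout: the agent reads the local
# diffusion signal, outputs a unit direction, and the streamline grows until a stopping
# criterion is met. All names and sizes are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
signal_volume = rng.random((32, 32, 32, 28))   # e.g., SH coefficients per voxel
tracking_mask = np.ones((32, 32, 32), bool)
step_size = 0.5                                 # in voxels, for simplicity

def get_state(pos, prev_dir):
    """State = local signal at the current voxel + previous step direction."""
    x, y, z = np.round(pos).astype(int)
    return np.concatenate([signal_volume[x, y, z], prev_dir])

def policy(state):
    """Stand-in for the trained actor: returns a unit direction."""
    d = rng.normal(size=3)
    return d / np.linalg.norm(d)

def rollout(seed_pos, max_steps=200):
    pos, prev_dir = np.asarray(seed_pos, float), np.array([0.0, 0.0, 1.0])
    streamline = [pos.copy()]
    for _ in range(max_steps):
        action = policy(get_state(pos, prev_dir))
        pos = pos + step_size * action
        if not (0 <= pos.min() and pos.max() < 31) or \
           not tracking_mask[tuple(np.round(pos).astype(int))]:
            break                               # left the tracking mask
        streamline.append(pos.copy())
        prev_dir = action
    return np.array(streamline)

print(rollout([16, 16, 16]).shape)
```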


Subjects
Learning; Reinforcement, Psychology; Humans; Reward; Algorithms
2.
Neuroimage ; 279: 120288, 2023 10 01.
Article in English | MEDLINE | ID: mdl-37495198

ABSTRACT

White matter bundle segmentation is a cornerstone of modern tractography to study the brain's structural connectivity in domains such as neurological disorders, neurosurgery, and aging. In this study, we present FIESTA (FIbEr Segmentation in Tractography using Autoencoders), a reliable and robust, fully automated, and easily semi-automatically calibrated pipeline based on deep autoencoders that can dissect and fully populate white matter bundles. This pipeline is built upon previous works that demonstrated how autoencoders can be used successfully for streamline filtering, bundle segmentation, and streamline generation in tractography. Our proposed method improves bundle segmentation coverage by recovering hard-to-track bundles with generative sampling through the latent space seeding of the subject bundle and the atlas bundle. A latent space of streamlines is learned using autoencoder-based modeling combined with contrastive learning. Using an atlas of bundles in standard (MNI) space, our proposed method segments new tractograms using the autoencoder latent distance between each tractogram streamline and its closest neighbor bundle in the atlas. Intra-subject bundle reliability is improved by recovering hard-to-track streamlines, using the autoencoder to generate new streamlines that increase the spatial coverage of each bundle while remaining anatomically correct. Results show that our method is more reliable than state-of-the-art automated virtual dissection methods such as RecoBundles, RecoBundlesX, TractSeg, White Matter Analysis, and XTRACT. Our framework allows for the transition from one anatomical bundle definition to another with marginal calibration effort. Overall, these results show that our framework improves the practicality and usability of current state-of-the-art bundle segmentation frameworks.
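The "latent distance to the closest atlas bundle" step can be pictured with a nearest-neighbour query over streamline embeddings, as in the minimal sketch below. The embeddings, bundle labels, and per-bundle thresholds are random stand-ins for the trained FIESTA model and its calibration, so only the mechanism is illustrated.

```python
# Minimal sketch of latent-distance assignment: each subject streamline is labelled with
# the atlas bundle of its nearest neighbour in the autoencoder latent space, provided the
# distance falls under a calibrated per-bundle threshold. All values here are stand-ins.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
latent_dim = 32
atlas_embeddings = rng.normal(size=(5000, latent_dim))     # atlas streamline embeddings
atlas_bundle_ids = rng.integers(0, 30, size=5000)           # 30 bundles
subject_embeddings = rng.normal(size=(2000, latent_dim))    # new tractogram embeddings
threshold = {b: 5.0 for b in range(30)}                     # calibrated per bundle

nn = NearestNeighbors(n_neighbors=1).fit(atlas_embeddings)
dist, idx = nn.kneighbors(subject_embeddings)
labels = np.full(len(subject_embeddings), -1)                # -1 = unassigned
for i, (d, j) in enumerate(zip(dist[:, 0], idx[:, 0])):
    bundle = atlas_bundle_ids[j]
    if d <= threshold[bundle]:
        labels[i] = bundle

print("assigned:", (labels >= 0).sum(), "of", len(labels))
```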


Subjects
Diffusion Tensor Imaging; White Matter; Humans; Diffusion Tensor Imaging/methods; Reproducibility of Results; Image Processing, Computer-Assisted/methods; White Matter/diagnostic imaging; Dissection; Brain/diagnostic imaging
3.
J Imaging ; 9(3), 2023 Mar 16.
Article in English | MEDLINE | ID: mdl-36976120

ABSTRACT

Generative adversarial networks (GANs) have become increasingly powerful, generating photorealistic images that mimic the content of the datasets they were trained to replicate. One recurrent question in medical imaging is whether GANs can be as effective at generating workable medical data as they are at generating realistic RGB images. In this paper, we perform a multi-GAN and multi-application study to gauge the benefits of GANs in medical imaging. We tested various GAN architectures, from the basic DCGAN to more sophisticated style-based GANs, on three medical imaging modalities and organs, namely cardiac cine-MRI, liver CT, and RGB retina images. GANs were trained on well-known and widely utilized datasets, from which their FID scores were computed to measure the visual acuity of their generated images. We further tested their usefulness by measuring the segmentation accuracy of a U-Net trained on these generated images and the original data. The results reveal that GANs are far from being equal, as some are ill-suited for medical imaging applications while others perform much better. The top-performing GANs are capable of generating realistic-looking medical images by FID standards that can fool trained experts in a visual Turing test and comply with some metrics. However, segmentation results suggest that no GAN is capable of reproducing the full richness of medical datasets.
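The FID scores mentioned above compare the statistics of features extracted from real and generated images. The snippet below is an illustrative, from-scratch computation of the Fréchet distance between two feature sets; in practice the features would be Inception-v3 activations, whereas here random vectors stand in for both sets.

```python
# Illustrative Fréchet Inception Distance between two sets of image features
# (real vs. generated); random vectors stand in for Inception activations.
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_fake):
    mu1, mu2 = feats_real.mean(0), feats_fake.mean(0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):          # numerical noise can yield tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(s1 + s2 - 2.0 * covmean))

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 64))
fake = rng.normal(0.1, 1.1, size=(500, 64))
print(f"FID ~ {fid(real, fake):.3f}")
```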

4.
Med Image Anal ; 85: 102761, 2023 04.
Article in English | MEDLINE | ID: mdl-36773366

ABSTRACT

Current tractography methods use local orientation information to propagate streamlines from seed locations. Many such seeds produce streamlines that stop prematurely or fail to map the true white matter pathways because some bundles are "harder-to-track" than others. This results in tractography reconstructions with poor white and gray matter spatial coverage. In this work, we propose a generative, autoencoder-based method, named GESTA (Generative Sampling in Bundle Tractography using Autoencoders), that produces streamlines achieving better spatial coverage. Compared to other deep learning methods, our autoencoder-based framework uses a single model to generate streamlines in a bundle-wise fashion and does not require propagating local orientations. GESTA produces new and complete streamlines for any given white matter bundle, including hard-to-track bundles. Applied on top of a given tractogram, GESTA is shown to be effective in improving white matter volume coverage in poorly populated bundles, on both synthetic and in vivo human brain data. Our streamline evaluation framework ensures that the streamlines produced by GESTA are anatomically plausible and fit the local diffusion signal well. The streamline evaluation criteria assess anatomy (white matter coverage), local orientation alignment (direction), and streamline geometry features, and optionally, gray matter connectivity. The GESTA framework offers considerable gains in bundle overlap using a reduced set of seeding streamlines, with a 1.5x improvement on the "Fiber Cup" dataset and a 6x improvement on the ISMRM 2015 Tractography Challenge dataset. Similarly, it provides a 4x white matter volume increase on the BIL&GIN callosal homotopic dataset, and successfully populates bundles on the multi-subject, multi-site, whole-brain in vivo TractoInferno dataset. GESTA is thus a novel deep generative bundle tractography method that can be used to improve the tractography reconstruction of the white matter.
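The generative sampling step can be sketched as perturbing the latent codes of an existing bundle and decoding the perturbed codes into new candidate streamlines, which would then pass through the anatomy, direction, and geometry acceptance checks described above. The decoder below is an untrained stand-in for a trained streamline autoencoder; sizes and the sampling spread are illustrative only.

```python
# Minimal sketch of generative latent-space seeding: embeddings of a tracked bundle are
# perturbed with Gaussian noise and decoded into new candidate streamlines.
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim, n_points = 32, 128                      # streamlines resampled to 128 points
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                        nn.Linear(256, n_points * 3))

bundle_latents = torch.randn(200, latent_dim)       # embeddings of existing streamlines
sigma = 0.1                                         # sampling spread around the bundle
seeds = bundle_latents.repeat_interleave(5, dim=0)  # 5 candidates per seed streamline
samples = seeds + sigma * torch.randn_like(seeds)

with torch.no_grad():
    candidates = decoder(samples).reshape(-1, n_points, 3).numpy()
print(candidates.shape)                              # (1000, 128, 3) candidate streamlines
```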


Subjects
Diffusion Tensor Imaging; White Matter; Humans; Diffusion Tensor Imaging/methods; Image Processing, Computer-Assisted/methods; Brain/anatomy & histology; White Matter/anatomy & histology; Corpus Callosum
5.
Sci Data ; 9(1): 725, 2022 11 25.
Article in English | MEDLINE | ID: mdl-36433966

ABSTRACT

TractoInferno is the world's largest open-source multi-site tractography database, including both research- and clinical-like human acquisitions, aimed specifically at machine learning tractography approaches and related ML algorithms. It provides 284 samples acquired from 3 T scanners across 6 different sites. Available data includes T1-weighted images, single-shell diffusion MRI (dMRI) acquisitions, spherical harmonics fitted to the dMRI signal, fiber ODFs, and reference streamlines for 30 delineated bundles generated using 4 tractography algorithms, as well as masks needed to run tractography algorithms. Manual quality control was additionally performed at multiple steps of the pipeline. We showcase TractoInferno by benchmarking the learn2track algorithm and 5 variations of the same recurrent neural network architecture. Creating the TractoInferno database required approximately 20,000 CPU-hours of processing power, 200 man-hours of manual QC, 3,000 GPU-hours of training baseline models, and 4 TB of storage, to produce a final database of 350 GB. By providing a standardized training dataset and evaluation protocol, TractoInferno is an excellent tool to address common issues in machine learning tractography.

6.
IEEE Trans Med Imaging ; 41(10): 2867-2878, 2022 10.
Article in English | MEDLINE | ID: mdl-35533176

ABSTRACT

Convolutional neural networks (CNNs) have demonstrated their ability to segment 2D cardiac ultrasound images. However, despite recent successes in which accuracy has reached the intra-observer variability on end-diastole and end-systole images, CNNs still struggle to leverage temporal information to provide accurate and temporally consistent segmentation maps across the whole cycle. Such consistency is required to accurately describe the cardiac function, a necessary step in diagnosing many cardiovascular diseases. In this paper, we propose a framework to learn the 2D+time apical long-axis cardiac shape such that the segmented sequences can benefit from temporal and anatomical consistency constraints. Our method is a post-processing step that takes as input segmented echocardiographic sequences produced by any state-of-the-art method and processes them in two steps to (i) identify spatio-temporal inconsistencies according to the overall dynamics of the cardiac sequence and (ii) correct the inconsistencies. The identification and correction of cardiac inconsistencies rely on a constrained autoencoder trained to learn a physiologically interpretable embedding of cardiac shapes, where we can both detect and fix anomalies. We tested our framework on 98 full-cycle sequences from the CAMUS dataset, which are available alongside this paper. Our temporal regularization method not only improves the accuracy of the segmentation across whole sequences, but also enforces temporal and anatomical consistency.
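A hypothetical, stripped-down version of the detect-and-correct idea is sketched below: frames whose autoencoder reconstruction error is abnormally high are treated as inconsistent, and each flagged frame is replaced by decoding the latent interpolation of its temporal neighbours. The tiny untrained autoencoder and the outlier threshold are stand-ins for the trained constrained model and its learned embedding.

```python
# Hypothetical sketch of latent-space temporal correction on a segmented cardiac sequence.
import torch
import torch.nn as nn

torch.manual_seed(0)
H = W = 64
enc = nn.Sequential(nn.Flatten(), nn.Linear(H * W, 32))     # toy encoder
dec = nn.Sequential(nn.Linear(32, H * W), nn.Sigmoid())     # toy decoder

sequence = torch.rand(30, 1, H, W)                           # 30 segmented frames (one cycle)
with torch.no_grad():
    z = enc(sequence)                                        # (30, 32) latent trajectory
    recon = dec(z).reshape_as(sequence)
    err = ((recon - sequence) ** 2).mean(dim=(1, 2, 3))      # per-frame reconstruction error
    thresh = err.mean() + 2 * err.std()                      # flag outlier frames
    for t in torch.nonzero(err > thresh).flatten().tolist():
        lo, hi = max(t - 1, 0), min(t + 1, len(sequence) - 1)
        z_fixed = 0.5 * (z[lo] + z[hi])                      # interpolate neighbours in latent space
        sequence[t] = dec(z_fixed).reshape(1, H, W)
```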


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Echocardiography/methods; Heart/diagnostic imaging; Humans; Image Processing, Computer-Assisted/methods; Observer Variation
7.
Med Image Anal ; 77: 102347, 2022 04.
Article in English | MEDLINE | ID: mdl-35085952

ABSTRACT

Multiparametric magnetic resonance imaging (mp-MRI) has shown excellent results in the detection of prostate cancer (PCa). However, characterizing prostate lesion aggressiveness on mp-MRI sequences is impossible in clinical practice, and biopsy remains the reference to determine the Gleason score (GS). In this work, we propose a novel end-to-end multi-class network that jointly segments the prostate gland and cancer lesions with GS group grading. After encoding the information in a latent space, the network is separated into two branches: 1) the first branch performs prostate segmentation, and 2) the second branch uses this zonal prior as an attention gate for the detection and grading of prostate lesions. The model was trained and validated with a 5-fold cross-validation on a heterogeneous series of 219 MRI exams acquired on three different scanners prior to prostatectomy. In the free-response receiver operating characteristic (FROC) analysis for the detection of clinically significant lesions (defined as GS > 6), our model achieves 69.0%±14.5% sensitivity at 2.9 false positives per patient on the whole prostate and 70.8%±14.4% sensitivity at 1.5 false positives when considering the peripheral zone (PZ) only. Regarding the automatic GS group grading, Cohen's quadratic weighted kappa coefficient (κ) is 0.418±0.138, which is, to our knowledge, the best reported lesion-wise kappa for GS segmentation. The model has encouraging generalization capacities with κ=0.120±0.092 on the PROSTATEx-2 public dataset and achieves state-of-the-art performance for the segmentation of the whole prostate gland with a Dice of 0.875±0.013. Finally, we show that ProstAttention-Net improves performance in comparison to reference segmentation models, including U-Net, DeepLabv3+ and E-Net. The proposed attention mechanism is also shown to outperform Attention U-Net.
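The "zonal prior as an attention gate" idea can be sketched as the lesion branch's feature maps being modulated by the soft prostate probability predicted by the first branch, as below. The module is a toy stand-in with illustrative channel counts, not the published architecture.

```python
# Minimal sketch of attention gating by a zonal prior: lesion-branch features are
# multiplied by the soft prostate segmentation predicted by the first branch.
import torch
import torch.nn as nn

class ToyAttentionGate(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.prostate_head = nn.Conv2d(channels, 1, kernel_size=1)   # branch 1: gland
        self.lesion_head = nn.Conv2d(channels, 4, kernel_size=1)     # branch 2: GS groups

    def forward(self, shared_features):
        prostate_prob = torch.sigmoid(self.prostate_head(shared_features))
        gated = shared_features * prostate_prob       # attention gate from the zonal prior
        lesion_logits = self.lesion_head(gated)
        return prostate_prob, lesion_logits

x = torch.randn(2, 16, 96, 96)                         # decoded feature maps
prostate, lesions = ToyAttentionGate()(x)
print(prostate.shape, lesions.shape)
```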


Subjects
Multiparametric Magnetic Resonance Imaging; Prostatic Neoplasms; Humans; Magnetic Resonance Imaging; Male; Neoplasm Grading; Prostate; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/pathology
8.
Front Neuroimaging ; 1: 917806, 2022.
Article in English | MEDLINE | ID: mdl-37555143

ABSTRACT

Modern tractography algorithms such as anatomically-constrained tractography (ACT) are based on segmentation maps of white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). These maps are generally estimated from a T1-weighted (T1w) image and then registered into diffusion-weighted image (DWI) space. Registration of T1w images to diffusion space and partial volume estimation are challenging and rarely voxel-perfect. Diffusion-based segmentation would thus potentially allow high-quality anatomical priors to be injected into the tractography process without relying on this registration. On the other hand, even if FA-based tractography is possible without T1 registration, the literature shows that this technique suffers from multiple issues, such as holes in the tracking mask and a high proportion of broken and anatomically implausible streamlines. Therefore, there is an important need for a tissue segmentation algorithm that works directly in native diffusion space. We propose DORIS, a DWI-based deep learning segmentation algorithm. DORIS outputs 10 different tissue classes including WM, GM, CSF, ventricles, and 6 subcortical structures (putamen, pallidum, hippocampus, caudate, amygdala, and thalamus). DORIS was trained and validated on a wide range of subjects, including 1,000 individuals from 22 to 90 years old, from clinical and research DWI acquisitions drawn from 5 public databases. In the absence of a "true" ground truth in diffusion space, DORIS used a silver standard strategy based on Freesurfer output registered onto the DWI. This strategy is extensively evaluated and discussed in the current study. Segmentation maps provided by DORIS are quantitatively compared to Freesurfer and FSL-FAST, and the impacts on tractography are evaluated. Overall, we show that DORIS is fast, accurate, and reproducible, and that DORIS-based tractograms produce bundles with a longer mean length and fewer anatomically implausible streamlines.

9.
Front Neuroimaging ; 1: 930496, 2022.
Article in English | MEDLINE | ID: mdl-37555146

ABSTRACT

The physical and clinical constraints surrounding diffusion-weighted imaging (DWI) often limit the spatial resolution of the produced images to voxels up to eight times larger than those of T1w images. The detailed information contained in accessible high-resolution T1w images could help in the synthesis of diffusion images with a greater level of detail. However, the non-Euclidean nature of diffusion imaging hinders current deep generative models from synthesizing physically plausible images. In this work, we propose the first Riemannian network architecture for the direct generation of diffusion tensors (DT) and diffusion orientation distribution functions (dODFs) from high-resolution T1w images. Our integration of the log-Euclidean Metric into a learning objective guarantees, unlike standard Euclidean networks, the mathematically-valid synthesis of diffusion. Furthermore, our approach improves the fractional anisotropy mean squared error (FA MSE) between the synthesized diffusion and the ground-truth by more than 23% and the cosine similarity between principal directions by almost 5% when compared to our baselines. We validate our generated diffusion by comparing the resulting tractograms to our expected real data. We observe similar fiber bundles with streamlines having <3% difference in length, <1% difference in volume, and a visually close shape. While our method is able to generate diffusion images from structural inputs in a high-resolution space within 15 s, we acknowledge and discuss the limits of diffusion inference solely relying on T1w images. Our results nonetheless suggest a relationship between the high-level geometry of the brain and its overall white matter architecture that remains to be explored.
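The log-Euclidean learning objective mentioned above keeps predicted and reference diffusion tensors on the manifold of symmetric positive-definite matrices by comparing their matrix logarithms. Below is an illustrative sketch of such a loss, with the matrix logarithm computed through an eigendecomposition; the toy tensors are random stand-ins, and this is not the authors' implementation.

```python
# Sketch of a log-Euclidean MSE between predicted and reference diffusion tensors.
import torch

def matrix_log_spd(t, eps=1e-8):
    """Matrix logarithm of a batch of SPD matrices of shape (..., 3, 3)."""
    eigvals, eigvecs = torch.linalg.eigh(t)
    log_vals = torch.log(eigvals.clamp_min(eps))
    return eigvecs @ torch.diag_embed(log_vals) @ eigvecs.transpose(-1, -2)

def log_euclidean_mse(pred, target):
    return ((matrix_log_spd(pred) - matrix_log_spd(target)) ** 2).mean()

# Toy SPD tensors built as A @ A^T + small identity.
a = torch.randn(8, 3, 3)
pred = a @ a.transpose(-1, -2) + 1e-3 * torch.eye(3)
b = torch.randn(8, 3, 3)
target = b @ b.transpose(-1, -2) + 1e-3 * torch.eye(3)
print(float(log_euclidean_mse(pred, target)))
```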

10.
Can J Cardiol ; 38(2): 196-203, 2022 02.
Article in English | MEDLINE | ID: mdl-34780990

ABSTRACT

Generative adversarial networks (GANs) are state-of-the-art neural network models used to synthesise images and other data. GANs brought a considerable improvement to the quality of synthetic data, quickly becoming the standard for data-generation tasks. In this work, we summarise the applications of GANs in the field of cardiology, including the generation of realistic cardiac images, electrocardiography signals, and synthetic electronic health records. The utility of GAN-generated data is discussed with respect to research, clinical care, and academia. We also present illustrative examples of our GAN-generated cardiac magnetic resonance and echocardiography images, showing the evolution in image quality across 6 different models, which have become almost indistinguishable from real images. Finally, we discuss future applications, such as modality translation or patient trajectory modelling, as well as the pending challenges that GANs need to overcome, namely their training dynamics, medical fidelity, and data regulation and ethics questions, before they can be integrated into cardiology workflows.


Subjects
Cardiology; Diagnostic Imaging/methods; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Humans
11.
Med Image Anal ; 72: 102126, 2021 08.
Article in English | MEDLINE | ID: mdl-34161915

ABSTRACT

Current brain white matter fiber tracking techniques show a number of problems, including: generating large proportions of streamlines that do not accurately describe the underlying anatomy; extracting streamlines that are not supported by the underlying diffusion signal; and under-representing some fiber populations, among others. In this paper, we describe a novel autoencoder-based learning method to filter streamlines from diffusion MRI tractography and, hence, to obtain more reliable tractograms. Our method, dubbed FINTA (Filtering in Tractography using Autoencoders), uses raw, unlabeled tractograms to train the autoencoder and to learn a robust representation of brain streamlines. This embedding is then used to filter undesired streamline samples using a nearest neighbor algorithm. Our experiments on both synthetic and in vivo human brain diffusion MRI tractography data yield accuracy scores exceeding the 90% threshold on the test set. Results reveal that FINTA has a superior filtering performance compared to conventional, anatomy-based methods and to the state-of-the-art RecoBundles method. Additionally, we demonstrate that FINTA can be applied to partial tractograms without requiring changes to the framework. We also show that the proposed method generalizes well across different tracking methods and datasets, and significantly shortens the computation time for large (>1 M streamlines) tractograms. Together, this work brings forward a new autoencoder-based deep learning framework for tractography, which offers a flexible and powerful method for white matter filtering and bundling that could enhance tractometry and connectivity analyses.
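The "robust representation of brain streamlines" is typically obtained by compressing fixed-length resampled streamlines through an autoencoder; the latent vectors then feed the nearest-neighbour filtering step. The sketch below shows a hypothetical 1D-convolutional streamline autoencoder with illustrative layer sizes, not the published FINTA model.

```python
# Illustrative 1D-convolutional streamline autoencoder: each streamline is resampled to a
# fixed number of points and encoded to a compact latent vector used for filtering.
import torch
import torch.nn as nn

class StreamlineAE(nn.Module):
    def __init__(self, n_points=256, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * (n_points // 4), latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * (n_points // 4)),
            nn.Unflatten(1, (32, n_points // 4)),
            nn.ConvTranspose1d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 3, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):               # x: (batch, 3, n_points)
        z = self.encoder(x)
        return self.decoder(z), z

model = StreamlineAE()
streamlines = torch.randn(4, 3, 256)    # 4 streamlines, 256 points each
recon, latent = model(streamlines)
print(recon.shape, latent.shape)        # (4, 3, 256) (4, 32)
```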


Subjects
Image Processing, Computer-Assisted; White Matter; Algorithms; Brain/diagnostic imaging; Diffusion Tensor Imaging; Humans; White Matter/diagnostic imaging
12.
Med Image Anal ; 72: 102093, 2021 08.
Article in English | MEDLINE | ID: mdl-34023562

ABSTRACT

Diffusion MRI tractography is currently the only non-invasive tool able to assess the white-matter structural connectivity of a brain. Since its inception, it has been widely documented that tractography is prone to producing erroneous tracks while missing true positive connections. Recently, supervised learning algorithms have been proposed to learn the tracking procedure implicitly from data, without relying on anatomical priors. However, these methods rely on curated streamlines that are very hard to obtain. To remove the need for such data but still leverage the expressiveness of neural networks, we introduce Track-To-Learn: A general framework to pose tractography as a deep reinforcement learning problem. Deep reinforcement learning is a type of machine learning that does not depend on ground-truth data but rather on the concept of "reward". We implement and train algorithms to maximize returns from a reward function based on the alignment of streamlines with principal directions extracted from diffusion data. We show competitive results on known data and little loss of performance when generalizing to new, unseen data, compared to prior machine learning-based tractography algorithms. To the best of our knowledge, this is the first successful use of deep reinforcement learning for tractography.
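The reward "based on the alignment of streamlines with principal directions extracted from diffusion data" can be sketched as an absolute-cosine term against the closest local peak, optionally modulated by agreement with the previous step. The weights and names below are illustrative assumptions, not the exact published reward.

```python
# Hypothetical sketch of an alignment-based tractography reward.
import numpy as np

def alignment_reward(step_dir, peak_dirs, prev_dir, smooth_weight=0.5):
    step = step_dir / np.linalg.norm(step_dir)
    peaks = peak_dirs / np.linalg.norm(peak_dirs, axis=1, keepdims=True)
    peak_term = np.max(np.abs(peaks @ step))          # best |cos| with a local peak direction
    smooth_term = float(np.dot(prev_dir, step))       # agreement with the previous direction
    return peak_term + smooth_weight * max(smooth_term, 0.0)

peaks = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])  # two local fODF peaks in this voxel
print(alignment_reward(np.array([0.9, 0.1, 0.0]), peaks, np.array([1.0, 0.0, 0.0])))
```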


Subjects
Diffusion Tensor Imaging; White Matter; Algorithms; Brain/diagnostic imaging; Humans; Neural Networks, Computer
13.
IEEE Trans Med Imaging ; 40(7): 1737-1749, 2021 07.
Article in English | MEDLINE | ID: mdl-33710953

ABSTRACT

This paper presents a client/server privacy-preserving network in the context of multicentric medical image analysis. Our approach is based on adversarial learning, which encodes images to obfuscate the patient identity while preserving enough information for a target task. Our novel architecture is composed of three components: 1) an encoder network which removes identity-specific features from input medical images, 2) a discriminator network that attempts to identify the subject from the encoded images, and 3) a medical image analysis network which analyzes the content of the encoded images (segmentation in our case). By simultaneously fooling the discriminator and optimizing the medical analysis network, the encoder learns to remove privacy-specific features while keeping those essential to the target task. Our approach is illustrated on the problem of segmenting brain MRI from the large-scale Parkinson Progression Marker Initiative (PPMI) dataset. Using longitudinal data from PPMI, we show that the encoder learns to heavily distort input images while allowing for highly accurate segmentation results. Our results also demonstrate that an encoder trained on the PPMI dataset can be used for segmenting other datasets, without the need for retraining. The code is made available at: https://github.com/bachkimn/Privacy-Net-An-Adversarial-Approach-forIdentity-Obfuscated-Segmentation-of-MedicalImages.
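The adversarial objective described above can be sketched as two alternating updates: the encoder and segmentation network minimize the segmentation loss while maximizing the discriminator's identity loss, and the discriminator is then updated to recover the identity from the (detached) encoded images. The tiny networks and the weighting factor below are stand-ins, not the published architecture.

```python
# Minimal sketch of the adversarial identity-obfuscation objective (toy networks).
import torch
import torch.nn as nn

torch.manual_seed(0)
n_ids, lam = 10, 1.0
encoder = nn.Conv2d(1, 8, 3, padding=1)
segmenter = nn.Conv2d(8, 2, 1)                            # 2 segmentation classes
discriminator = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, n_ids))
seg_loss, id_loss = nn.CrossEntropyLoss(), nn.CrossEntropyLoss()
opt_main = torch.optim.Adam(list(encoder.parameters()) + list(segmenter.parameters()), 1e-3)
opt_disc = torch.optim.Adam(discriminator.parameters(), 1e-3)

img = torch.rand(4, 1, 64, 64)
mask = torch.randint(0, 2, (4, 64, 64))
identity = torch.randint(0, n_ids, (4,))

# 1) Update encoder + segmenter: segment well while fooling the discriminator.
code = encoder(img)
loss_main = seg_loss(segmenter(code), mask) - lam * id_loss(discriminator(code), identity)
opt_main.zero_grad(); loss_main.backward(); opt_main.step()

# 2) Update the discriminator on the (detached) encoded images.
loss_disc = id_loss(discriminator(encoder(img).detach()), identity)
opt_disc.zero_grad(); loss_disc.backward(); opt_disc.step()
```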


Subjects
Image Processing, Computer-Assisted; Privacy; Humans; Magnetic Resonance Imaging
14.
IEEE Trans Med Imaging ; 39(11): 3703-3713, 2020 11.
Article in English | MEDLINE | ID: mdl-32746116

ABSTRACT

Convolutional neural networks (CNNs) have had unprecedented success in medical imaging and, in particular, in medical image segmentation. However, despite the fact that segmentation results are closer than ever to the inter-expert variability, CNNs are not immune to producing anatomically inaccurate segmentations, even when built upon a shape prior. In this paper, we present a framework for producing cardiac image segmentation maps that are guaranteed to respect pre-defined anatomical criteria, while remaining within the inter-expert variability. The idea behind our method is to use a well-trained CNN, have it process cardiac images, identify the anatomically implausible results, and warp these results toward the closest anatomically valid cardiac shape. This warping procedure is carried out with a constrained variational autoencoder (cVAE) trained to learn a representation of valid cardiac shapes through a smooth, yet constrained, latent space. With this cVAE, we can project any implausible shape into the cardiac latent space and steer it toward the closest correct shape. We tested our framework on short-axis MRI as well as apical two- and four-chamber view ultrasound images, two modalities for which cardiac shapes are drastically different. With our method, CNNs can now produce results that are both within the inter-expert variability and always anatomically plausible, without having to rely on a shape prior.
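The projection-and-steering step can be pictured as in the sketch below: an implausible mask is encoded, its latent code is moved toward the closest code from a bank of known-valid shapes, and the walk stops as soon as the decoded result passes the anatomical checks. The encoder, decoder, latent bank, and plausibility test are all toy stand-ins for the trained cVAE and the anatomical criteria.

```python
# Hypothetical sketch of warping an implausible segmentation through a latent space.
import torch
import torch.nn as nn

torch.manual_seed(0)
H = W = 64
enc = nn.Sequential(nn.Flatten(), nn.Linear(H * W, 16))
dec = nn.Sequential(nn.Linear(16, H * W), nn.Sigmoid())

def is_plausible(mask):                       # stand-in for the anatomical criteria
    return bool(mask.mean() > 0.45)

valid_bank = torch.randn(500, 16)             # latent codes of known-valid shapes
bad_mask = torch.rand(1, 1, H, W)             # an anatomically implausible prediction

with torch.no_grad():
    z = enc(bad_mask)
    target = valid_bank[torch.cdist(z, valid_bank).argmin()]   # closest valid code
    for alpha in torch.linspace(0.0, 1.0, 11):                 # walk toward it
        candidate = dec((1 - alpha) * z + alpha * target).reshape(H, W)
        if is_plausible(candidate):
            break
print(float(alpha), candidate.shape)
```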


Subjects
Magnetic Resonance Imaging; Neural Networks, Computer; Heart/diagnostic imaging; Image Processing, Computer-Assisted; Ultrasonography
15.
Article in English | MEDLINE | ID: mdl-32746187

ABSTRACT

Segmentation of cardiac structures is one of the fundamental steps to estimate volumetric indices of the heart. This step is still performed semi-automatically in clinical routine and is, thus, prone to inter-observer and intra-observer variability. Recent studies have shown that deep learning has the potential to perform fully automatic segmentation. However, the current best solutions still suffer from a lack of robustness in terms of accuracy and number of outliers. The goal of this work is to introduce a novel network designed to improve the overall segmentation accuracy of left ventricular structures (endocardial and epicardial borders) while enhancing the estimation of the corresponding clinical indices and reducing the number of outliers. This network is based on a multi-stage framework where both the localization and segmentation steps are optimized jointly through an end-to-end scheme. Results obtained on a large open-access dataset show that our method outperforms the current best-performing deep learning solution with a lighter architecture, and achieved an overall segmentation error lower than the intra-observer variability for the epicardial border (i.e., on average, a mean absolute error of 1.5 mm and a Hausdorff distance of 5.1 mm) with 11% of outliers. Moreover, we demonstrate that our method can closely reproduce the expert analysis for the end-diastolic and end-systolic left ventricular volumes, with a mean correlation of 0.96 and a mean absolute error of 7.6 ml. Concerning the ejection fraction of the left ventricle, results are more contrasted, with a mean correlation coefficient of 0.83 and an absolute mean error of 5.0%, producing scores that are slightly below the intra-observer margin. Based on this observation, areas for improvement are suggested.
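As a small worked example of the clinical indices referenced above, the ejection fraction follows directly from the end-diastolic and end-systolic left-ventricular volumes derived from the segmented contours. The volumes used below are illustrative numbers, not study data.

```python
# Worked example: ejection fraction from end-diastolic and end-systolic volumes.
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """EF (%) = (EDV - ESV) / EDV * 100."""
    return (edv_ml - esv_ml) / edv_ml * 100.0

edv, esv = 120.0, 50.0                               # illustrative volumes in ml
print(f"EF = {ejection_fraction(edv, esv):.1f}%")    # EF = 58.3%
```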


Subjects
Deep Learning; Echocardiography/methods; Heart Ventricles/diagnostic imaging; Image Processing, Computer-Assisted/methods; Humans
16.
Front Aging Neurosci ; 11: 270, 2019.
Article in English | MEDLINE | ID: mdl-31632265

ABSTRACT

Recent evidence shows that neuroinflammation plays a role in many neurological diseases, including mild cognitive impairment (MCI) and Alzheimer's disease (AD), and that free water (FW) modeling from clinically acquired diffusion MRI (DTI-like acquisitions) can be sensitive to this phenomenon. The FW index measures the fraction of the diffusion signal explained by isotropically unconstrained water, as estimated from a bi-tensor model. In this study, we developed a simple but powerful whole-brain FW measure designed for easy translation to clinical settings and potential use as an a priori outcome measure in clinical trials. These simple FW measures use a "safe" white matter (WM) mask (WMsafe) near ventricles and sulci that is free of gray matter (GM)/CSF partial volume contamination. We investigated whether FW inside the WMsafe mask, computed across the whole white matter and both including and excluding areas of white matter damage such as white matter hyperintensities (WMHs) visible on T2 FLAIR, could be indicative of diagnostic grouping along the AD continuum. After careful quality control, 81 cognitively normal controls (NC), 103 subjects with MCI, and 42 with AD were selected from the ADNIGO and ADNI2 databases. We show that MCI and AD subjects have significantly higher FW measures even after removing all partial volume contamination. We also show, for the first time, that when WMHs are removed from the masks, the significant results are maintained, which demonstrates that the FW measures are not simply a byproduct of WMHs. Our new and simple FW measures can be used to increase our understanding of the role of inflammation-associated edema in AD and may aid in the differentiation of healthy subjects from MCI and AD patients.
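For readers unfamiliar with the bi-tensor model behind the FW index, the sketch below shows the forward signal model under which the free-water fraction f is fitted per voxel: a tissue tensor compartment plus an isotropic compartment with free-water diffusivity around 3e-3 mm²/s. The tensor and b-value used are illustrative, not study parameters.

```python
# Illustrative bi-tensor (free water) forward model: S = S0 * [(1-f) exp(-b g^T D g) + f exp(-b D_fw)].
import numpy as np

D_FW = 3.0e-3  # mm^2/s, isotropic free-water diffusivity at body temperature

def bi_tensor_signal(bval, bvec, d_tissue, f, s0=1.0):
    tissue = np.exp(-bval * bvec @ d_tissue @ bvec)   # anisotropic tissue compartment
    water = np.exp(-bval * D_FW)                      # isotropic free-water compartment
    return s0 * ((1.0 - f) * tissue + f * water)

d_tissue = np.diag([1.7e-3, 0.3e-3, 0.3e-3])          # a typical WM-like tensor
g = np.array([1.0, 0.0, 0.0])                          # gradient direction along the main axis
for f in (0.0, 0.2, 0.5):
    print(f, round(bi_tensor_signal(1000.0, g, d_tissue, f), 4))
```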

17.
Magn Reson Imaging ; 64: 37-48, 2019 12.
Article in English | MEDLINE | ID: mdl-31078615

ABSTRACT

Supervised machine learning (ML) algorithms have recently been proposed as an alternative to traditional tractography methods in order to address some of their weaknesses. They can be path-based and local-model-free, and can easily incorporate anatomical priors to make contextual and non-local decisions that should help the tracking process. ML-based techniques have thus shown promise in reconstructing a larger spatial extent of existing white matter bundles, producing fewer false positives, and being more robust to the known position and shape biases of current tractography techniques. But as of today, none of these ML-based methods has shown conclusive performance or been adopted as a de facto solution to tractography. One reason for this might be the lack of well-defined and extensive frameworks to train, evaluate, and compare these methods. In this paper, we describe several datasets and evaluation tools that contain useful features for ML algorithms, along with the various methods proposed in recent years. We then discuss the strategies used to evaluate and compare those methods, as well as their shortcomings. Finally, we describe the particular needs of ML tractography methods and discuss tangible solutions for future work.


Subjects
Brain/anatomy & histology; Diffusion Tensor Imaging/methods; Image Processing, Computer-Assisted/methods; Machine Learning; Algorithms; Humans
18.
PLoS One ; 14(2): e0211944, 2019.
Article in English | MEDLINE | ID: mdl-30794559

ABSTRACT

Tissue segmentation and classification in MRI is a challenging task due to a lack of signal intensity standardization. The MRI signal is dependent on the acquisition protocol, the coil profile, the scanner type, etc. While we can compute quantitative physical tissue properties independent of the hardware and the sequence parameters, it is still difficult to leverage these physical properties to segment and classify pelvic tissues. The proposed method integrates quantitative MRI values (T1 and T2 relaxation times and pure synthetic weighted images) and machine learning (a Support Vector Machine, SVM) to segment and classify tissues in the pelvic region, i.e., fat, muscle, prostate, bone marrow, bladder, and air. Twenty-two men with a mean age of 30±14 years were included in this prospective study. The images were acquired with a 3 Tesla MRI scanner. An inversion recovery-prepared turbo spin echo sequence was used to obtain T1-weighted images at different inversion times with a TR of 14000 ms. A 32-echo spin echo sequence was used to obtain the T2-weighted images at different echo times with a TR of 5000 ms. T1 and T2 relaxation times, synthetic T1- and T2-weighted images, and anatomical probabilistic maps were calculated and used as input features of an SVM for segmenting and classifying tissues within the pelvic region. The mean SVM classification accuracy across subjects was calculated for the different tissues: prostate (94.2%), fat (96.9%), muscle (95.8%), bone marrow (91%) and bladder (82.1%), indicating excellent classification performance. However, the segmentation and classification of air (within the rectum) may not always be successful (mean SVM accuracy 47.5%) due to the lack of air data in the training and testing sets. Our findings suggest that SVMs can reliably segment and classify tissues in the pelvic region.
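The per-voxel classification step described above reduces to fitting an SVM on feature vectors built from the quantitative maps (e.g., T1 and T2 relaxation times plus synthetic weighted intensities). The sketch below uses random stand-in features and labels purely to illustrate the pipeline shape; it is not the study's data or tuning.

```python
# Minimal sketch of per-voxel SVM tissue classification from quantitative MRI features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_voxels, n_features, n_tissues = 5000, 4, 6        # e.g., T1, T2, synthetic T1w, T2w
X = rng.normal(size=(n_voxels, n_features))          # stand-in quantitative features
y = rng.integers(0, n_tissues, size=n_voxels)         # fat, muscle, prostate, marrow, bladder, air

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.2f}")       # ~chance here, since features are random
```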


Subjects
Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging; Pelvis/diagnostic imaging; Support Vector Machine; Adult; Humans; Male; Prospective Studies
19.
IEEE Trans Med Imaging ; 38(9): 2198-2210, 2019 09.
Article in English | MEDLINE | ID: mdl-30802851

ABSTRACT

Delineation of the cardiac structures from 2D echocardiographic images is a common clinical task to establish a diagnosis. Over the past decades, the automation of this task has been the subject of intense research. In this paper, we evaluate how far state-of-the-art encoder-decoder deep convolutional neural network methods can go at assessing 2D echocardiographic images, i.e., segmenting cardiac structures and estimating clinical indices, on a dataset specifically designed for this objective. We therefore introduce the Cardiac Acquisitions for Multi-structure Ultrasound Segmentation (CAMUS) dataset, the largest publicly available and fully annotated dataset for the purpose of echocardiographic assessment. The dataset contains two- and four-chamber acquisitions from 500 patients, with reference measurements from one cardiologist on the full dataset and from three cardiologists on a fold of 50 patients. Results show that encoder-decoder-based architectures outperform state-of-the-art non-deep learning methods and faithfully reproduce the expert analysis for the end-diastolic and end-systolic left ventricular volumes, with a mean correlation of 0.95 and an absolute mean error of 9.5 ml. Concerning the ejection fraction of the left ventricle, results are more contrasted, with a mean correlation coefficient of 0.80 and an absolute mean error of 5.6%. Although these results are below the inter-observer scores, they remain slightly worse than the intra-observer ones. Based on this observation, areas for improvement are defined, which open the door to accurate and fully automatic analysis of 2D echocardiographic images.


Subjects
Deep Learning; Echocardiography/methods; Image Processing, Computer-Assisted/methods; Algorithms; Databases, Factual; Heart/diagnostic imaging; Humans
20.
IEEE J Biomed Health Inform ; 23(3): 1119-1128, 2019 05.
Article in English | MEDLINE | ID: mdl-30113903

ABSTRACT

In this paper, we present a novel convolutional neural network architecture to segment images from a series of short-axis cardiac magnetic resonance imaging (CMRI) slices. The proposed model is an extension of the U-Net that embeds a cardiac shape prior and involves a loss function tailored to the cardiac anatomy. Since the shape prior is computed offline only once, the execution of our model is not limited by its calculation. Our system takes as input raw magnetic resonance images, requires no manual preprocessing or image cropping, and is trained to segment the endocardium and epicardium of the left ventricle, the endocardium of the right ventricle, as well as the center of the left ventricle. With its multi-resolution grid architecture, the network learns both high- and low-level features useful to register the shape prior as well as to accurately localize the borders of the cardiac regions. Experimental results obtained on the Automated Cardiac Diagnosis Challenge - Medical Image Computing and Computer Assisted Intervention (ACDC-MICCAI) 2017 dataset show that our model segments multi-slice CMRI (left and right ventricle contours) in 0.18 s with an average Dice coefficient of [Formula: see text] and an average 3-D Hausdorff distance of [Formula: see text] mm.


Subjects
Cardiac Imaging Techniques/methods; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Neural Networks, Computer; Databases, Factual; Heart/diagnostic imaging; Humans