Results 1 - 20 of 24
2.
Front Artif Intell ; 5: 1059007, 2022.
Article in English | MEDLINE | ID: mdl-36483981

ABSTRACT

Cardiac computed tomography angiography (CTA) is an emerging imaging modality for assessing the coronary arteries as well as various cardiovascular structures. Recently, deep learning (DL) methods have been successfully applied to many medical image analysis applications, including cardiac CTA structure segmentation. However, DL requires large amounts of data and high-quality labels for training, which can be burdensome to obtain because labeling is labor-intensive. In this study, we aim to develop a fully automatic artificial intelligence (AI) system, named DeepHeartCT, for accurate and rapid DL-based cardiac CTA segmentation. The proposed system was trained using a large clinical dataset with computer-generated labels to segment various cardiovascular structures, including the left and right ventricles (LV, RV), the left and right atria (LA, RA), and the LV myocardium (LVM). This new system was trained directly on high-quality computer labels generated by our previously developed multi-atlas-based AI system. In addition, a reverse ranking strategy was proposed to assess segmentation quality in the absence of manual reference labels. This strategy allowed the new framework to assemble optimal computer-generated labels from a large dataset for effective training of a deep convolutional neural network (CNN). A large set of clinical cardiac CTA studies (n = 1,064) was used to train and validate our framework. The trained model was then tested on an independent dataset with manual labels (n = 60). The Dice score, Hausdorff distance, and mean surface distance were used to quantify segmentation accuracy. The proposed DeepHeartCT framework yields a high median Dice score of 0.90 [interquartile range (IQR), 0.90-0.91], a low median Hausdorff distance of 7 mm (IQR, 4-15 mm), and a low median mean surface distance of 0.80 mm (IQR, 0.57-1.29 mm) across all segmented structures. An additional experiment was conducted to evaluate the proposed DL-based AI framework trained with a small vs. a large dataset. The results show that our framework also performed well when trained on a small optimal training dataset (n = 110), with significantly reduced training time. These results demonstrate that the proposed DeepHeartCT framework provides accurate and rapid cardiac CTA segmentation that can be readily generalized to large-scale medical imaging applications.
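The Dice score used above is twice the overlap of two segmentations divided by their total size; a minimal sketch over voxel index sets (the helper and the toy masks are illustrative, not from the paper):

```python
def dice(a, b):
    """Dice similarity of two voxel index sets: 2|A∩B| / (|A| + |B|)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # both empty: perfect agreement by convention
    return 2 * len(a & b) / (len(a) + len(b))

# Toy example: predicted vs. reference segmentation voxels.
pred = {(0, 0), (0, 1), (1, 0)}
ref  = {(0, 1), (1, 0), (1, 1)}
print(dice(pred, ref))  # 2*2 / (3+3) = 0.666...
```

A score of 1.0 means perfect overlap; 0.90 as reported above indicates very strong agreement with the manual reference.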

3.
Front Aging Neurosci ; 14: 879453, 2022.
Article in English | MEDLINE | ID: mdl-35370626

ABSTRACT

[This corrects the article DOI: 10.3389/fnagi.2020.603179.].

4.
IEEE Access ; 9: 52796-52811, 2021.
Article in English | MEDLINE | ID: mdl-33996344

ABSTRACT

First-pass gadolinium-enhanced cardiovascular magnetic resonance (CMR) perfusion imaging allows fully quantitative pixel-wise myocardial blood flow (MBF) assessment, with proven diagnostic value for coronary artery disease. Segmental analysis requires manual segmentation of the myocardium. This work presents a fully automatic method of segmenting the left ventricular myocardium from MBF pixel maps, validated on a retrospective dataset of 247 clinical CMR perfusion studies, each including rest and stress images at three slice locations, acquired on a 1.5 T scanner. Pixel-wise MBF maps were segmented using an automated pipeline including region growing, edge detection, principal component analysis, and active contours to segment the myocardium, detect key landmarks, and divide the myocardium into sectors appropriate for analysis. Automated segmentation results were compared against a manually defined reference standard using three quantitative metrics: Dice coefficient, Cohen's kappa, and myocardial border distance. Sector-wise average MBF and myocardial perfusion reserve (MPR) were compared using Pearson's correlation coefficient and Bland-Altman plots. The proposed method segmented stress and rest MBF maps of 243 studies automatically. Automated and manual myocardial segmentation had an average (± standard deviation) Dice coefficient of 0.86 ± 0.06, Cohen's kappa of 0.86 ± 0.06, and Euclidean distances of 1.47 ± 0.73 mm and 1.02 ± 0.51 mm for the epicardial and endocardial border, respectively. Automated and manual sector-wise MBF and MPR values correlated with Pearson's coefficients of 0.97 and 0.92, respectively, while Bland-Altman analysis showed biases of 0.01 and 0.07 ml/g/min. The validated method has been integrated with our fully automated MBF pixel mapping pipeline to aid quantitative assessment of myocardial perfusion CMR.
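The Bland-Altman analysis used above reduces to the mean and spread of paired differences; a minimal sketch (the function name and sample values are illustrative):

```python
import math

def bland_altman(x, y):
    """Return (bias, lower, upper): the mean difference between paired
    measurements x and y, and the 95% limits of agreement (bias ± 1.96 SD)."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Illustrative automated vs. manual sector-wise MBF values (ml/g/min).
auto   = [1.1, 2.0, 2.9, 4.2]
manual = [1.0, 2.1, 3.0, 4.0]
bias, lo, hi = bland_altman(auto, manual)
```

A bias near zero, as reported above, indicates the automated measurements are not systematically shifted relative to the manual ones.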

5.
Front Aging Neurosci ; 12: 603179, 2020.
Article in English | MEDLINE | ID: mdl-33343337

ABSTRACT

Introduction: The goal of this study was to investigate and compare the classification performance of machine learning with behavioral data from standard neuropsychological tests, a cognitive task, or both. Methods: A neuropsychological battery and a simple 5-min cognitive task were administered to eight individuals with mild cognitive impairment (MCI), eight individuals with mild Alzheimer's disease (AD), and 41 demographically matched controls (CN). A fully connected multilayer perceptron (MLP) network and four supervised traditional machine learning algorithms were used. Results: The traditional machine learning algorithms achieved similar classification performance with neuropsychological or cognitive data. The MLP outperformed the traditional algorithms with the cognitive data (either alone or together with neuropsychological data), but not with the neuropsychological data alone. In particular, the MLP with a combination of summarized scores from the neuropsychological tests and the cognitive task achieved ~90% sensitivity and ~90% specificity. When the models were applied to an independent dataset, in which the participants were demographically different from those in the main dataset, high specificity was maintained (100%), but sensitivity dropped to 66.67%. Discussion: Deep learning with data from specific cognitive task(s) holds promise for assisting in the early diagnosis of Alzheimer's disease, but future work with a large and diverse sample is necessary to validate and improve this approach.
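The sensitivity and specificity figures above follow directly from confusion-matrix counts; a minimal sketch (the counts are hypothetical, chosen only to reproduce the 66.67% figure):

```python
def sensitivity(tp, fn):
    """True-positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

# e.g. detecting 2 of 3 affected individuals gives 66.67% sensitivity,
# while rejecting all 10 controls keeps specificity at 100%.
print(round(100 * sensitivity(tp=2, fn=1), 2))  # 66.67
print(100 * specificity(tn=10, fp=0))           # 100.0
```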

6.
Neurorehabil Neural Repair ; 34(12): 1078-1087, 2020 12.
Article in English | MEDLINE | ID: mdl-33150830

ABSTRACT

BACKGROUND: Wrist-worn accelerometry provides objective monitoring of upper-extremity functional use, such as reaching tasks, but also detects nonfunctional movements, leading to ambiguity in monitoring results. OBJECTIVE: To compare machine learning algorithms with standard methods (counts ratio) to improve accuracy in detecting functional activity. METHODS: Healthy controls and individuals with stroke performed unstructured tasks in a simulated community environment (test duration = 26 ± 8 minutes) while accelerometry and video were synchronously recorded. Human annotators scored each frame of the video as functional or nonfunctional activity, providing ground truth. Several machine learning algorithms were developed to separate functional from nonfunctional activity in the accelerometer data. We also calculated the counts ratio, which uses a thresholding scheme to calculate the duration of activity in the paretic limb normalized by the less-affected limb. RESULTS: The counts ratio was not significantly correlated with ground truth and had large errors (r = 0.48; P = .16; average error = 52.7%) because of high levels of nonfunctional movement in the paretic limb. Counts did not increase with increased functional movement. The best-performing intrasubject machine learning algorithm had an accuracy of 92.6% in the paretic limb of stroke patients, and the correlation with ground truth was r = 0.99 (P < .001; average error = 3.9%). The best intersubject model had an accuracy of 74.2% and a correlation of r = 0.81 (P = .005; average error = 5.2%) with ground truth. CONCLUSIONS: In our sample, the counts ratio did not accurately reflect functional activity. Machine learning algorithms were more accurate, and future work should focus on the development of a clinical tool.
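The counts ratio described above is a thresholding scheme: the time each limb's activity counts exceed a threshold, with the paretic limb normalized by the less-affected limb. A minimal sketch (the threshold and sample data are illustrative, not the study's actual parameters):

```python
def counts_ratio(paretic, less_affected, threshold=2):
    """Duration of supra-threshold activity in the paretic limb divided
    by that of the less-affected limb (epochs of equal length)."""
    active = lambda counts: sum(1 for c in counts if c > threshold)
    return active(paretic) / active(less_affected)

paretic       = [0, 3, 1, 5, 0, 0]  # activity counts per epoch
less_affected = [4, 3, 6, 5, 2, 7]
print(counts_ratio(paretic, less_affected))  # 2 active epochs / 5 -> 0.4
```

Because any supra-threshold movement counts as "active," nonfunctional movements inflate the numerator, which is the limitation the machine learning approach addresses.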


Subjects
Accelerometry/standards , Machine Learning , Stroke/diagnosis , Stroke/physiopathology , Upper Extremity/physiopathology , Accelerometry/methods , Adult , Aged , Female , Humans , Male , Middle Aged , Stroke Rehabilitation
7.
Comput Biol Med ; 125: 104019, 2020 10.
Article in English | MEDLINE | ID: mdl-33038614

ABSTRACT

Multi-atlas based segmentation is an effective technique that transforms a representative set of atlas images and labels into a target image for structural segmentation. However, a significant limitation of this approach is that the atlas and target images need to be similar in volume orientation, coverage, or acquisition protocol in order to prevent image misregistration and avoid segmentation failures. In this study, we aim to evaluate the impact of using a heterogeneous computed tomography angiography (CTA) dataset on the performance of a multi-atlas cardiac structure segmentation framework. We propose a generalized technique based on the Simple Linear Iterative Clustering (SLIC) supervoxel method to detect a bounding-box region enclosing the heart before subsequent cardiac structure segmentation. This technique enables our framework to process CTA datasets acquired with distinct imaging protocols and improves its segmentation accuracy and speed. In a four-way cross comparison based on 60 CTA studies from our institution and 60 CTA datasets from the Multi-Modality Whole Heart Segmentation MICCAI challenge, we show that the proposed framework performs well in segmenting seven different cardiac structures with interchangeable atlas and target datasets acquired under different imaging settings. Overall, our automated segmentation framework attains a median Dice score, mean distance, and Hausdorff distance of 0.88, 1.5 mm, and 9.69 mm across the entire datasets. The average processing time was 1.55 min for both datasets. Furthermore, this study shows that it is feasible to exploit heterogeneous datasets from different imaging protocols and institutions for accurate multi-atlas cardiac structure segmentation.
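The SLIC supervoxel step itself is beyond a short sketch, but once a rough binary heart mask is available (however produced), extracting the enclosing bounding-box region reduces to min/max coordinates; a minimal 2-D sketch with an illustrative mask:

```python
def bounding_box(mask):
    """Axis-aligned bounding box (rmin, rmax, cmin, cmax) enclosing the
    nonzero cells of a 2-D mask (inclusive bounds)."""
    coords = [(r, c) for r, row in enumerate(mask)
                     for c, v in enumerate(row) if v]
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return min(rows), max(rows), min(cols), max(cols)

mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
print(bounding_box(mask))  # (1, 2, 1, 2)
```

Cropping registration and label propagation to this box is what lets the framework tolerate differences in volume coverage between atlas and target.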


Subjects
Angiography , Computed Tomography Angiography , Algorithms , Heart/diagnostic imaging , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Thorax , Tomography, X-Ray Computed
8.
J Biomed Opt ; 25(9)2020 09.
Article in English | MEDLINE | ID: mdl-32996300

ABSTRACT

SIGNIFICANCE: Our study introduces an application of deep learning that virtually generates fluorescence images, reducing the cost and time burden of sample preparation associated with chemical fixation and staining. AIM: The objective of our work was to determine how well deep learning methods perform at fluorescence prediction, which depends on a structural and/or functional relationship between input and output labels. APPROACH: We present a virtual-fluorescence-staining method based on deep neural networks (VirFluoNet) to transform co-registered images of cells into subcellular compartment-specific molecular fluorescence labels in the same field of view. An algorithm based on conditional generative adversarial networks was developed and trained on microscopy datasets from breast-cancer and bone-osteosarcoma cell lines: MDA-MB-231 and U2OS, respectively. Several established performance metrics, the mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM), as well as a novel performance metric, the tolerance level, were measured and compared for the same algorithm and input data. RESULTS: For the MDA-MB-231 cells, the F-actin channel predicted fluorescent antibody staining of vinculin better than phase contrast did as an input. For the U2OS cells, satisfactory performance metrics were achieved in comparison with ground truth: MAE is <0.005/0.017/0.012, PSNR is >40/34/33 dB, and SSIM is >0.925/0.926/0.925 for 4',6-diamidino-2-phenylindole/Hoechst, endoplasmic reticulum, and mitochondria prediction, respectively, from channels of nucleoli and cytoplasmic RNA, Golgi plasma membrane, and F-actin. CONCLUSIONS: These findings contribute to the understanding of the utility and limitations of deep learning image regression for predicting fluorescence microscopy datasets of biological cells. We infer that predicted image labels must have a structural and/or functional relationship to input labels. Furthermore, the approach introduced here holds promise for modeling the internal spatial relationships between organelles and biomolecules within living cells, leading to detection and quantification of alterations from a standard training dataset.
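The MAE and PSNR metrics above have simple closed forms; a minimal sketch for images scaled to [0, 1], flattened to pixel lists (the data are illustrative):

```python
import math

def mae(a, b):
    """Mean absolute error between two equal-length pixel lists."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=1.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return 10 * math.log10(peak ** 2 / mse)

truth = [0.0, 0.5, 1.0, 0.25]   # ground-truth fluorescence pixels
pred  = [0.1, 0.5, 0.9, 0.25]   # predicted pixels
print(mae(truth, pred))          # 0.05
```

Lower MAE and higher PSNR both indicate predictions closer to the ground-truth stain; SSIM additionally weighs local structure and is more involved.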


Subjects
Image Processing, Computer-Assisted , Neural Networks, Computer , Optical Imaging , Organelles , Signal-To-Noise Ratio
9.
J Biomed Opt ; 25(2): 1-17, 2020 02.
Article in English | MEDLINE | ID: mdl-32072775

ABSTRACT

SIGNIFICANCE: We introduce an application of machine learning trained on optical phase features of epithelial and mesenchymal cells to grade cancer cells' morphologies, relevant to evaluation of cancer phenotype in screening assays and clinical biopsies. AIM: Our objective was to determine quantitative epithelial and mesenchymal qualities of breast cancer cells through an unbiased, generalizable, and linear score covering the range of observed morphologies. APPROACH: Digital holographic microscopy was used to generate phase height maps of noncancerous epithelial (Gie-No3B11) and fibroblast (human gingival) cell lines, as well as the MDA-MB-231 and MCF-7 breast cancer cell lines. Several machine learning algorithms were evaluated as binary classifiers of the noncancerous cells and then used, via transfer learning, to grade the cancer cells. RESULTS: Epithelial and mesenchymal cells were classified with 96% to 100% accuracy. Breast cancer cells had scores in between the noncancer scores, indicating both epithelial and mesenchymal morphological qualities. The MCF-7 cells skewed toward epithelial scores, while MDA-MB-231 cells skewed toward mesenchymal scores. Linear support vector machines (SVMs) produced the most distinct score distributions for each cell line. CONCLUSIONS: The proposed epithelial-mesenchymal score, derived from linear SVM learning, is a sensitive and quantitative approach for detecting epithelial and mesenchymal characteristics of unknown cells based on well-characterized cell lines. We establish a framework for rapid and accurate morphological evaluation of single cells and subtle phenotypic shifts in imaged cell populations.
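The idea of a linear score spanning the two reference classes can be illustrated without a full SVM: project a cell's feature vector onto the axis joining the two class means, scaled so the epithelial mean maps to 0 and the mesenchymal mean to 1. This is a simplified stand-in for the paper's SVM-derived decision score, with illustrative two-feature data:

```python
def class_mean(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def em_score(x, epithelial, mesenchymal):
    """Scalar epithelial-mesenchymal score: orthogonal projection of x
    onto the line through the two class means (0 = epithelial-like,
    1 = mesenchymal-like); a linear stand-in for an SVM decision score."""
    mu_e, mu_m = class_mean(epithelial), class_mean(mesenchymal)
    w = [m - e for m, e in zip(mu_m, mu_e)]
    num = sum(wi * (xi - ei) for wi, xi, ei in zip(w, x, mu_e))
    den = sum(wi * wi for wi in w)
    return num / den

epi = [[1.0, 0.0], [1.2, 0.2]]   # illustrative phase features per cell
mes = [[3.0, 2.0], [3.2, 2.2]]
print(em_score([2.1, 1.1], epi, mes))  # midway between the classes -> 0.5
```

A cancer cell scoring near 0.5, as in the toy call above, would exhibit mixed epithelial and mesenchymal qualities, matching the behavior reported for the breast cancer lines.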


Subjects
Breast Neoplasms/diagnostic imaging , Epithelial Cells/pathology , Fibroblasts/pathology , Holography/methods , Machine Learning , Mesenchymal Stem Cells/pathology , Algorithms , Female , Gingiva/cytology , Humans , MCF-7 Cells
10.
IEEE Access ; 8: 16187-16202, 2020.
Article in English | MEDLINE | ID: mdl-33747668

ABSTRACT

Contrast-enhanced cardiac computed tomography angiography (CTA) is a prominent imaging modality for diagnosing cardiovascular diseases non-invasively. It assists in evaluating coronary artery patency and provides a comprehensive assessment of the structural features of the heart and great vessels. However, physicians are often required to evaluate different cardiac structures and measure their size manually. This task is time-consuming and tedious due to the large number of image slices in 3D data. We present a fully automatic method based on a combined multi-atlas and corrective segmentation approach to label the heart and its associated cardiovascular structures. This method also automatically separates other surrounding intrathoracic structures from CTA images. Quantitative assessment of the proposed method is performed on 36 studies with a reference standard obtained from expert manual segmentation of various cardiac structures. Qualitative evaluation is also performed by expert readers who scored the automatic segmentation in 120 studies. The quantitative results showed an overall Dice score of 0.93, Hausdorff distance of 7.94 mm, and mean surface distance of 1.03 mm between automatically and manually segmented cardiac structures. The visual assessment also attained an excellent score for the automatic segmentation. The average processing time was 2.79 minutes. Our results indicate that the proposed automatic framework significantly improves the accuracy and computational speed of the conventional multi-atlas-based approach and provides comprehensive and reliable multi-structural segmentation of CTA images that is valuable for clinical application.
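In multi-atlas segmentation, after each atlas is registered to the target, a common fusion step assigns each voxel the most frequent label among the propagated atlas labels. A minimal majority-vote sketch (the label codes and toy data are illustrative; the paper's corrective step is not shown):

```python
from collections import Counter

def majority_vote(atlas_labels):
    """Fuse per-voxel labels from several registered atlases by majority
    vote; atlas_labels[i][v] is atlas i's label for voxel v."""
    n_vox = len(atlas_labels[0])
    fused = []
    for v in range(n_vox):
        votes = Counter(atlas[v] for atlas in atlas_labels)
        fused.append(votes.most_common(1)[0][0])
    return fused

# Three atlases labeling four voxels (0 = background, 1 = LV, 2 = RV).
atlases = [[0, 1, 1, 2],
           [0, 1, 2, 2],
           [1, 1, 1, 2]]
print(majority_vote(atlases))  # [0, 1, 1, 2]
```

Disagreements between atlases (like voxel 2 above) are resolved by the vote, which is why a larger, well-registered atlas set tends to improve robustness.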

11.
J Contam Hydrol ; 211: 26-38, 2018 04.
Article in English | MEDLINE | ID: mdl-29606374

ABSTRACT

In this paper, a method for extracting the behavioral parameters of bacterial migration based on the run-and-tumble conceptual model is described. The methodology is applied to microscopic images of the motile movement of flagellated Azotobacter vinelandii. The bacterial cells are considered to change direction during both runs and tumbles, as is evident from the movement trajectories. An unsupervised cluster analysis was performed to fractionate each bacterial trajectory into run and tumble segments, and the parameter distributions for each mode were then extracted by fitting the mathematical distributions that best represent the data. A Gaussian copula was used to model the autocorrelation in swimming velocity. For both run and tumble modes, the Gamma distribution was found to fit the marginal velocity best, and the Logistic distribution was found to represent the deviation angle better than the other distributions considered. For the transition-rate distributions of the run and tumble modes, the log-logistic and log-normal distributions, respectively, were found to perform better than the traditionally assumed exponential distribution. A model was then developed to mimic the motility behavior of bacteria in the presence of flow. The model was applied to evaluate its ability to describe observed patterns of bacterial deposition on surfaces in a micro-model experiment with an approach velocity of 200 µm/s. It was found that the model can qualitatively reproduce the attachment results of the micro-model setting.
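Splitting a trajectory into run and tumble segments can be illustrated with a tiny 1-D two-means clustering on frame-to-frame speeds (slow cluster = tumble, fast cluster = run). This is a simplified stand-in for the paper's unsupervised cluster analysis, with illustrative speed data:

```python
def two_means_1d(values, iters=50):
    """Cluster scalar speeds into two groups with a minimal 1-D k-means;
    returns (labels, centers), label 0 = slow/tumble, 1 = fast/run."""
    c = [min(values), max(values)]          # initial centers
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            groups[0 if abs(v - c[0]) <= abs(v - c[1]) else 1].append(v)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    labels = [0 if abs(v - c[0]) <= abs(v - c[1]) else 1 for v in values]
    return labels, c

speeds = [2, 3, 2, 25, 28, 26, 3, 2]  # µm/s along one trajectory
labels, centers = two_means_1d(speeds)
print(labels)  # [0, 0, 0, 1, 1, 1, 0, 0]  (tumble-run-tumble)
```

Once segments are labeled, per-mode velocity and deviation-angle samples can be collected and distributions fitted to each mode, as the abstract describes.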


Subjects
Azotobacter vinelandii/physiology , Models, Theoretical , Cluster Analysis , Flagella/physiology , Image Processing, Computer-Assisted , Movement , Soil Microbiology , Stochastic Processes
12.
Opt Express ; 25(13): 15043-15057, 2017 Jun 26.
Article in English | MEDLINE | ID: mdl-28788938

ABSTRACT

We propose a fully automatic technique to obtain aberration-free quantitative phase imaging in digital holographic microscopy (DHM) based on deep learning. Traditional DHM solves the phase-aberration compensation problem by manually detecting the background for quantitative measurement. This is a drawback for real-time implementation and for dynamic processes such as cell migration. A recent automatic aberration-compensation approach using principal component analysis (PCA) in DHM avoids human intervention regardless of the cells' motion. However, it corrects only spherical/elliptical aberration and disregards higher-order aberrations. Traditional image segmentation techniques can be employed to spatially detect cell locations, and automatic segmentation would ideally make real-time measurement possible. However, existing automatic unsupervised segmentation techniques perform poorly on DHM phase images because of aberrations and speckle noise. In this paper, we propose a novel method that combines a supervised deep-learning technique, a convolutional neural network (CNN), with Zernike polynomial fitting (ZPF). The CNN performs automatic background-region detection, which allows ZPF to compute the self-conjugated phase that compensates for most aberrations.
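The compensation step amounts to fitting a smooth surface to the phase values in the detected background region and subtracting it everywhere. A 1-D least-squares analogue, with a plain line fit standing in for the Zernike polynomial basis and illustrative data:

```python
def fit_line(xs, ys):
    """Closed-form least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Background phase samples carrying a tilt aberration of 0.1 rad/pixel.
bg_x = [0, 1, 2, 3, 8, 9]                  # CNN-detected background pixels
bg_phase = [0.0, 0.1, 0.2, 0.3, 0.8, 0.9]
a, b = fit_line(bg_x, bg_phase)

# Subtract the fitted aberration from the full phase profile (cell at x=4..7).
phase = [0.0, 0.1, 0.2, 0.3, 1.4, 1.5, 1.6, 1.7, 0.8, 0.9]
flat = [p - (a * x + b) for x, p in enumerate(phase)]
```

After subtraction the background is flat at zero and the cell retains its true phase step, which is the effect ZPF achieves in 2-D with a Zernike basis.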


Subjects
Holography/methods , Machine Learning , Neural Networks, Computer , Algorithms , Microscopy
13.
J Cardiovasc Magn Reson ; 18: 17, 2016 Apr 08.
Article in English | MEDLINE | ID: mdl-27055445

ABSTRACT

BACKGROUND: Quantitative assessment of myocardial blood flow (MBF) with first-pass perfusion cardiovascular magnetic resonance (CMR) requires a measurement of the arterial input function (AIF). This study presents an automated method to improve objectivity and reduce processing time when measuring the AIF from first-pass perfusion CMR images. This automated method is used to compare the impact of different AIF measurements on MBF quantification. METHODS: Gadolinium-enhanced perfusion CMR was performed on a 1.5 T scanner using a saturation-recovery dual-sequence technique. Rest and stress perfusion series from 270 clinical studies were analyzed. Automated image processing steps included motion correction, intensity correction, detection of the left ventricle (LV), independent component analysis, and LV pixel thresholding to calculate the AIF signal. The results were compared with manual reference measurements using several quality metrics based on the contrast-enhancement and timing characteristics of the AIF. The median and the 95% confidence interval (CI) of the median were reported. Finally, MBF was calculated and compared in a subset of 21 clinical studies using the automated and manual AIF measurements. RESULTS: Two clinical studies were excluded from the comparison due to a congenital heart defect in one and a contrast administration issue in the other. The proposed method successfully processed 99.63% of the remaining image series. Manual and automatic AIF time-signal intensity curves were strongly correlated, with a median correlation coefficient of 0.999 (95% CI [0.999, 0.999]). The automated method effectively selected bright LV pixels, excluded papillary muscles, and required less processing time than the manual approach. There was no significant difference in MBF estimates between manually and automatically measured AIFs (p = NS). However, different region-of-interest sizes in the LV cavity could change the AIF measurement and affect the MBF calculation (p = NS to p = 0.03). CONCLUSION: The proposed automatic method produced AIFs similar to those of the reference manual method but required less processing time and was more objective. The automated algorithm may improve AIF measurement from first-pass perfusion CMR images and make quantitative myocardial perfusion analysis more robust and readily available.
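The LV pixel-thresholding step keeps only the brightest blood-pool pixels and averages their time curves to form the AIF; a minimal sketch (the threshold fraction, function name, and curves are illustrative, not the study's actual parameters):

```python
def aif_from_curves(curves, frac=0.8):
    """Average the time-signal curves of pixels whose peak intensity is
    at least `frac` of the brightest pixel's peak, excluding dim pixels
    such as papillary muscle."""
    top = max(max(c) for c in curves)
    kept = [c for c in curves if max(c) >= frac * top]
    n = len(kept)
    return [sum(c[t] for c in kept) / n for t in range(len(kept[0]))]

# Three LV-region pixel curves; the last is a dim papillary-muscle pixel.
curves = [[10, 100, 50],
          [10, 90, 50],
          [5, 40, 20]]
print(aif_from_curves(curves))  # [10.0, 95.0, 50.0]
```

Changing `frac` changes which pixels enter the average, which mirrors the abstract's observation that the region-of-interest choice can alter the AIF and hence the MBF calculation.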


Subjects
Coronary Circulation , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging , Myocardial Perfusion Imaging/methods , Algorithms , Automation , Contrast Media , Gadolinium DTPA , Humans , Predictive Value of Tests , Reproducibility of Results , Retrospective Studies , Workflow
14.
Neuroimage ; 124(Pt B): 1125-1130, 2016 Jan 01.
Article in English | MEDLINE | ID: mdl-26048622

ABSTRACT

The NIH MRI Study of Normal Brain Development sought to characterize typical brain development in a population of infants, toddlers, children, and adolescents/young adults, covering the socio-economic and ethnic diversity of the population of the United States. The study began in 1999, with data collection commencing in 2001 and concluding in 2007. The study was designed with the final goal of providing a controlled-access database, open to qualified researchers and clinicians, that could serve as a powerful tool for elucidating typical brain development and identifying deviations associated with brain-based disorders and diseases, and as a resource for developing computational methods and image-processing tools. This paper focuses on the DTI component of the study. We describe the DTI data acquisition protocols, data processing steps, quality assessment procedures, and data included in the database, along with database access requirements. For more details, visit http://www.pediatricmri.nih.gov. This longitudinal DTI dataset includes raw and processed diffusion data from 498 low-resolution (3 mm) DTI datasets from 274 unique subjects and 193 high-resolution (2.5 mm) DTI datasets from 152 unique subjects. Subjects range in age from 10 days (from date of birth) through 22 years. Additionally, a set of age-specific DTI templates is included. The DTI data form one component of the larger study, which also includes T1-, T2-, and proton-density-weighted imaging, proton magnetic resonance spectroscopy (MRS) data, and demographic, clinical, and behavioral data.


Subjects
Brain/growth & development , Diffusion Tensor Imaging/methods , Adolescent , Brain/anatomy & histology , Brain Diseases/pathology , Child , Child, Preschool , Databases, Factual , Ethnicity , Humans , Image Processing, Computer-Assisted , Infant , Infant, Newborn , Information Dissemination , Longitudinal Studies , National Institutes of Health (U.S.) , Quality Control , Reference Values , Socioeconomic Factors , United States , Young Adult
15.
J Microsc ; 260(2): 180-93, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26224257

ABSTRACT

This paper investigates a postprocessing approach to correct spatial distortion in two-photon fluorescence microscopy images for vascular network reconstruction. It is aimed at large field-of-view, deep-tissue in vivo studies of vascular structures. Based on simple geometric modelling of the object of interest, a distortion function is estimated directly from the image volume by deconvolution analysis. This distortion function is then applied to subvolumes of the image stack to adaptively adjust for spatially varying distortion and to reduce image blurring through blind deconvolution. The proposed technique was first evaluated in phantom imaging of fluorescent microspheres comparable in size to the underlying capillary structures. The effectiveness of restoring the three-dimensional (3D) spherical geometry of the microspheres using the estimated distortion function was compared against an empirically measured point-spread function. Next, the approach was applied to in vivo vascular imaging of mouse skeletal muscle to reduce the distortion of the capillary structures. We show that the proposed method effectively improves image quality and reduces the spatially varying distortion that occurs in large field-of-view, deep-tissue vascular datasets. The method will aid qualitative interpretation and quantitative analysis of vascular structures from fluorescence microscopy images.
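Deconvolution with an estimated distortion function is typically carried out with an iterative scheme; a 1-D Richardson-Lucy sketch, a generic stand-in for the paper's blind-deconvolution step (the kernel and point-like "bead" signal are illustrative):

```python
def conv(x, k):
    """'Same'-size 1-D convolution of signal x with an odd-length kernel k,
    zero-padded at the edges."""
    h = len(k) // 2
    out = []
    for i in range(len(x)):
        s = 0.0
        for j, kj in enumerate(k):
            idx = i + j - h
            if 0 <= idx < len(x):
                s += kj * x[idx]
        out.append(s)
    return out

def richardson_lucy(observed, psf, iters=50):
    """Classic Richardson-Lucy deconvolution with a known PSF:
    est <- est * ((observed / (est*psf)) * psf_flipped)."""
    est = [1.0] * len(observed)
    psf_flip = psf[::-1]
    eps = 1e-12
    for _ in range(iters):
        blurred = conv(est, psf)
        ratio = [o / (b + eps) for o, b in zip(observed, blurred)]
        corr = conv(ratio, psf_flip)
        est = [e * c for e, c in zip(est, corr)]
    return est

psf = [0.25, 0.5, 0.25]            # illustrative blur kernel
truth = [0, 0, 0, 4, 0, 0, 0]      # a point-like fluorescent microsphere
observed = conv(truth, psf)        # blurred measurement
restored = richardson_lucy(observed, psf)
```

The iterations progressively concentrate the blurred response back toward the point source, which is how restoring the spherical geometry of the phantom microspheres can be quantified.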


Subjects
Microscopy, Fluorescence , Microvessels/ultrastructure , Algorithms , Animals , Artifacts , Equipment Design , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Intravital Microscopy/methods , Mice , Microscopy, Fluorescence/instrumentation , Microscopy, Fluorescence/methods , Models, Theoretical , Muscle, Skeletal/blood supply , Muscle, Skeletal/ultrastructure , Phantoms, Imaging , Photons , Reproducibility of Results
16.
Magn Reson Med ; 68(5): 1654-63, 2012 Nov.
Article in English | MEDLINE | ID: mdl-22287298

ABSTRACT

Physiological noise artifacts, especially those originating from cardiac pulsation and subject motion, are common in clinical diffusion tensor MRI acquisitions. Previous work shows that the signal perturbations produced by artifacts can be severe, and neglecting to account for their contribution can result in erroneous diffusion tensor values. The Robust Estimation of Tensors by Outlier Rejection (RESTORE) method has been shown to be an effective strategy for improving tensor estimation on a voxel-by-voxel basis in the presence of artifactual data points in diffusion-weighted images. In this article, we address potential instabilities that may arise when using RESTORE and propose practical constraints to improve its usability. Moreover, we introduce a method, called informed RESTORE, designed to remove physiological noise artifacts in datasets acquired with low redundancy (fewer than 30-40 diffusion-weighted image volumes), a condition in which the original RESTORE algorithm may converge to an incorrect solution. This new method is based on the notion that physiological noise is more likely to result in signal dropouts than signal increases. Results from both Monte Carlo simulations and clinical diffusion data indicate that informed RESTORE performs very well in removing physiological noise artifacts from low-redundancy diffusion-weighted image datasets.
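The key idea of informed RESTORE, rejecting signal dropouts (unusually low values) rather than outliers on both sides, can be illustrated with a one-sided robust mean. This is a toy stand-in for the full iterative tensor fit, with illustrative signal values:

```python
import statistics

def robust_mean_dropout(values, k=3.0):
    """Iteratively re-estimate the mean, discarding only points more than
    k standard deviations BELOW it (signal dropouts); unusually high
    values are kept, mirroring informed RESTORE's one-sided rejection."""
    kept = list(values)
    while True:
        m = statistics.mean(kept)
        s = statistics.stdev(kept)
        survivors = [v for v in kept if v >= m - k * s]
        if len(survivors) == len(kept):
            return m, kept
        kept = survivors

# Repeated diffusion-weighted signals with one cardiac-pulsation dropout.
signals = [100, 98, 101, 99, 40, 102]
m, kept = robust_mean_dropout(signals, k=2.0)
print(round(m), kept)  # 100 [100, 98, 101, 99, 102]
```

With few measurements, a symmetric rejection rule could just as easily discard the good points; constraining rejection to dropouts encodes the physical prior stated in the abstract.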


Subjects
Algorithms , Artifacts , Brain/anatomy & histology , Databases, Factual , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Humans , Reproducibility of Results , Sensitivity and Specificity , Signal-To-Noise Ratio
17.
Neuroimage ; 54(2): 1168-77, 2011 Jan 15.
Article in English | MEDLINE | ID: mdl-20804850

ABSTRACT

The goal of this study is to characterize the potential effect of artifacts originating from physiological noise on statistical analysis of diffusion tensor MRI (DTI) data in a population. DTI-derived quantities, including mean diffusivity (Trace(D)), fractional anisotropy (FA), and the principal eigenvector (ε(1)), are computed in the brains of 40 healthy subjects from tensors estimated using two different methods: conventional nonlinear least squares and robust fitting (RESTORE). RESTORE identifies artifactual data points as outliers and excludes them on a voxel-by-voxel basis. We found that outlier data points are localized in specific spatial clusters in the population, indicating consistency in the brain regions affected across subjects. In brain parenchyma, RESTORE slightly reduces the inter-subject variance of FA and Trace(D). The dominant effect of artifacts, however, is bias. Voxel-wise analysis indicates that inclusion of outlier data points results in clusters of under- and over-estimation of FA, while Trace(D) is always over-estimated. Removing outliers affects ε(1) mostly in low-anisotropy regions. Brain regions known to be affected by cardiac pulsation (the cerebellum and the genu of the corpus callosum), as well as a region not previously reported (the splenium of the corpus callosum), show significant effects in the population analysis. It is generally assumed that the statistical properties of DTI data are homogeneous across the brain; based on these results, that assumption does not appear to be valid. The use of RESTORE can lead to a more accurate evaluation of a population and help reduce spurious findings that may occur due to artifacts in DTI data.
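The FA and Trace(D) quantities analyzed above are simple functions of the three tensor eigenvalues; a minimal sketch (the eigenvalue sets are illustrative):

```python
import math

def trace_d(l1, l2, l3):
    """Trace of the diffusion tensor (three times the mean diffusivity)."""
    return l1 + l2 + l3

def fa(l1, l2, l3):
    """Fractional anisotropy from the three tensor eigenvalues:
    sqrt(1/2) * sqrt(sum of squared pairwise differences / sum of squares)."""
    num = (l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return math.sqrt(0.5 * num / den)

print(fa(1.0, 1.0, 1.0))            # 0.0, isotropic diffusion
print(round(fa(1.7, 0.2, 0.2), 2))  # 0.87, strongly anisotropic
```

Because FA mixes all three eigenvalues nonlinearly, artifactual outliers can bias it in either direction, while Trace(D), a plain sum, is biased upward by signal dropouts, consistent with the findings above.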


Subjects
Artifacts , Brain Mapping/methods , Diffusion Magnetic Resonance Imaging , Image Processing, Computer-Assisted/methods , Adult , Anisotropy , Female , Humans
18.
Magn Reson Med ; 60(2): 496-501, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18666108

ABSTRACT

The longitudinal relaxation time, T(1), can be estimated from two or more spoiled gradient recalled echo (SPGR) images acquired with different flip angles and/or repetition times (TRs). The function relating signal intensity to flip angle and TR is nonlinear; however, a linear form proposed 30 years ago is still widely used. Here we show that this linear method provides T(1) estimates with similar precision but lower accuracy than those obtained with a nonlinear method. We also show that T(1) estimated by the linear method is biased due to improper accounting for noise in the fitting. This bias can be significant for clinical SPGR images; for example, T(1) estimated in brain tissue (800 ms < T(1) < 1600 ms) can be overestimated by 10% to 20%. We propose a weighting scheme that correctly accounts for the noise contribution in the fitting procedure. Monte Carlo simulations of SPGR experiments are used to evaluate the accuracy of T(1) estimated with the widely used linear, the proposed weighted-uncertainty linear, and the nonlinear methods. We show that the linear method with weighted uncertainties reduces the bias of the linear method, providing T(1) estimates comparable in precision and accuracy to those of the nonlinear method while significantly reducing computation time.
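The linear form referred to above rewrites the SPGR signal equation S = M0 (1 − E1) sin α / (1 − E1 cos α), with E1 = exp(−TR/T1), as a line: S/sin α = E1 · (S/tan α) + M0 (1 − E1), so T1 = −TR / ln(slope). A minimal noiseless sketch (the flip angles, TR, and T1 are illustrative; on noisy data this plain fit exhibits the bias the abstract describes):

```python
import math

def spgr(m0, t1, tr, alpha):
    """SPGR signal equation for flip angle alpha (radians)."""
    e1 = math.exp(-tr / t1)
    return m0 * (1 - e1) * math.sin(alpha) / (1 - e1 * math.cos(alpha))

def t1_linear(signals, alphas, tr):
    """Linear T1 fit: regress S/sin(a) on S/tan(a); slope = exp(-TR/T1)."""
    xs = [s / math.tan(a) for s, a in zip(signals, alphas)]
    ys = [s / math.sin(a) for s, a in zip(signals, alphas)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
    return -tr / math.log(slope)

tr, t1 = 10.0, 1000.0                      # ms
alphas = [math.radians(a) for a in (3, 8, 15)]
sig = [spgr(1.0, t1, tr, a) for a in alphas]
print(round(t1_linear(sig, alphas, tr)))   # 1000, exact on noiseless data
```

The weighting scheme proposed in the paper modifies this regression so that the noise on each transformed point (S/sin α, S/tan α) is properly accounted for, removing the overestimation bias.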


Subjects
Algorithms; Brain/anatomy & histology; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Data Interpretation, Statistical; Humans; Least-Squares Analysis; Linear Models; Reproducibility of Results; Sensitivity and Specificity
19.
IEEE Trans Med Imaging ; 27(6): 834-46, 2008 Jun.
Article in English | MEDLINE | ID: mdl-18541490

ABSTRACT

Diffusion tensor magnetic resonance imaging (DT-MRI) is capable of providing quantitative insights into tissue microstructure in the brain. An important piece of information offered by DT-MRI is the directional preference of diffusing water molecules within a voxel. Building upon this local directional information, DT-MRI tractography attempts to construct global connectivity of white matter tracts. The interplay between local directional information and global structural information is crucial in understanding changes in tissue microstructure as well as in white matter tracts. To this end, the right circular cone of uncertainty was proposed by Basser as a local measure of tract dispersion. Recent experimental observations by Jeong et al. and Lazar et al. that the cones of uncertainty in the brain are mostly elliptical motivate the present study to investigate analytical approaches to quantify their findings. Two analytical approaches for constructing the elliptical cone of uncertainty, based on the first-order matrix perturbation and the error propagation method via diffusion tensor representations, are presented and their theoretical equivalence is established. We propose two normalized measures, circumferential and areal, to quantify the uncertainty of the major eigenvector of the diffusion tensor. We also describe a new technique of visualizing the cone of uncertainty in 3-D.
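One empirical route to an elliptical cone of uncertainty like the one described above (e.g. from bootstrap replicates of the fitted tensor) is to project the replicate major eigenvectors onto the plane perpendicular to their mean direction and take the 2×2 covariance of the projections; under a small-angle approximation, the square roots of its eigenvalues are the semi-axes of the dispersion ellipse. This sketch is an assumed, empirical stand-in for the paper's analytical perturbation and error-propagation constructions:

```python
import numpy as np

def major_eigvec(D):
    """Major (principal) eigenvector of a 3x3 symmetric diffusion tensor."""
    w, V = np.linalg.eigh(D)
    return V[:, np.argmax(w)]

def elliptical_cone(eigvecs):
    """Empirical elliptical cone of uncertainty from replicate major
    eigenvectors (shape (n, 3)): returns the mean direction and the two
    semi-axes (in radians, small-angle approximation) of the dispersion
    ellipse in the plane perpendicular to the mean."""
    mean = eigvecs.mean(axis=0)
    mean /= np.linalg.norm(mean)
    # orthonormal basis of the plane perpendicular to the mean direction
    a = np.array([1.0, 0.0, 0.0]) if abs(mean[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(mean, a); e1 /= np.linalg.norm(e1)
    e2 = np.cross(mean, e1)
    # eigenvectors are defined up to sign: align each replicate with the mean
    signs = np.sign(eigvecs @ mean)
    proj = (eigvecs * signs[:, None]) @ np.column_stack([e1, e2])
    cov = np.cov(proj.T)
    axes = np.sqrt(np.linalg.eigvalsh(cov))  # semi-axes, ascending order
    return mean, axes
```

The ratio of the two semi-axes quantifies how far the cone departs from the circular case, and the covariance eigenvectors give the ellipse orientation for 3-D visualization.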


Subjects
Algorithms; Brain/anatomy & histology; Diffusion Magnetic Resonance Imaging/methods; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Diffusion Magnetic Resonance Imaging/standards; Humans; Reference Values; Reproducibility of Results; Sensitivity and Specificity
20.
IEEE Trans Med Imaging ; 26(11): 1576-84, 2007 Nov.
Article in English | MEDLINE | ID: mdl-18041272

ABSTRACT

One aim of this work is to investigate the feasibility of using a hierarchy of models to describe diffusion tensor magnetic resonance (MR) data in fixed tissue. Parsimonious model selection criteria are used to choose among different models of diffusion within tissue. Using this information, we assess whether we can perform simultaneous tissue segmentation and classification. Both numerical phantoms and diffusion weighted imaging (DWI) data obtained from excised pig spinal cord are used to test and validate this model selection framework. Three hierarchical approaches are used for parsimonious model selection: the Schwarz criterion (SC), the F-test t-test (F-t), proposed by Hext, and the F-test F-test (F-F), adapted from Snedecor. The F-t approach is more robust than the others for selecting between isotropic and general anisotropic (full tensor) models. However, due to its high sensitivity to the variance estimate and bias in sorting eigenvalues, the F-F and SC are preferred for segmenting models with transverse isotropy (cylindrical symmetry). Additionally, the SC method is easier to implement than the F-t and F-F methods and has better performance. As such, this approach can be efficiently used for evaluating large MRI data sets. In addition, the proposed voxel-by-voxel segmentation framework is not susceptible to artifacts caused by the inhomogeneity of the variance in neighboring voxels with different degrees of anisotropy, which might contaminate segmentation results obtained with the techniques based on voxel averaging.
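For a least-squares fit with Gaussian errors, the Schwarz criterion used above takes the form SC = n·ln(RSS/n) + k·ln(n) (lower is better), and selection amounts to evaluating it for each member of the model hierarchy. A minimal sketch with an isotropic-vs-full-tensor hierarchy only; the function names and design-matrix layout are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def schwarz_criterion(rss, n, k):
    """Schwarz criterion (BIC) for an n-point least-squares fit with k free
    parameters; the k*ln(n) term penalizes model complexity."""
    return n * np.log(rss / n) + k * np.log(n)

def select_tensor_model(y, designs):
    """Pick the most parsimonious model for log-signal data y.
    `designs` maps a model name to (design matrix, number of free
    parameters); returns the lowest-SC name and all scores."""
    n = len(y)
    scores = {}
    for name, (X, k) in designs.items():
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ beta) ** 2))
        scores[name] = schwarz_criterion(max(rss, 1e-12), n, k)
    return min(scores, key=scores.get), scores
```

Because the criterion only needs each model's residual sum of squares and parameter count, it can be evaluated voxel by voxel over large data sets, which matches the efficiency argument made above for the SC method.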


Subjects
Diffusion Magnetic Resonance Imaging/methods; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Information Storage and Retrieval/methods; Pattern Recognition, Automated/methods; Spinal Cord/anatomy & histology; Algorithms; Animals; Artificial Intelligence; In Vitro Techniques; Models, Neurological; Models, Statistical; Reproducibility of Results; Sensitivity and Specificity; Swine