Results 1 - 20 of 23
1.
BMC Med Imaging; 23(1): 148, 2023 Oct 02.
Article in English | MEDLINE | ID: mdl-37784039

ABSTRACT

PURPOSE: During the acquisition of MRI data, patient-, sequence-, or hardware-related factors can introduce artefacts that degrade image quality. Four of the most significant tasks for improving MRI image quality have been bias field correction, super-resolution, motion correction, and noise correction. Machine learning has achieved outstanding results in improving MR image quality for these tasks individually, yet multi-task methods are rarely explored. METHODS: In this study, we developed a model to simultaneously correct for all four aforementioned artefacts using multi-task learning. Two datasets were collected, one of brain scans and the other of pelvic scans, and these were used to train separate models with their corresponding artefact augmentations. Additionally, we explored a novel loss function that aims to reconstruct not only the individual pixel values but also the image gradients, to produce sharper, more realistic results. The differences between the evaluated methods were tested for significance using a Friedman test of equivalence followed by a Nemenyi post-hoc test. RESULTS: Our proposed model generally outperformed other commonly used correction methods for individual artefacts, consistently achieving equal or superior results in at least one of the evaluation metrics. For images with multiple simultaneous artefacts, we show that the performance of a combination of models, each trained to correct a single artefact, depends heavily on the order in which they are applied; this is not an issue for our proposed multi-task model. The model trained using our novel convolutional loss function always outperformed the model trained with a mean squared error loss when evaluated using Visual Information Fidelity, a quality metric connected to perceptual quality. CONCLUSION: We trained two models for multi-task MRI artefact correction of brain and pelvic scans. We used a novel loss function that significantly improves the image quality of the outputs over using mean squared error. The approach performs well on real-world data, and it provides insight into which artefacts it detects and corrects for. Our proposed model and source code were made publicly available.
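
A minimal sketch of the kind of loss described above, assuming the image gradients are approximated with finite differences; the exact form of the authors' convolutional loss and the weighting factor lambda_grad are assumptions, not the published implementation.

    # Hedged sketch: pixel-wise MSE plus MSE on finite-difference image gradients,
    # approximating the idea of reconstructing both pixel values and gradients.
    import torch
    import torch.nn.functional as F

    def pixel_and_gradient_loss(pred, target, lambda_grad=1.0):
        """MSE on intensities plus MSE on image gradients (N, C, H, W tensors)."""
        mse = F.mse_loss(pred, target)
        dx = F.mse_loss(pred[..., :, 1:] - pred[..., :, :-1],
                        target[..., :, 1:] - target[..., :, :-1])
        dy = F.mse_loss(pred[..., 1:, :] - pred[..., :-1, :],
                        target[..., 1:, :] - target[..., :-1, :])
        return mse + lambda_grad * (dx + dy)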


Subjects
Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Machine Learning; Neuroimaging; Software; Artifacts
2.
Z Med Phys; 2023 Aug 01.
Article in English | MEDLINE | ID: mdl-37537099

ABSTRACT

The use of synthetic CT (sCT) in the radiotherapy workflow would reduce costs and scan time while removing the uncertainties around working with both MR and CT modalities. The performance of deep learning (DL) solutions for sCT generation is steadily increasing; however, most proposed methods were trained and validated on private datasets of a single contrast from a single scanner. Such solutions might not perform equally well on other datasets, limiting their general usability and therefore their value. Additionally, functional evaluations of sCTs, such as dosimetric comparisons with CT-based dose calculations, better show the impact of the methods, but these evaluations are more labor-intensive than pixel-wise metrics. To improve the generalization of an sCT model, we propose to incorporate a pre-trained DL model that pre-processes the input MR images by generating artificial proton density, T1, and T2 maps (i.e., contrast-independent quantitative maps), which are then used for sCT generation. Using a dataset of only T2w MR images, the robustness towards input MR contrast of this approach is compared to that of a model trained using the MR images directly. We evaluate the generated sCTs using pixel-wise metrics and by calculating mean radiological depths, as an approximation of the mean delivered dose. On T2w images acquired with the same settings as the training dataset, there was no significant difference between the performance of the models. However, when evaluated on T1w images, and on a wide range of other contrasts and scanners from both public and private datasets, our approach outperforms the baseline model. In summary, using a dataset of T2w MR images, our proposed model uses synthetic quantitative maps to generate sCT images, improving the generalization towards other contrasts. Our code and trained models are publicly available.
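
A minimal sketch of the two-stage pipeline described above; the model objects and their predict interfaces are hypothetical placeholders, not the released code.

    # Hedged sketch: quantitative maps are predicted first, then used as the only
    # input to the sCT generator, making the pipeline less dependent on MR contrast.
    def mr_to_sct(mr_image, qmap_model, sct_model):
        pd_map, t1_map, t2_map = qmap_model.predict(mr_image)   # artificial PD/T1/T2 maps
        return sct_model.predict([pd_map, t1_map, t2_map])      # contrast-independent input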

3.
Magn Reson Med; 90(6): 2557-2571, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37582257

ABSTRACT

PURPOSE: To mitigate the problem of noisy parameter maps with high uncertainties by casting parameter mapping as a denoising task based on Deep Image Priors. METHODS: We extend the concept of denoising with a Deep Image Prior (DIP) to parameter mapping by treating the output of an image-generating network as a parametrization of tissue parameter maps. The method implicitly denoises the parameter mapping process by filtering low-level image features with an untrained convolutional neural network (CNN). Our implementation includes uncertainty estimation from Bernoulli approximate variational inference, implemented with MC dropout, which provides model uncertainty in each voxel of the denoised parameter maps. The method is modular, so the specifics of different applications (e.g., T1 mapping) separate into application-specific signal equation blocks. We evaluate the method on variable flip angle T1 mapping, multi-echo T2 mapping, and apparent diffusion coefficient mapping. RESULTS: We found that the deep image prior adapts successfully to several applications in parameter mapping. In all evaluations, the method produces noise-reduced parameter maps with decreased uncertainty compared to conventional methods. The downsides of the proposed method are the long computational time and the introduction of some bias from the denoising prior. CONCLUSION: DIP successfully denoises the parameter mapping process and applies to several applications with limited hyperparameter tuning. Further, it is easy to implement, since DIP methods do not use network training data. Although time-consuming, uncertainty information from MC dropout makes the method more robust and provides useful information when properly calibrated.
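
An illustrative sketch of how an application-specific signal equation block can be combined with an untrained network in a DIP setting, using the standard variable flip angle (spoiled gradient-echo) model; the network `net`, its fixed random input `z`, and all tensor shapes are assumptions, not the paper's implementation.

    # Hedged sketch: the untrained CNN parametrises M0 and T1 maps, and the VFA
    # signal equation maps them to predicted images that are compared to the data.
    import torch

    def vfa_signal(m0, t1, flip_angles_rad, tr):
        """Spoiled GRE model: S = M0 * sin(a) * (1 - E1) / (1 - E1 * cos(a))."""
        e1 = torch.exp(-tr / t1)
        return torch.stack([m0 * torch.sin(a) * (1.0 - e1) / (1.0 - e1 * torch.cos(a))
                            for a in flip_angles_rad])   # flip angles as a 1-D tensor (rad)

    def dip_step(net, z, measured, flip_angles_rad, tr, optimizer):
        """One DIP iteration: the maps are generated from a fixed random input z."""
        optimizer.zero_grad()
        m0, t1 = net(z)                     # network output parametrises the maps
        pred = vfa_signal(m0, t1, flip_angles_rad, tr)
        loss = torch.mean((pred - measured) ** 2)
        loss.backward()
        optimizer.step()
        return loss.item()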


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Image Processing, Computer-Assisted/methods; Uncertainty; Bayes Theorem; Signal-To-Noise Ratio
4.
Elife; 12, 2023 Jan 18.
Article in English | MEDLINE | ID: mdl-36651724

ABSTRACT

Recent developments in deep learning, coupled with an increasing number of sequenced proteins, have led to a breakthrough in life science applications, in particular in protein property prediction. There is hope that deep learning can close the gap between the number of sequenced proteins and the number of proteins with properties known from lab experiments. Language models from the field of natural language processing have gained popularity for protein property prediction and have led to a new computational revolution in biology, where old prediction results are being improved regularly. Such models can learn useful multipurpose representations of proteins from large open repositories of protein sequences and can be used, for instance, to predict protein properties. The field of natural language processing is growing quickly because of developments in a class of models based on a particular architecture: the Transformer. We review recent developments and the use of large-scale Transformer models in applications for predicting protein characteristics, and how such models can be used to predict, for example, post-translational modifications. We review the shortcomings of other deep learning models and explain how Transformer models have quickly proven to be a very promising way to unravel the information hidden in the sequences of amino acids.


Subjects
Biological Science Disciplines; Deep Learning; Amino Acid Sequence; Amino Acids; Language
5.
IEEE Trans Med Imaging; 41(6): 1320-1330, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34965206

ABSTRACT

In the last years, deep learning has dramatically improved performance in a variety of medical image analysis applications. Among the different types of deep learning models, convolutional neural networks have been among the most successful, and they have been used in many medical imaging applications. Training deep convolutional neural networks often requires large amounts of image data to generalize well to new, unseen images. It is often time-consuming and expensive to collect large amounts of data in the medical image domain due to expensive imaging systems and the need for experts to manually make ground-truth annotations. A potential problem arises if new structures are added when a decision support system is already deployed and in use. Since the field of radiation therapy is constantly developing, these new structures would also have to be covered by the decision support system. In the present work, we propose a novel loss function to solve multiple problems: imbalanced datasets, partially-labeled data, and incremental learning. The proposed loss function adapts to the available data in order to utilize all available data, even when some samples have missing annotations. We demonstrate that the proposed loss function also works well in an incremental learning setting, where an existing model is easily adapted to semi-automatically incorporate delineations of new organs when they appear. Experiments on a large in-house dataset show that the proposed method performs on par with baseline models, while greatly reducing the training time and eliminating the hassle of maintaining multiple models in practice.
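
A hedged sketch of one way such an adaptive loss can be realized, assuming a soft Dice loss averaged only over the organs actually annotated in each sample; the exact formulation in the paper may differ.

    # Hedged sketch: images with missing organ annotations still contribute, because
    # unannotated classes are masked out of the per-class Dice average.
    import torch

    def masked_soft_dice(pred_probs, target_onehot, label_present, eps=1e-6):
        """pred_probs, target_onehot: (N, C, ...); label_present: (N, C) 0/1 mask."""
        dims = tuple(range(2, pred_probs.dim()))
        inter = (pred_probs * target_onehot).sum(dims)
        denom = pred_probs.sum(dims) + target_onehot.sum(dims)
        dice = (2.0 * inter + eps) / (denom + eps)          # per-sample, per-class Dice
        loss = (1.0 - dice) * label_present                 # ignore missing annotations
        return loss.sum() / label_present.sum().clamp(min=1)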


Subjects
Deep Learning; Diagnostic Imaging; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Semantics
6.
Article in English | MEDLINE | ID: mdl-36998700

ABSTRACT

Deep learning (DL) models have provided state-of-the-art performance in various medical imaging benchmarking challenges, including the Brain Tumor Segmentation (BraTS) challenges. However, the task of focal pathology multi-compartment segmentation (e.g., tumor and lesion sub-regions) is particularly challenging, and potential errors hinder translating DL models into clinical workflows. Quantifying the reliability of DL model predictions in the form of uncertainties could enable clinical review of the most uncertain regions, thereby building trust and paving the way toward clinical translation. Several uncertainty estimation methods have recently been introduced for DL medical image segmentation tasks. Developing scores to evaluate and compare the performance of uncertainty measures will assist end-users in making more informed decisions. In this study, we explore and evaluate a score developed during the BraTS 2019 and BraTS 2020 task on uncertainty quantification (QU-BraTS), designed to assess and rank uncertainty estimates for brain tumor multi-compartment segmentation. This score (1) rewards uncertainty estimates that produce high confidence in correct assertions and that assign low confidence to incorrect assertions, and (2) penalizes uncertainty measures that lead to a higher percentage of under-confident correct assertions. We further benchmark the segmentation uncertainties generated by 14 independent participating teams of QU-BraTS 2020, all of which also participated in the main BraTS segmentation task. Overall, our findings confirm the importance and complementary value that uncertainty estimates provide to segmentation algorithms, highlighting the need for uncertainty quantification in medical image analyses. Finally, in favor of transparency and reproducibility, our evaluation code is made publicly available at https://github.com/RagMeh11/QU-BraTS.
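
A simplified sketch of the thresholded evaluation underlying such a score: segmentation quality is recomputed after filtering out the most uncertain voxels at each threshold. This is only an illustration; the full QU-BraTS score additionally penalizes filtering out correct predictions and aggregates the curves into areas under the curve (see the linked repository for the reference implementation).

    # Hedged sketch: Dice restricted to voxels whose (normalised) uncertainty is
    # at or below each threshold, producing a confidence-vs-quality curve.
    import numpy as np

    def filtered_dice_curve(pred, truth, uncertainty, thresholds):
        scores = []
        for tau in thresholds:
            keep = uncertainty <= tau
            p = pred[keep].astype(bool)
            t = truth[keep].astype(bool)
            inter = np.logical_and(p, t).sum()
            scores.append(2.0 * inter / max(p.sum() + t.sum(), 1))
        return np.array(scores)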

7.
Med Phys; 47(12): 6216-6231, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33169365

ABSTRACT

PURPOSE: When using convolutional neural networks (CNNs) for segmentation of organs and lesions in medical images, the conventional approach is to work with inputs and outputs either as single slices [two-dimensional (2D)] or as whole volumes [three-dimensional (3D)]. One common alternative, in this study denoted pseudo-3D, is to use a stack of adjacent slices as input and produce a prediction for at least the central slice. This approach gives the network the possibility to capture 3D spatial information, at only a minor additional computational cost. METHODS: In this study, we systematically evaluate the segmentation performance and computational costs of this pseudo-3D approach as a function of the number of input slices, and compare the results to conventional end-to-end 2D and 3D CNNs and to triplanar orthogonal 2D CNNs. The standard pseudo-3D method regards the neighboring slices as multiple input image channels. We additionally design and evaluate a novel, simple approach where the input stack is a volumetric input that is repeatedly convolved in 3D to obtain a 2D feature map. This 2D map is in turn fed into a standard 2D network. We conducted experiments using two different CNN backbone architectures and eight diverse data sets covering different anatomical regions, imaging modalities, and segmentation tasks. RESULTS: We found that while both pseudo-3D methods can process a large number of slices at once and still be computationally much more efficient than fully 3D CNNs, a significant improvement over a regular 2D CNN was only observed for two of the eight data sets. Triplanar networks had the poorest performance of all the evaluated models. An analysis of the structural properties of the segmentation masks revealed no relation between them and the segmentation performance with respect to the number of input slices. A post hoc rank sum test combining all metrics and data sets showed that only our newly proposed pseudo-3D method, with an input size of 13 slices, outperformed almost all other methods. CONCLUSION: In the general case, multislice inputs appear not to improve segmentation results over using 2D or 3D CNNs. For the particular case of 13 input slices, the proposed novel pseudo-3D method does appear to have a slight advantage across all data sets compared to all other methods evaluated in this work.
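
An illustrative PyTorch sketch of the second pseudo-3D variant described above, assuming an odd number of input slices and unpadded 3D convolutions along the slice axis; the channel count and the downstream 2D network are assumptions.

    # Hedged sketch: repeated 3D convolutions shrink the slice dimension by two per
    # layer until a single 2D feature map remains, which a standard 2D CNN can consume.
    import torch
    import torch.nn as nn

    class SliceCollapse(nn.Module):
        def __init__(self, n_slices, channels=16):
            super().__init__()
            layers, c_in = [], 1
            while n_slices > 1:
                layers += [nn.Conv3d(c_in, channels, kernel_size=3, padding=(0, 1, 1)),
                           nn.ReLU(inplace=True)]
                c_in, n_slices = channels, n_slices - 2   # each unpadded conv removes two slices
            self.body = nn.Sequential(*layers)

        def forward(self, x):                 # x: (N, 1, S, H, W), S odd
            return self.body(x).squeeze(2)    # -> (N, C, H, W) once S has collapsed to 1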


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Imaging, Three-Dimensional
8.
Phys Med Biol; 65(22): 225036, 2020 Nov 24.
Article in English | MEDLINE | ID: mdl-32947277

ABSTRACT

PURPOSE: To develop a method that can reduce and estimate uncertainty in quantitative MR parameter maps without the need for hand-tuning of any hyperparameters. METHODS: We present an estimation method where uncertainties are reduced by incorporating information on spatial correlations between neighbouring voxels. The method is based on a Bayesian hierarchical non-linear regression model, where the parameters of interest are sampled, using Markov chain Monte Carlo (MCMC), from a high-dimensional posterior distribution with a spatial prior. The degree to which the prior affects the model is determined by an automatic hyperparameter search using an information criterion, and the method is therefore free from manual user-dependent tuning. The samples obtained further provide a convenient means to obtain uncertainties in both voxels and regions. The developed method was evaluated on T1 estimation based on the variable flip angle method. RESULTS: The proposed method delivers noise-reduced T1 parameter maps with associated error estimates by combining MCMC sampling, the widely applicable information criterion, and total variation-based denoising. The proposed method results in an overall decrease in estimation error when compared to conventional voxel-wise maximum likelihood estimation. However, this comes with an increased bias in some regions, predominantly at tissue interfaces, as well as an increase in computational time. CONCLUSIONS: This study provides a method that generates more precise estimates compared to the conventional method, without incorporating user subjectivity, and with the added benefit of uncertainty estimation.
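
A conceptual sketch of the kind of log-posterior such a sampler explores, assuming a Gaussian likelihood under the variable flip angle signal model and an anisotropic total-variation spatial prior on the T1 map; the noise level sigma and prior weight lam are fixed here, whereas the paper selects the prior weight automatically with an information criterion.

    # Hedged sketch: an MCMC sampler (not shown) would propose T1/M0 maps and accept
    # or reject them according to this unnormalised log-posterior.
    import numpy as np

    def vfa_signal_np(m0, t1, flip_angles_rad, tr):
        e1 = np.exp(-tr / t1)
        return np.stack([m0 * np.sin(a) * (1.0 - e1) / (1.0 - e1 * np.cos(a))
                         for a in flip_angles_rad])

    def log_posterior(t1, m0, data, flip_angles_rad, tr, sigma, lam):
        residual = vfa_signal_np(m0, t1, flip_angles_rad, tr) - data
        log_lik = -0.5 * np.sum(residual ** 2) / sigma ** 2
        tv = np.abs(np.diff(t1, axis=0)).sum() + np.abs(np.diff(t1, axis=1)).sum()
        return log_lik - lam * tv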


Subjects
Image Enhancement/methods; Magnetic Resonance Imaging; Nonlinear Dynamics; Signal-To-Noise Ratio; Algorithms; Bayes Theorem; Markov Chains; Monte Carlo Method; Uncertainty
9.
Z Med Phys; 30(4): 305-314, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32564924

ABSTRACT

INTRODUCTION: This paper explores the potential of the StyleGAN model as a high-resolution image generator for synthetic medical images. The possibility to generate sample patient images of different modalities can be helpful for training deep learning algorithms, e.g., as a data augmentation technique. METHODS: The StyleGAN model was trained on computed tomography (CT) and T2-weighted magnetic resonance (MR) images from 100 patients with pelvic malignancies. The resulting model was investigated with regard to three features: Image Modality, Sex, and Longitudinal Slice Position. Further, the style transfer feature of the StyleGAN was used to move images between the modalities. The root-mean-square error (RMSE) and the mean absolute error (MAE) were used to quantify errors for MR and CT, respectively. RESULTS: We demonstrate how these features can be transformed by manipulating the latent style vectors, and attempt to quantify how the errors change as we move through the latent style space. The best results were achieved by using the style transfer feature of the StyleGAN (58.7 HU MAE for MR to CT and 0.339 RMSE for CT to MR). Slices below and above an initial central slice can be predicted with an error below 75 HU MAE and 0.3 RMSE within 4 cm, for CT and MR, respectively. DISCUSSION: The StyleGAN is a promising model for generating synthetic medical images for both MR and CT modalities, as well as for 3D volumes.


Subjects
Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging; Tomography, X-Ray Computed; Algorithms; Humans; Signal-To-Noise Ratio
10.
Phys Med Biol; 65(10): 105004, 2020 May 22.
Article in English | MEDLINE | ID: mdl-32235074

ABSTRACT

Recent developments in magnetic resonance (MR) to synthetic computed tomography (sCT) conversion have shown that treatment planning is possible without an initial planning CT. Promising conversion results have been demonstrated recently using conditional generative adversarial networks (cGANs). However, the performance is generally only tested on images from one MR scanner, which neglects the potential of neural networks to find general high-level abstract features. In this study, we explored the generalizability of generator models, trained on a single field strength scanner, to data acquired with higher field strengths. T2-weighted 0.35 T MRIs and CTs from 51 patients treated for prostate (40) and cervical cancer (11) were included. Twenty-five of them were used to train four different generators (SE-ResNet, DenseNet, U-Net, and Embedded Net). Further, an ensemble model was created from the four network outputs. The models were validated on 16 patients from a 0.35 T MR scanner. Further, the trained models were tested on the Gold Atlas dataset, containing T2-weighted MR scans of different field strengths, 1.5 T (7) and 3 T (12), and on 10 patients from the 0.35 T scanner. The sCTs were dosimetrically compared using clinical VMAT plans for all test patients. For the same scanner (0.35 T), the results from the different models were comparable on the test set, with only minor differences in the mean absolute error (MAE) (35-51 HU in the body). Similar results were obtained for conversions of 3 T GE Signa and 3 T GE Discovery images (40-62 HU MAE) for three of the models. However, larger differences were observed for the 1.5 T images (48-65 HU MAE). The overall best model was found to be the ensemble model. All dose differences were below 1%. This study shows that it is possible to generalize models trained on images of one scanner to other scanners and different field strengths. The best metric results were achieved by the combination of all networks.
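
A minimal sketch of the ensemble evaluation described above; the trained generator objects, their predict interfaces, and the body mask are assumptions used only for illustration.

    # Hedged sketch: the ensemble sCT is the voxel-wise mean of the individual
    # generators' outputs, compared to the reference CT with MAE in Hounsfield units.
    import numpy as np

    def ensemble_sct(generators, mr_volume):
        return np.mean([g.predict(mr_volume) for g in generators], axis=0)

    def mae_hu(sct, ct, body_mask):
        return float(np.mean(np.abs(sct[body_mask] - ct[body_mask])))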


Subjects
Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/instrumentation; Tomography, X-Ray Computed; Humans; Male; Neural Networks, Computer; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/radiotherapy; Radiometry; Radiotherapy Planning, Computer-Assisted; Radiotherapy, Intensity-Modulated
11.
IEEE Trans Med Imaging; 39(9): 2856-2868, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32149682

ABSTRACT

Deep learning methods have proven extremely effective at performing a variety of medical image analysis tasks. Given their potential use in clinical routine, however, their lack of transparency has been one of their few weak points, raising concerns regarding their behavior and failure modes. While most research to infer model behavior has focused on indirect strategies that estimate prediction uncertainties and visualize model support in the input image space, the ability to explicitly query a prediction model regarding its image content offers a more direct way to determine the behavior of trained models. To this end, we present a novel Visual Question Answering approach that allows an image to be queried by means of a written question. Experiments on a variety of medical and natural image datasets show that, by fusing image and question features in a novel way, the proposed approach achieves an equal or higher accuracy compared to current methods.


Subjects
Diagnostic Imaging; Radiography
12.
PLoS One; 14(3): e0211463, 2019.
Article in English | MEDLINE | ID: mdl-30865639

ABSTRACT

We propose a new sparsification method for the singular value decomposition, called the constrained singular value decomposition (CSVD), that can incorporate multiple constraints, such as sparsification and orthogonality, for the left and right singular vectors. The CSVD can combine different constraints because it implements each constraint as a projection onto a convex set, and because it integrates these constraints as projections onto the intersection of multiple convex sets. We show that, with appropriate sparsification constants, the algorithm is guaranteed to converge to a stable point. We also propose and analyze the convergence of an efficient algorithm for the specific case of the projection onto the balls defined by the L1 and L2 norms. We illustrate the CSVD and compare it to the standard singular value decomposition and to a non-orthogonal related sparsification method with: 1) a simulated example, 2) a small set of face images (corresponding to a configuration with a number of variables much larger than the number of observations), and 3) a psychometric application with a large number of observations and a small number of variables. The companion R package, csvd, which implements the algorithms described in this paper, is available for download, together with reproducible examples, from https://github.com/vguillemot/csvd.
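
An illustrative sketch of the two ball projections mentioned above (the sorting-based L1-ball projection is the standard algorithm); this is not the csvd package itself, and the radii are placeholders.

    # Hedged sketch: Euclidean projections onto the L2 ball and the L1 ball, the
    # building blocks that a projection-onto-convex-sets scheme like the CSVD combines.
    import numpy as np

    def project_l2_ball(v, radius=1.0):
        n = np.linalg.norm(v)
        return v if n <= radius else v * (radius / n)

    def project_l1_ball(v, radius=1.0):
        if np.abs(v).sum() <= radius:
            return v
        u = np.sort(np.abs(v))[::-1]                     # sorted magnitudes, descending
        css = np.cumsum(u)
        rho = np.nonzero(u * np.arange(1, v.size + 1) > (css - radius))[0][-1]
        theta = (css[rho] - radius) / (rho + 1.0)
        return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)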


Subjects
Algorithms; Data Interpretation, Statistical; Computer Simulation; Databases, Factual/statistics & numerical data; Face/anatomy & histology; Female; Humans; Imagination; Male; Models, Statistical; Multivariate Analysis; Pattern Recognition, Automated/statistics & numerical data; Principal Component Analysis; Psychometrics/statistics & numerical data
13.
PLoS One; 14(2): e0212110, 2019.
Article in English | MEDLINE | ID: mdl-30794577

ABSTRACT

Haralick texture features are common texture descriptors in image analysis. To compute the Haralick features, the image gray levels are reduced, a process called quantization. The resulting features depend heavily on the quantization step, so Haralick features are not reproducible unless the same quantization is performed. The aim of this work was to develop Haralick features that are invariant to the number of quantization gray levels. By redefining the gray-level co-occurrence matrix (GLCM) as a discretized probability density function, the features become asymptotically invariant to the quantization. The invariant and original features were compared using logistic regression classification to separate two classes based on the texture features. Classifiers trained on the invariant features showed higher accuracies and had similar performance when training and test images had very different quantizations. In conclusion, using the invariant Haralick features, an image pattern will give the same texture feature values independent of the image quantization.
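
A hedged sketch of the core idea, assuming images scaled to [0, 1], a single (0, 1) pixel offset, and contrast as the example feature; the paper's exact redefinition may differ in its details.

    # Hedged sketch: the GLCM is normalised to a probability mass function and the
    # gray levels are mapped to [0, 1], so contrast approaches a quantization-free limit.
    import numpy as np

    def glcm_pmf(image01, n_levels):
        q = np.clip((image01 * n_levels).astype(int), 0, n_levels - 1)
        p = np.zeros((n_levels, n_levels))
        np.add.at(p, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1.0)
        p = p + p.T                                  # symmetric co-occurrences
        return p / p.sum()

    def invariant_contrast(p):
        n = p.shape[0]
        g = np.arange(n) / (n - 1)                   # gray levels rescaled to [0, 1]
        i, j = np.meshgrid(g, g, indexing="ij")
        return float(np.sum(p * (i - j) ** 2))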


Subjects
Image Processing, Computer-Assisted; Algorithms; Color; Density Functional Theory; Pattern Recognition, Automated
14.
Phys Med Biol; 63(19): 195017, 2018 Oct 02.
Article in English | MEDLINE | ID: mdl-30088815

ABSTRACT

The Haralick texture features are common in the image analysis literature, partly because of their simplicity and because their values can be interpreted. It was recently observed that the Haralick texture features are very sensitive to the size of the GLCM that was used to compute them, which led to a new formulation that is invariant to the GLCM size. However, these new features still depend on the sample size used to compute the GLCM, i.e., the size of the input image region of interest (ROI). The purpose of this work was to investigate the performance of density estimation methods for approximating the GLCM and, subsequently, the corresponding invariant features. Three density estimation methods were evaluated, namely a piece-wise constant distribution, the Parzen-windows method, and the Gaussian mixture model. The methods were evaluated on 29 different image textures and 20 invariant Haralick texture features, as well as a wide range of ROI sizes. The results indicate that there are two types of features: those that have a clear minimum error for a particular GLCM size for each ROI size, and those whose error decreases monotonically with increased GLCM size. For the first type of features, the Gaussian mixture model gave the smallest errors, in particular for small ROI sizes (less than about [Formula: see text]). In conclusion, the Gaussian mixture model is the preferred method for the first type of features (in particular for small ROIs). For the second type of features, simply using a large GLCM size is preferred.
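
A hedged sketch of the Gaussian mixture variant, assuming gray levels scaled to [0, 1], a single (0, 1) offset, and arbitrary choices for the number of components and the evaluation grid; not the evaluated implementation.

    # Hedged sketch: fit a GMM to the co-occurring gray-level pairs of an ROI and
    # evaluate its density on a regular grid to obtain a smooth, normalised GLCM.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def gmm_glcm(roi01, n_levels=64, n_components=8):
        pairs = np.stack([roi01[:, :-1].ravel(), roi01[:, 1:].ravel()], axis=1)
        gmm = GaussianMixture(n_components=n_components).fit(pairs)
        centers = (np.arange(n_levels) + 0.5) / n_levels
        ii, jj = np.meshgrid(centers, centers, indexing="ij")
        grid = np.stack([ii.ravel(), jj.ravel()], axis=1)
        density = np.exp(gmm.score_samples(grid)).reshape(n_levels, n_levels)
        return density / density.sum()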


Subjects
Diagnostic Imaging/methods; Image Processing, Computer-Assisted/methods; Algorithms; Diagnostic Imaging/standards; Humans; Image Processing, Computer-Assisted/standards
15.
IEEE Trans Med Imaging; 37(11): 2403-2413, 2018 Nov.
Article in English | MEDLINE | ID: mdl-29993684

ABSTRACT

Predictive models can be used on high-dimensional brain images to decode cognitive states or the diagnosis/prognosis of a clinical condition/evolution. Spatial regularization through structured sparsity offers new perspectives in this context and reduces the risk of overfitting the model while providing interpretable neuroimaging signatures by forcing the solution to adhere to domain-specific constraints. Total variation (TV) is a promising candidate for structured penalization: it enforces spatial smoothness of the solution while segmenting predictive regions from the background. We consider the problem of minimizing the sum of a smooth convex loss, a non-smooth convex penalty (whose proximal operator is known), and a wide range of possible complex, non-smooth convex structured penalties such as TV or overlapping group Lasso. Existing solvers are either limited in the functions they can minimize or in their practical capacity to scale to high-dimensional imaging data. Nesterov's smoothing technique can be used to minimize a large number of non-smooth convex structured penalties. However, reasonable precision requires a small smoothing parameter, which slows down the convergence speed to unacceptable levels. To benefit from the versatility of Nesterov's smoothing technique, we propose a first-order continuation algorithm, CONESTA, which automatically generates a sequence of decreasing smoothing parameters. The generated sequence maintains the optimal convergence speed toward any globally desired precision. Our main contributions are to propose an expression of the duality gap to probe the current distance to the global optimum in order to adapt the smoothing parameter and the convergence speed; this expression is applicable to many penalties and can be used with other solvers than CONESTA. We also propose an expression for the particular smoothing parameter that minimizes the number of iterations required to reach a given precision. Furthermore, we provide a convergence proof and its rate, which is an improvement over classical proximal gradient smoothing methods. We demonstrate on both simulated and high-dimensional structural neuroimaging data that CONESTA significantly outperforms many state-of-the-art solvers in regard to convergence speed and precision.
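
A highly simplified sketch of the continuation idea described above; the inner smoothed solver and the duality-gap function are passed in as callables and are assumptions (the actual CONESTA algorithm derives the next smoothing parameter and tolerance from the duality gap rather than using a fixed shrink factor).

    # Hedged sketch: solve a sequence of Nesterov-smoothed problems with decreasing
    # smoothing parameters, stopping once the duality gap certifies the target precision.
    def continuation_solver(beta0, mu0, target_gap, solve_smoothed, duality_gap, shrink=0.5):
        beta, mu = beta0, mu0
        while True:
            beta = solve_smoothed(beta, mu)      # e.g., FISTA on the mu-smoothed problem
            gap = duality_gap(beta)              # certified distance to the global optimum
            if gap <= target_gap:
                return beta
            mu *= shrink                         # continue with a smaller smoothing parameter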


Subjects
Algorithms; Image Processing, Computer-Assisted/methods; Neuroimaging/methods; Brain/diagnostic imaging; Case-Control Studies; Cognitive Dysfunction/diagnostic imaging; Humans; Machine Learning; Regression Analysis
16.
Hum Brain Mapp; 39(4): 1777-1788, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29341341

ABSTRACT

Despite significant progress in the field, the detection of fMRI signal changes during hallucinatory events remains difficult and time-consuming. This article first proposes a machine-learning algorithm to automatically identify resting-state fMRI periods that precede hallucinations versus periods that do not. When applied to whole-brain fMRI data, state-of-the-art classification methods, such as support vector machines (SVM), yield dense solutions that are difficult to interpret. We propose to extend existing sparse classification methods by taking the spatial structure of brain images into account through structured sparsity, using the total variation penalty. Based on this approach, we obtained reliable classification performance associated with interpretable predictive patterns, composed of two clearly identifiable clusters in speech-related brain regions. The variation in transition-to-hallucination functional patterns, not only from one patient to another but also from one occurrence to the next (e.g., depending on the sensory modalities involved), appeared to be the major difficulty when developing effective classifiers. Consequently, the second aim of this article was to characterize the variability within the pre-hallucination patterns using an extension of principal component analysis with spatial constraints. The principal components (PCs) and the associated basis patterns shed light on the intrinsic structures of the variability present in the dataset. Such results are promising in the scope of innovative fMRI-guided therapy for drug-resistant hallucinations, such as fMRI-based neurofeedback.


Subjects
Brain Mapping/methods; Brain/diagnostic imaging; Hallucinations/diagnostic imaging; Machine Learning; Magnetic Resonance Imaging/methods; Schizophrenia/diagnostic imaging; Adult; Auditory Perception/physiology; Brain/physiopathology; Female; Hallucinations/physiopathology; Humans; Male; Neural Pathways/diagnostic imaging; Neural Pathways/physiopathology; Neurofeedback; Pattern Recognition, Automated/methods; Principal Component Analysis; Schizophrenia/physiopathology
17.
Magn Reson Med; 79(1): 561-567, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28349618

ABSTRACT

PURPOSE: The linear least squares (LLS) estimator provides a fast approach to parameter estimation in the linearized two-compartment exchange model. However, the LLS method may introduce a bias through correlated noise in the system matrix of the model. The purpose of this work is to present a new estimator for the linearized two-compartment exchange model that takes this noise into account. METHODS: To account for the noise in the system matrix, we developed an estimator based on the weighted total least squares (WTLS) method. Using simulations, the proposed WTLS estimator was compared, in terms of accuracy and precision, to an LLS estimator and a nonlinear least squares (NLLS) estimator. RESULTS: The WTLS method improved the accuracy compared to the LLS method, to levels comparable to the NLLS method. This improvement came at the expense of increased computational time; however, the WTLS method was still faster than the NLLS method. At high signal-to-noise ratio, all methods provided similar precision, while inconclusive results were observed at low signal-to-noise ratio. CONCLUSION: The proposed method provides improvements in accuracy compared to the LLS method, albeit at an increased computational cost.


Subjects
Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Algorithms; Brain/diagnostic imaging; Brain Mapping; Calibration; Computer Simulation; Contrast Media/chemistry; Diffusion Magnetic Resonance Imaging; Humans; Least-Squares Analysis; Normal Distribution; Signal-To-Noise Ratio
18.
IEEE Trans Med Imaging; 37(2): 396-407, 2018 Feb.
Article in English | MEDLINE | ID: mdl-28880163

ABSTRACT

Principal component analysis (PCA) is an exploratory tool widely used in data analysis to uncover the dominant patterns of variability within a population. Despite its ability to represent a data set in a low-dimensional space, PCA's interpretability remains limited. Indeed, the components produced by PCA are often noisy or exhibit no visually meaningful patterns. Furthermore, the fact that the components are usually non-sparse may also impede interpretation, unless arbitrary thresholding is applied. However, in neuroimaging, it is essential to uncover clinically interpretable phenotypic markers that would account for the main variability in the brain images of a population. Recently, some alternatives to the standard PCA approach, such as sparse PCA (SPCA), have been proposed, their aim being to limit the density of the components. Nonetheless, sparsity alone does not entirely solve the interpretability problem in neuroimaging, since it may yield scattered and unstable components. We hypothesized that the incorporation of prior information regarding the structure of the data may lead to improved relevance and interpretability of brain patterns. We therefore present a simple extension of the popular PCA framework that adds structured sparsity penalties on the loading vectors in order to identify the few stable regions in the brain images that capture most of the variability. Such structured sparsity can be obtained by combining, for example, ℓ1 and total variation (TV) penalties, where the TV regularization encodes information on the underlying structure of the data. This paper presents the structured SPCA (denoted SPCA-TV) optimization framework and its resolution. We demonstrate SPCA-TV's effectiveness and versatility on three different data sets. It can be applied to any kind of structured data, such as multidimensional array images or meshes of cortical surfaces. The gains of SPCA-TV over unstructured approaches (such as SPCA and ElasticNet PCA) or a structured approach (such as GraphNet PCA) are significant, since SPCA-TV reveals the variability within a data set in the form of intelligible brain patterns that are easier to interpret and more stable across different samples.


Subjects
Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Principal Component Analysis/methods; Algorithms; Brain/diagnostic imaging; Humans; Neuroimaging; Unsupervised Machine Learning
19.
BMC Genomics; 14: 893, 2013 Dec 17.
Article in English | MEDLINE | ID: mdl-24341908

ABSTRACT

BACKGROUND: Reactive oxygen species (ROS) are involved in the regulation of diverse physiological processes in plants, including various biotic and abiotic stress responses. Thus, oxidative stress tolerance mechanisms in plants are complex, and diverse responses at multiple levels need to be characterized in order to understand them. Here we present system responses to oxidative stress in Populus by integrating data from analyses of the cambial region of wild-type controls and of plants expressing high-isoelectric-point superoxide dismutase (hipI-SOD) transcripts in antisense orientation, which show a higher production of superoxide. The cambium, a thin cell layer, generates cells that differentiate to form either phloem or xylem, and is hypothesized to be a major reason for the phenotypic perturbations in the transgenic plants. Data from multiple platforms, including transcriptomics (microarray analysis), proteomics (UPLC/QTOF-MS), and metabolomics (GC-TOF/MS, UPLC/MS, and UHPLC-LTQ/MS), were integrated using the most recent development of orthogonal projections to latent structures, called OnPLS. OnPLS is a symmetrical multi-block method that does not depend on the order of analysis when more than two blocks are analysed. Significantly affected genes, proteins, and metabolites were then visualized in painted pathway diagrams. RESULTS: The main categories that appear to be significantly influenced in the transgenic plants were pathways related to redox regulation, carbon metabolism, and protein degradation, e.g., the glycolysis and pentose phosphate pathways (PPP). The results provide system-level information on ROS metabolism and responses to oxidative stress, and indicate that some initial responses to oxidative stress may share common pathways. CONCLUSION: The proposed data evaluation strategy shows an efficient way of compiling complex, multi-platform datasets to obtain significant biological information.


Subjects
Cambium/metabolism; Oxidative Stress; Populus/genetics; Gene Expression Regulation, Plant; Metabolic Networks and Pathways; Metabolome; Multivariate Analysis; Plants, Genetically Modified/genetics; Plants, Genetically Modified/metabolism; Populus/metabolism; Proteome; Reactive Oxygen Species/metabolism; Superoxide Dismutase/genetics; Superoxide Dismutase/metabolism; Systems Biology; Transcriptome
20.
Anal Chim Acta; 791: 13-24, 2013 Aug 12.
Article in English | MEDLINE | ID: mdl-23890602

ABSTRACT

OnPLS is an extension of O2PLS that decomposes a set of matrices, in either multiblock or path model analysis, such that each matrix consists of two parts: a globally joint part containing variation shared with all other connected matrices, and a part that contains locally joint and unique variation, i.e. variation that is shared with some, but not all, other connected matrices or that is unique in a single matrix. A further extension of OnPLS suggested here decomposes the part that is not globally joint into locally joint and unique parts. To achieve this it uses the OnPLS method to first find and extract a globally joint model, and then applies OnPLS recursively to subsets of matrices that contain the locally joint and unique variation remaining after the globally joint variation has been extracted. This results in a set of locally joint models. The variation that is left after the globally joint and locally joint variation has been extracted is (by construction) not related to the other matrices and thus represents the strictly unique variation in each matrix. The method's utility is demonstrated by its application to both a simulated data set and a real data set acquired from metabolomic, proteomic and transcriptomic profiling of three genotypes of hybrid aspen. The results show that OnPLS can successfully decompose each matrix into global, local and unique models, resulting in lower numbers of globally joint components and higher intercorrelations of scores. OnPLS also increases the interpretability of models of connected matrices, because of the locally joint and unique models it generates.


Subjects
Models, Theoretical; Metabolome; Proteomics; Transcriptome