Results 1 - 20 of 21
1.
Front Hum Neurosci ; 17: 1134012, 2023.
Article in English | MEDLINE | ID: mdl-37497043

ABSTRACT

Whole-brain functional connectivity (FC) measured with functional MRI (fMRI) evolves over time in meaningful ways at temporal scales ranging from years (e.g., development) to seconds [e.g., within-scan time-varying FC (tvFC)]. Yet, our ability to explore tvFC is severely constrained by its large dimensionality (several thousand dimensions). To overcome this difficulty, researchers often seek to generate low dimensional representations (e.g., 2D and 3D scatter plots) hoping those will retain important aspects of the data (e.g., relationships to behavior and disease progression). Limited prior empirical work suggests that manifold learning techniques (MLTs), namely those seeking to infer a low dimensional non-linear surface (i.e., the manifold) where most of the data lies, are good candidates for accomplishing this task. Here we explore this possibility in detail. First, we discuss why one should expect tvFC data to lie on a low dimensional manifold. Second, we estimate the intrinsic dimension (ID; i.e., the minimum number of latent dimensions) of tvFC data manifolds. Third, we describe the inner workings of three state-of-the-art MLTs: Laplacian Eigenmaps (LE), T-distributed Stochastic Neighbor Embedding (T-SNE), and Uniform Manifold Approximation and Projection (UMAP). For each method, we empirically evaluate its ability to generate neuro-biologically meaningful representations of tvFC data, as well as its robustness against hyper-parameter selection. Our results show that tvFC data has an ID that ranges between 4 and 26, and that ID varies significantly between rest and task states. We also show how all three methods can effectively capture subject identity and task being performed: UMAP and T-SNE can capture these two levels of detail concurrently, whereas LE captures only one at a time. We observed substantial variability in embedding quality across MLTs, and within-MLT as a function of hyper-parameter selection.
To help alleviate this issue, we provide heuristics that can inform future studies. Finally, we also demonstrate the importance of feature normalization when combining data across subjects and the role that temporal autocorrelation plays in the application of MLTs to tvFC data. Overall, we conclude that while MLTs can be useful to generate summary views of labeled tvFC data, their application to unlabeled data such as resting-state remains challenging.
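The kind of pipeline evaluated above can be sketched with off-the-shelf tools. The snippet below builds sliding-window tvFC snapshots from synthetic time series and embeds them with two of the three MLTs (LE via scikit-learn's SpectralEmbedding, plus t-SNE); every size, window length, and hyper-parameter here is an illustrative assumption, not a value from the study:

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding, TSNE

rng = np.random.default_rng(0)

# Synthetic multi-region time series: two "task" blocks with different
# covariance structure (all sizes and names here are illustrative).
n_regions, n_tp = 10, 400
state = np.where(np.arange(n_tp) < n_tp // 2, 0, 1)
mix = [rng.standard_normal((n_regions, n_regions)) for _ in range(2)]
ts = np.vstack([mix[s] @ rng.standard_normal(n_regions) for s in state])

# Sliding-window tvFC: each snapshot is the vectorized upper triangle
# of a windowed correlation matrix (45 dimensions for 10 regions).
win = 30
iu = np.triu_indices(n_regions, k=1)
tvfc = np.array([np.corrcoef(ts[t:t + win].T)[iu]
                 for t in range(n_tp - win)])

# Two of the MLTs discussed above: LE (SpectralEmbedding) and t-SNE.
le = SpectralEmbedding(n_components=2, random_state=0).fit_transform(tvfc)
tsne = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(tvfc)
print(le.shape, tsne.shape)  # every tvFC snapshot mapped to 2-D
```

As the abstract stresses, the hyper-parameters (window length, perplexity, number of neighbors) materially change such embeddings and should be swept rather than fixed.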

2.
Physiol Meas ; 44(7)2023 07 24.
Article in English | MEDLINE | ID: mdl-37336241

ABSTRACT

Background. The analysis of multi-lead electrocardiographic (ECG) signals requires integrating the information derived from each lead to reach clinically relevant conclusions. This analysis could benefit from data-driven methods compacting the information in those leads into lower-dimensional representations (i.e. 2 or 3 dimensions instead of 12). Objective. We propose Laplacian Eigenmaps (LE) to create a unified framework where ECGs from different subjects can be compared and their abnormalities are enhanced. Approach. We conceive a normal reference ECG space based on LE, calculated using signals of healthy subjects in sinus rhythm. Signals from new subjects can be mapped onto this reference space, creating a loop per heartbeat that captures ECG abnormalities. A set of parameters, based on distance metrics and on the shape of the loops, is proposed to quantify the differences between subjects. Main results. This methodology was applied to find structural and arrhythmogenic changes in the ECG. The LE framework consistently captured the characteristics of healthy ECGs, confirming that normal signals behaved similarly in the LE space. Significant differences between normal signals and those from patients with ischemic heart disease or dilated cardiomyopathy were detected. In contrast, LE biomarkers did not identify differences between patients with cardiomyopathy and a history of ventricular arrhythmia and their matched controls. Significance. This LE unified framework offers a new representation of multi-lead signals, reducing dimensionality while enhancing imperceptible abnormalities and enabling the comparison of signals of different subjects.


Subject(s)
Electrocardiography , Myocardial Ischemia , Humans , Electrocardiography/methods , Arrhythmias, Cardiac , Heart Rate
3.
Spectrochim Acta A Mol Biomol Spectrosc ; 289: 122247, 2023 Mar 15.
Article in English | MEDLINE | ID: mdl-36549073

ABSTRACT

The dimensionality of near-infrared (NIR) spectral data is often extremely large. Dimensionality reduction of spectral data can effectively reduce redundant information and correlation between spectral variables and simplify the model, which is crucial to increasing the model's performance. As a nonlinear feature extraction method, Laplacian Eigenmaps (LE) preserves the local neighborhood information of the dataset, is highly robust, and is simple to compute. However, when the LE algorithm maps the data from high-dimensional space to low-dimensional space, it is often disturbed by irrelevant information and multicollinearity in the spectral data, which lowers the model's prediction performance. The Random Frog (RF) algorithm can eliminate noise and collinearity in the spectrum. Therefore, before using the LE algorithm, we use the RF algorithm to eliminate irrelevant information in the spectrum and reduce the correlation between the spectral variables to increase the efficiency of the LE algorithm. We used the RF + LE algorithm to reduce the dimensionality of two public NIR datasets (a soil dataset and a pharmaceutical tablet dataset) and compared it with the RF and LE algorithms alone. We utilized Partial Least Squares Regression (PLSR) and Support Vector Regression (SVR) to establish regression models. The experimental findings demonstrate that, compared with the RF algorithm and the LE algorithm alone, the combined RF + LE method can reduce the dimension of the spectral variables and the model complexity, and improve the regression models' prediction accuracy and stability. It is an effective dimensionality reduction method for the near-infrared spectrum.
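A rough sketch of the select-then-embed-then-regress pipeline, with a generic univariate filter standing in for Random Frog (which has no common off-the-shelf implementation) and synthetic collinear "spectra"; all names, sizes, and parameters below are assumptions:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.manifold import SpectralEmbedding
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic "spectra": 120 samples x 200 highly collinear variables,
# only a band of which carries signal (a stand-in for real NIR data).
X_raw = rng.standard_normal((120, 200))
X = np.cumsum(X_raw, axis=1) * 0.1            # induces strong collinearity
y = X[:, 50:60].mean(axis=1) + 0.05 * rng.standard_normal(120)

# Step 1 (stand-in for Random Frog): keep the most relevant variables.
X_sel = SelectKBest(f_regression, k=40).fit_transform(X, y)

# Step 2: Laplacian-eigenmap-style nonlinear reduction to a few scores.
X_le = SpectralEmbedding(n_components=5, n_neighbors=15,
                         random_state=0).fit_transform(X_sel)

# Step 3: regression (SVR) on the reduced representation.
score = cross_val_score(SVR(C=10.0), X_le, y, cv=5).mean()
print(round(float(score), 3))
```

Note that SpectralEmbedding has no out-of-sample transform, so this sketch embeds all samples at once; the paper's exact calibration/validation protocol may differ.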


Subject(s)
Algorithms , Spectroscopy, Near-Infrared , Spectroscopy, Near-Infrared/methods , Least-Squares Analysis , Soil
4.
Spectrochim Acta A Mol Biomol Spectrosc ; 282: 121630, 2022 Dec 05.
Article in English | MEDLINE | ID: mdl-35944402

ABSTRACT

Laplacian Eigenmaps (LE) is a nonlinear dimensionality reduction algorithm based on graph theory. The algorithm adopts the Gaussian function to measure the affinity between a pair of points in the adjacency graph. However, the scaling parameter σ in the Gaussian function is a hyper-parameter tuned empirically. Once the value of σ is determined and fixed, the weight between two points depends wholly on the Euclidean distance between them, which is not suitable for multi-scale sample sets. To optimize the weight between two points in the adjacency graph and make the weight reflect the scale information of different sample sets, an improved adaptive LE (ALE) algorithm is used in this paper. Considering the influence of adjacent sample points and multi-scale data, the Euclidean distance from sample point xi to its k-th nearest sample point is used as the local scaling parameter σi of xi, instead of a single global scaling parameter σ. The efficiency of the algorithm is verified by applying it to two public near-infrared data sets. LE-SVR and ALE-SVR models are established after LE and ALE dimension reduction of SNV-preprocessed data sets. Compared with the LE-SVR models, the R2 and RPD of the ALE-SVR models established on the two data sets are improved, while the RMSE is decreased, indicating that the prediction accuracy and stability of the regression models established with the ALE algorithm are better than those obtained with the traditional LE algorithm. Experiments show that the ALE algorithm can achieve a better dimensionality reduction effect than the LE algorithm.
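The adaptive weighting idea is compact enough to state in code. A minimal NumPy sketch on a two-scale toy dataset, using the self-tuning affinity exp(-d_ij^2 / (sigma_i * sigma_j)) implied above (cluster positions, k, and sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two clusters at very different spatial scales (a multi-scale sample set).
X = np.vstack([rng.normal(0, 0.1, (50, 3)),
               rng.normal(5, 2.0, (50, 3))])

D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # pairwise distances

# Local scale: sigma_i = distance from x_i to its k-th nearest neighbor.
k = 7
sigma = np.sort(D, axis=1)[:, k]        # column 0 is the point itself

# Adaptive affinity W_ij = exp(-d_ij^2 / (sigma_i * sigma_j)); with one
# global sigma, one of the two scales would be over- or under-connected.
W = np.exp(-D ** 2 / np.outer(sigma, sigma))
np.fill_diagonal(W, 0.0)

# Eigenmaps from the symmetric normalized Laplacian of the adaptive graph.
d_is = 1.0 / np.sqrt(W.sum(axis=1))
L_sym = np.eye(len(X)) - d_is[:, None] * W * d_is[None, :]
vals, vecs = np.linalg.eigh(L_sym)
embedding = vecs[:, 1:3]                # skip the trivial first eigenvector
print(embedding.shape)
```

The same construction with a single global σ is recovered by setting every sigma entry to one constant, which is the case the abstract argues against for multi-scale data.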


Subject(s)
Algorithms
5.
Sensors (Basel) ; 22(9)2022 Apr 20.
Article in English | MEDLINE | ID: mdl-35590818

ABSTRACT

Laser-induced breakdown spectroscopy (LIBS) spectra often include many intensity lines, and obtaining meaningful information from the input dataset and condensing the dimensions of the original data has become a significant challenge in LIBS applications. This study was conducted to classify five different types of aluminum alloys rapidly and noninvasively, utilizing the manifold dimensionality reduction technique and a support vector machine (SVM) classifier model integrated with LIBS technology. The augmented partial residual plot was used to determine the nonlinearity of the LIBS spectra dataset. To circumvent the curse of dimensionality, nonlinear manifold learning techniques, such as local tangent space alignment (LTSA), local linear embedding (LLE), isometric mapping (Isomap), and Laplacian eigenmaps (LE) were used. The performance of linear techniques, such as principal component analysis (PCA) and multidimensional scaling (MDS), was also investigated compared to nonlinear techniques. The reduced dimensions of the dataset were assigned as input datasets in the SVM classifier. The prediction labels indicated that the Isomap-SVM model had the best classification performance with the classification accuracy, the number of dimensions and the number of nearest neighbors being 96.67%, 11, and 18, respectively. These findings demonstrate that the combination of nonlinear manifold learning and multivariate analysis has the potential to classify the samples based on LIBS with reasonable accuracy.
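A condensed version of the reduce-then-classify workflow, using scikit-learn's Isomap and the small wine dataset as a stand-in for LIBS spectra (the paper's tuned values of 11 components and 18 neighbors apply to its own data, so loose defaults are used here):

```python
import numpy as np
from sklearn.datasets import load_wine        # stand-in for LIBS spectra
from sklearn.manifold import Isomap
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Wine data (13 "channels", 3 classes) stands in for LIBS intensity lines.
X, y = load_wine(return_X_y=True)
X = StandardScaler().fit_transform(X)

# Nonlinear manifold reduction with Isomap (transductive in this sketch:
# the embedding is fit on all samples before the train/test split).
X_iso = Isomap(n_neighbors=10, n_components=5).fit_transform(X)

# SVM classifier on the reduced coordinates.
Xtr, Xte, ytr, yte = train_test_split(X_iso, y, random_state=0)
acc = SVC(kernel="rbf").fit(Xtr, ytr).score(Xte, yte)
print(X_iso.shape, round(acc, 2))
```

Swapping Isomap for SpectralEmbedding (LE), LocallyLinearEmbedding (LLE/LTSA), PCA, or MDS reproduces the comparison grid described in the abstract.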


Subject(s)
Alloys , Aluminum , Lasers , Spectrum Analysis , Support Vector Machine
6.
Brief Bioinform ; 23(3)2022 05 13.
Article in English | MEDLINE | ID: mdl-35323901

ABSTRACT

MOTIVATION: MicroRNAs (miRNAs), as critical regulators, are involved in various fundamental and vital biological processes, and their abnormalities are closely related to human diseases. Predicting disease-related miRNAs is beneficial to uncovering new biomarkers for the prevention, detection, prognosis, diagnosis and treatment of complex diseases. RESULTS: In this study, we propose a multi-view Laplacian regularized deep factorization machine (DeepFM) model, MLRDFM, to predict novel miRNA-disease associations while improving the standard DeepFM. Specifically, MLRDFM improves DeepFM in two respects. First, MLRDFM takes the relationships among items into consideration by regularizing their embedding features via their similarity-based Laplacians; here, the miRNA Laplacian regularization integrates four types of miRNA similarity, while the disease Laplacian regularization integrates two types of disease similarity. Second, to judiciously train our model, Laplacian eigenmaps are utilized to initialize the weights in the dense embedding layer. The experimental results on the latest HMDD v3.2 dataset show that MLRDFM improves the performance and reduces the overfitting of DeepFM. Moreover, MLRDFM is greatly superior to the state-of-the-art models in miRNA-disease association prediction in terms of different evaluation metrics under 5-fold cross-validation. Case studies further demonstrate the effectiveness of MLRDFM.
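The Laplacian regularizer at the heart of such models has a simple closed form: tr(H^T L H), which equals a similarity-weighted sum of squared embedding differences, so similar items are pushed toward similar embeddings. A NumPy check on a toy similarity graph (all sizes and values are illustrative, not from MLRDFM):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy similarity graph over 6 "miRNAs" (two mutually similar triples).
S = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    S[i, j] = S[j, i] = 1.0
L = np.diag(S.sum(axis=1)) - S          # graph Laplacian of the similarity

# Embedding matrix H: one latent vector per item (e.g., a dense embedding
# table; random here just to illustrate the penalty).
H = rng.standard_normal((6, 4))

# Laplacian regularizer tr(H^T L H) = 1/2 * sum_ij S_ij ||h_i - h_j||^2:
# small exactly when similar items receive similar embeddings.
penalty = np.trace(H.T @ L @ H)
pairwise = 0.5 * sum(S[i, j] * np.sum((H[i] - H[j]) ** 2)
                     for i in range(6) for j in range(6))
print(np.isclose(penalty, pairwise))    # prints True
```

In training, this penalty is added to the prediction loss with a tunable weight, which is what "regularizing their embedding features via their similarity-based Laplacians" amounts to.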


Subject(s)
MicroRNAs , Algorithms , Computational Biology/methods , Genetic Predisposition to Disease , Humans , MicroRNAs/genetics
7.
Magn Reson Med ; 87(1): 474-487, 2022 01.
Article in English | MEDLINE | ID: mdl-34390021

ABSTRACT

PURPOSE: For in vivo cardiac DTI, breathing motion and B0 field inhomogeneities produce misalignment and geometric distortion in diffusion-weighted (DW) images acquired with conventional single-shot EPI. We propose using a dimensionality reduction method to retrospectively estimate the respiratory phase of DW images and facilitate both distortion correction (DisCo) and motion compensation. METHODS: Free-breathing electrocardiogram-triggered whole left-ventricular cardiac DTI using a second-order motion-compensated spin echo EPI sequence and alternating directionality of phase encoding blips was performed on 11 healthy volunteers. The respiratory phase of each DW image was estimated after projecting the DW images into a 2D space with Laplacian eigenmaps (LE). DisCo and motion compensation were applied to the respiratory-sorted DW images. The results were compared against conventional breath-held T2 half-Fourier single shot turbo spin echo. Cardiac DTI parameters including fractional anisotropy, mean diffusivity, and helix angle transmurality were compared with and without DisCo. RESULTS: The left-ventricular geometries after DisCo and motion compensation resulted in significantly improved alignment of DW images with the T2 reference. DisCo reduced the distance between the left-ventricular contours by 13.2% ± 19.2%, P < .05 (2.0 ± 0.4 mm for DisCo and 2.4 ± 0.5 mm for uncorrected). DisCo DTI parameter maps yielded no significant differences (mean diffusivity: 1.55 ± 0.13 × 10-3 mm2/s and 1.53 ± 0.13 × 10-3 mm2/s, P = .09; fractional anisotropy: 0.375 ± 0.041 and 0.379 ± 0.045, P = .11; helix angle transmurality: 1.00 ± 0.10°/% and 0.99 ± 0.12°/%, P = .44), although the orientation of individual tensors differed. CONCLUSION: Retrospective respiratory phase estimation with LE-based DisCo and motion compensation in free-breathing cardiac DTI resulted in significantly reduced geometric distortion and improved alignment within and across slices.
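The LE-based respiratory sorting step can be illustrated on synthetic data: 1-D "images" whose content shifts with a simulated breathing cycle are embedded into 2-D, where images acquired at similar respiratory states land close together (all geometry, sizes, and noise levels below are assumptions, not the study's data):

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(0)

# Synthetic DW "images": a 1-D profile whose position oscillates with a
# simulated respiratory cycle (purely illustrative geometry).
n_img, n_pix = 120, 64
phase = 2 * np.pi * np.arange(n_img) / 30          # ~4 breathing cycles
shift = 4 * np.sin(phase)                          # displacement in pixels
x = np.arange(n_pix)
imgs = np.exp(-(x[None, :] - 32 - shift[:, None]) ** 2 / 20)
imgs += 0.01 * rng.standard_normal(imgs.shape)     # acquisition noise

# Project every image into a 2-D Laplacian-eigenmap space; sorting along
# the embedding then yields a respiratory ordering that downstream steps
# (distortion correction, motion compensation) can operate on.
emb = SpectralEmbedding(n_components=2, n_neighbors=10,
                        random_state=0).fit_transform(imgs)
order = np.argsort(emb[:, 0])
print(emb.shape)
```

The appeal of the approach is that no external respiratory signal is needed: the embedding coordinate itself serves as a surrogate respiratory phase.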


Subject(s)
Diffusion Magnetic Resonance Imaging , Diffusion Tensor Imaging , Echo-Planar Imaging , Humans , Motion , Reproducibility of Results , Retrospective Studies
8.
Biosensors (Basel) ; 11(5)2021 May 19.
Article in English | MEDLINE | ID: mdl-34069456

ABSTRACT

Classification performances for some classes of electrocardiographic (ECG) and electroencephalographic (EEG) signals subjected to varying degrees of dimensionality reduction are investigated. Results obtained with various classification methods are given and discussed. We investigated three techniques for reducing dimensionality: Laplacian eigenmaps (LE), locality preserving projections (LPP) and compressed sensing (CS). The first two methods are related to manifold learning, while the third addresses signal acquisition and reconstruction from random projections under the assumption of signal sparsity. Our aim is to evaluate the benefits and drawbacks of the various methods and to determine to what extent each is worthwhile. The effect of dimensionality reduction was assessed by considering the classification rates for the processed biosignals in the new spaces. In addition, the classification accuracies on the initial input data were compared with the corresponding accuracies in the new spaces using different classifiers.


Subject(s)
Electrocardiography , Electroencephalography , Algorithms , Humans , Pattern Recognition, Automated
9.
Comput Biol Med ; 127: 104059, 2020 12.
Article in English | MEDLINE | ID: mdl-33171289

ABSTRACT

OBJECTIVE: Despite a long history of ECG-based monitoring of acute ischemia quantified by several widely used clinical markers, the diagnostic performance of these metrics is not yet satisfactory, motivating a data-driven approach to leverage underutilized information in the electrograms. This study introduces a novel metric for acute ischemia, created using a machine learning technique known as Laplacian eigenmaps (LE), and compares the diagnostic and temporal performance of the LE metric against traditional metrics. METHODS: The LE technique uses dimensionality reduction of simultaneously recorded time signals to map them into an abstract space in a manner that highlights the underlying signal behavior. To evaluate the performance of an electrogram-based LE metric compared to current standard approaches, we induced episodes of transient, acute ischemia in large animals and captured the electrocardiographic response using up to 600 electrodes within the intramural and epicardial domains. RESULTS: The LE metric generally detected ischemia earlier than all other approaches and with greater accuracy. Unlike other metrics derived from specific features of parts of the signals, the LE approach uses the entire signal and provides a data-driven strategy to identify features that reflect ischemia. CONCLUSION: The superior performance of the LE metric suggests there are underutilized features of electrograms that can be leveraged to detect the presence of acute myocardial ischemia earlier and more robustly than current methods. SIGNIFICANCE: The earlier detection capabilities of the LE metric on the epicardial surface provide compelling motivation to apply the same approach to ECGs recorded from the body surface.


Subject(s)
Electrocardiography , Myocardial Ischemia , Animals , Ischemia , Machine Learning , Myocardial Ischemia/diagnosis
10.
Neuroimage ; 221: 117140, 2020 11 01.
Article in English | MEDLINE | ID: mdl-32650053

ABSTRACT

There has been an increasing interest in examining organisational principles of the cerebral cortex (and subcortical regions) using different MRI features such as structural or functional connectivity. Despite the widespread interest, introductory tutorials on the underlying technique targeted at the novice neuroimager are sparse in the literature. Articles that investigate various "neural gradients" (named for the region studied: "cortical gradients," "cerebellar gradients," "hippocampal gradients," etc.; or for the feature of interest: "functional gradients," "cytoarchitectural gradients," "myeloarchitectural gradients," etc.) have increased in popularity. Thus, we believe that it is opportune to discuss what is generally meant by "gradient analysis". We introduce basic concepts in graph theory, such as graphs themselves, the degree matrix, and the adjacency matrix. We discuss how one can think about gradients of feature similarity (the similarity between timeseries in fMRI, or streamlines in tractography) using graph theory, and we extend this to explore such gradients across the whole MRI scale; from the voxel level to the whole brain level. We proceed to introduce a measure for quantifying the level of similarity in regions of interest. We propose the term "the Vogt-Bailey index" for such quantification, to pay homage to our history as a brain mapping community. We run through the techniques on sample datasets, including a brain MRI, as an example of the application of the techniques to real data, and we provide several appendices that expand upon the details. To maximise intuition, the appendices contain a didactic example describing how one could use these techniques to solve a particularly pernicious problem that one may encounter at a wedding. Accompanying the article is a tool, available in both MATLAB and Python, that enables readers to perform the analysis described in this article on their own data.
We refer readers to the graphical abstract as an overview of the analysis pipeline presented in this work.
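The graph-theoretic objects named above (the graph, its adjacency matrix, and its degree matrix) combine into the graph Laplacian, whose eigenvectors underlie gradient/eigenmap analyses. A minimal NumPy example on a hand-picked toy graph:

```python
import numpy as np

# A small undirected graph: nodes 0-3, edges as listed.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4

A = np.zeros((n, n))                 # adjacency matrix
for i, j in edges:
    A[i, j] = A[j, i] = 1

D = np.diag(A.sum(axis=1))           # degree matrix
L = D - A                            # (combinatorial) graph Laplacian

# The Laplacian's eigendecomposition: eigenvalue 0 with a constant
# eigenvector, then increasingly "wiggly" modes; in gradient analyses
# the graph is built from feature similarity rather than hand-picked.
vals, vecs = np.linalg.eigh(L)
print(np.round(vals, 3))             # the first eigenvalue is 0
```

In the neuroimaging setting, nodes are voxels or parcels and edge weights encode timeseries or streamline similarity; the low-order non-trivial eigenvectors are the "gradients".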


Subject(s)
Brain/physiology , Connectome/methods , Magnetic Resonance Imaging/methods , Models, Theoretical , Nerve Net/physiology , Adult , Brain/diagnostic imaging , Humans , Nerve Net/diagnostic imaging
11.
Inverse Probl ; 36(2)2020 Feb.
Article in English | MEDLINE | ID: mdl-32394996

ABSTRACT

Single-particle electron cryomicroscopy is an essential tool for high-resolution 3D reconstruction of proteins and other biological macromolecules. An important challenge in cryo-EM is the reconstruction of non-rigid molecules with parts that move and deform. Traditional reconstruction methods fail in these cases, resulting in smeared reconstructions of the moving parts. This poses a major obstacle for structural biologists, who need high-resolution reconstructions of entire macromolecules, moving parts included. To address this challenge, we present a new method for the reconstruction of macromolecules exhibiting continuous heterogeneity. The proposed method uses projection images from multiple viewing directions to construct a graph Laplacian through which the manifold of three-dimensional conformations is analyzed. The 3D molecular structures are then expanded in a basis of Laplacian eigenvectors, using a novel generalized tomographic reconstruction algorithm to compute the expansion coefficients. These coefficients, which we name spectral volumes, provide a high-resolution visualization of the molecular dynamics. We provide a theoretical analysis and evaluate the method empirically on several simulated data sets.

12.
Proc IEEE Int Symp Biomed Imaging ; 2020: 1715-1719, 2020 Apr.
Article in English | MEDLINE | ID: mdl-36570366

ABSTRACT

In this paper, we propose a novel approach for manifold learning that combines the earth mover's distance (EMD) with the diffusion maps method for dimensionality reduction. We demonstrate the potential benefits of this approach for learning shape spaces of proteins and other flexible macromolecules using a simulated dataset of 3-D density maps that mimic the non-uniform rotary motion of ATP synthase. Our results show that EMD-based diffusion maps require far fewer samples to recover the intrinsic geometry than the standard diffusion maps algorithm based on the Euclidean distance. To reduce the computational burden of calculating the EMD for all volume pairs, we employ a wavelet-based approximation to the EMD which reduces the computation of the pairwise EMDs to a computation of pairwise weighted-ℓ1 distances between wavelet coefficient vectors.
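The wavelet trick can be sketched in a few lines: transform each density into Haar coefficients and take a scale-weighted ℓ1 distance, so that pairwise costs need no optimal-transport solve. The weights below are illustrative, not the tuned constants from the wavelet-EMD literature, and the example is 1-D rather than volumetric:

```python
import numpy as np

def haar_levels(v):
    """Multi-level 1-D Haar transform: (level, detail) pairs, level 1 finest.
    The coarsest average is dropped, which assumes equal total mass."""
    out, level = [], 0
    while len(v) > 1:
        level += 1
        det = (v[0::2] - v[1::2]) / 2.0
        v = (v[0::2] + v[1::2]) / 2.0
        out.append((level, det))
    return out

def wemd(p, q, s=1.5):
    """Sketch of a wavelet approximation to the earth mover's distance:
    a weighted l1 norm of Haar coefficient differences, with coarser
    scales weighted more (the exponent is illustrative, not tuned)."""
    return sum(2.0 ** (s * lvl) * np.abs(cp - cq).sum()
               for (lvl, cp), (_, cq) in zip(haar_levels(p), haar_levels(q)))

# Two shifted unit-mass bumps: the farther shift should cost more.
x = np.arange(64, dtype=float)
def bump(mu):
    g = np.exp(-(x - mu) ** 2 / 8)
    return g / g.sum()

p, q_near, q_far = bump(20.0), bump(24.0), bump(44.0)
print(wemd(p, q_near) < wemd(p, q_far))   # prints True
```

Because the distance is a plain weighted ℓ1 norm on coefficient vectors, all pairwise distances reduce to cheap vector operations, which is exactly the computational saving the abstract describes.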

13.
Cereb Cortex ; 30(1): 269-282, 2020 01 10.
Article in English | MEDLINE | ID: mdl-31044223

ABSTRACT

The human precuneus is involved in many high-level cognitive functions, which strongly suggests the existence of biologically meaningful subdivisions. However, the functional parcellation of the precuneus remains to be thoroughly investigated. In this study, we developed an eigen clustering (EIC) approach for the parcellation using precuneus-cortical functional connectivity from fMRI data of the Human Connectome Project. The EIC approach is robust to noise and can automatically determine the cluster number. It is consistently demonstrated that the human precuneus can be subdivided into six symmetrical and connected parcels. The anterior and posterior precuneus participate in sensorimotor and visual functions, respectively. The central precuneus, with four subregions, plays a mediating role in the interaction of the default mode, dorsal attention, and frontoparietal control networks. The EIC-based functional parcellation is free of the spatial distance constraint and is more functionally coherent than parcellations obtained with typical clustering algorithms. The precuneus subregions accorded well with cortical morphology and revealed good functional segregation and integration characteristics in task-evoked activations. This study may shed new light on human precuneus function at a finer level and offers an alternative scheme for human brain parcellation.


Subject(s)
Connectome/methods , Parietal Lobe/anatomy & histology , Parietal Lobe/physiology , Adult , Cluster Analysis , Female , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Male , Neural Pathways/anatomy & histology , Neural Pathways/physiology , Young Adult
14.
Comput Biol Med ; 114: 103439, 2019 11.
Article in English | MEDLINE | ID: mdl-31550555

ABSTRACT

This paper presents SpCLUST, a new C++ package that takes a list of sequences as input, aligns them with MUSCLE, computes their similarity matrix in parallel and then performs the clustering. SpCLUST extends a previously released software package by integrating additional scoring matrices, which enables it to cover the clustering of amino-acid sequences. The similarity matrix is now computed in parallel according to a master/slave distributed architecture, using MPI. Performance analyses, carried out on two real datasets of 100 nucleotide sequences and 1049 amino-acid sequences, show that the resulting library substantially outperforms the original Python package. The proposed package was also extensively evaluated on simulated and real genomic and protein data sets. The clustering results were compared to those of the best-known traditional tools, such as UCLUST, CD-HIT and DNACLUST. The comparison showed that SpCLUST outperforms the other tools when clustering divergent sequences and, contrary to the others, it does not require any user intervention or prior knowledge about the input sequences.


Subject(s)
Cluster Analysis , DNA , Genomics/methods , Sequence Analysis, DNA/methods , Software , Algorithms , DNA/classification , DNA/genetics , Humans
15.
Sensors (Basel) ; 19(13)2019 Jul 01.
Article in English | MEDLINE | ID: mdl-31266234

ABSTRACT

Magnetic resonance (MR) images are often corrupted by Rician noise which degrades the accuracy of image-based diagnosis tasks. The nonlocal means (NLM) method is a representative filter in denoising MR images due to its competitive denoising performance. However, the existing NLM methods usually exploit the gray-level information or hand-crafted features to evaluate the similarity between image patches, which is disadvantageous for preserving the image details while smoothing out noise. In this paper, an improved nonlocal means method is proposed for removing Rician noise in MR images by using the refined similarity measures. The proposed method firstly extracts the intrinsic features from the pre-denoised image using a shallow convolutional neural network named Laplacian eigenmaps network (LEPNet). Then, the extracted features are used for computing the similarity in the NLM method to produce the denoised image. Finally, the method noise of the denoised image is utilized to further improve the denoising performance. Specifically, the LEPNet model is composed of two cascaded convolutional layers and a nonlinear output layer, in which the Laplacian eigenmaps are employed to learn the filter bank in the convolutional layers and the Leaky Rectified Linear Unit activation function is used in the final output layer to output the nonlinear features. Due to the advantage of LEPNet in recovering the geometric structure of the manifold in the low-dimension space, the features extracted by this network can facilitate characterizing the self-similarity better than the existing NLM methods. Experiments have been performed on the BrainWeb phantom and the real images. Experimental results demonstrate that among several compared denoising methods, the proposed method can provide more effective noise removal and better details preservation in terms of human vision and such objective indexes as peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM).

16.
Sensors (Basel) ; 19(3)2019 Jan 27.
Article in English | MEDLINE | ID: mdl-30691205

ABSTRACT

Brushless direct current (BLDC) motors are the source of flight power during the operation of rotary-wing unmanned aerial vehicles (UAVs), and their working state directly affects the safety of the whole system. To predict and avoid motor faults, it is necessary to accurately understand the health degradation process of the motor before any fault occurs. However, in actual working conditions, due to the aerodynamic environmental conditions of the aircraft flight, the background noise components of the vibration signals characterizing the running state of the motor are complex and severely coupled, making it difficult for the weak degradation characteristics to be clearly reflected. To address these problems, a weak degradation characteristic extraction method based on variational mode decomposition (VMD) and Laplacian Eigenmaps (LE) was proposed in this study to precisely identify the degradation information in system health data, avoid the loss of critical information and the interference of redundant information, and to optimize the description of a motor's degradation process despite the presence of complex background noise. A validation experiment was conducted on a specific type of motor under operation with load, to obtain the degradation characteristics of multiple types of vibration signals, and to test the proposed method. The results proved that this method can improve the stability and accuracy of predicting motor health, thereby helping to predict the degradation state and to optimize the maintenance strategies.

17.
J Phon ; 71: 355-375, 2018 Nov.
Article in English | MEDLINE | ID: mdl-31439969

ABSTRACT

Low-dimensional representations of speech data, such as formant values extracted by linear predictive coding analysis or spectral moments computed from whole spectra viewed as probability distributions, have been instrumental in both phonetic and phonological analyses over the last few decades. In this paper, we present a framework for computing low-dimensional representations of speech data based on two assumptions: that speech data represented in high-dimensional data spaces lie on shapes called manifolds that can be used to map speech data to low-dimensional coordinate spaces, and that manifolds underlying speech data are generated from a combination of language-specific lexical, phonological, and phonetic information as well as culture-specific socio-indexical information that is expressed by talkers of a given speech community. We demonstrate the basic mechanics of the framework by carrying out an analysis of children's productions of sibilant fricatives relative to those of adults in their speech community using the phoneigen package - a publicly available implementation of the framework. We focus the demonstration on enumerating the steps for constructing manifolds from data and then using them to map the data to a low-dimensional space, explicating how manifold structure affects the learned low-dimensional representations, and comparing the use of these representations against standard acoustic features in a phonetic analysis. We conclude with a discussion of the framework's underlying assumptions, its broader modeling potential, and its position relative to recent advances in the field of representation learning.

18.
Neuroimage ; 169: 363-373, 2018 04 01.
Article in English | MEDLINE | ID: mdl-29246846

ABSTRACT

Independent component analysis (ICA) is a data-driven method that has been increasingly used for analyzing functional Magnetic Resonance Imaging (fMRI) data. However, generalizing ICA to multi-subject studies is non-trivial due to the high-dimensionality of the data, the complexity of the underlying neuronal processes, the presence of various noise sources, and inter-subject variability. Current group ICA based approaches typically use several forms of the Principal Component Analysis (PCA) method to extend ICA for generating group inferences. However, linear dimensionality reduction techniques have serious limitations including the fact that the underlying BOLD signal is a complex function of several nonlinear processes. In this paper, we propose an effective non-linear ICA-based model for extracting group-level spatial maps from multi-subject fMRI datasets. We use a non-linear dimensionality reduction algorithm based on Laplacian eigenmaps to identify a manifold subspace common to the group, such that this mapping preserves the correlation among voxels' time series as much as possible. These eigenmaps are modeled as linear mixtures of a set of group-level spatial features, which are then extracted using ICA. The resulting algorithm is called LEICA (Laplacian Eigenmaps for group ICA decomposition). We introduce a number of methods to evaluate LEICA using 100-subject resting state and 100-subject working memory task fMRI datasets from the Human Connectome Project (HCP). The test results show that the extracted spatial maps from LEICA are meaningful functional networks similar to those produced by some of the best known methods. Importantly, relative to state-of-the-art methods, our algorithm compares favorably in terms of the functional cohesiveness of the spatial maps generated, as well as in terms of the reproducibility of the results.


Subject(s)
Brain/diagnostic imaging; Functional Neuroimaging/methods; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Models, Theoretical; Nerve Net/diagnostic imaging; Adult; Brain/physiology; Functional Neuroimaging/standards; Humans; Image Processing, Computer-Assisted/standards; Magnetic Resonance Imaging/standards; Nerve Net/physiology; Reproducibility of Results
19.
Magn Reson Med; 74(3): 868-878, 2015 Sep.
Article in English | MEDLINE | ID: mdl-25199640

ABSTRACT

PURPOSE: To investigate whether nonlinear dimensionality reduction improves unsupervised classification of ¹H MRS brain tumor data compared with a linear method. METHODS: In vivo single-voxel ¹H magnetic resonance spectroscopy (55 patients) and ¹H magnetic resonance spectroscopic imaging (MRSI) (29 patients) data were acquired from histopathologically diagnosed gliomas. Data reduction using Laplacian eigenmaps (LE) or independent component analysis (ICA) was followed by k-means clustering or agglomerative hierarchical clustering (AHC) for unsupervised learning, to assess tumor grade and for tissue-type segmentation of MRSI data. RESULTS: An accuracy of 93% in classification of glioma grade II and grade IV, with 100% accuracy in distinguishing tumor and normal spectra, was obtained by LE with unsupervised clustering, but not with the combination of k-means and ICA. With ¹H MRSI data, LE provided a more linear distribution of data for cluster analysis and better cluster stability than ICA. LE combined with k-means or AHC provided 91% accuracy for classifying tumor grade and 100% accuracy for identifying normal tissue voxels. Color-coded visualization of normal brain, tumor core, and infiltration regions was achieved with LE combined with AHC. CONCLUSION: The LE method is promising for unsupervised clustering to separate brain and tumor tissue, with automated color-coding for visualization of ¹H MRSI data after cluster analysis.
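The LE-plus-clustering pipeline this abstract evaluates can be sketched on synthetic spectra; the two "tissue types" below are artificial stand-ins for normal and tumor spectra, and all parameters are illustrative:

```python
# Sketch of Laplacian eigenmaps followed by unsupervised clustering,
# on synthetic stand-ins for normal vs. tumor spectra.
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.cluster import KMeans, AgglomerativeClustering

rng = np.random.default_rng(2)
normal = rng.normal(0.0, 0.5, size=(30, 50))  # 30 normal-like spectra
tumor = rng.normal(2.0, 0.5, size=(30, 50))   # 30 tumor-like spectra
spectra = np.vstack([normal, tumor])

# Non-linear reduction to a 3-D manifold coordinate space
coords = SpectralEmbedding(n_components=3, n_neighbors=8,
                           random_state=2).fit_transform(spectra)

# Cluster in the embedded space, as in the paper's LE + k-means / LE + AHC
labels_km = KMeans(n_clusters=2, n_init=10, random_state=2).fit_predict(coords)
labels_ahc = AgglomerativeClustering(n_clusters=2).fit_predict(coords)
```

The cluster labels could then be color-coded per voxel for the segmentation visualization the paper describes.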


Subject(s)
Cluster Analysis; Magnetic Resonance Imaging/methods; Magnetic Resonance Spectroscopy/methods; Nonlinear Dynamics; Adult; Algorithms; Brain Neoplasms/chemistry; Brain Neoplasms/pathology; Humans; Pattern Recognition, Automated
20.
Neuroimage; 94: 275-286, 2014 Jul 01.
Article in English | MEDLINE | ID: mdl-24657351

ABSTRACT

We propose a framework for feature extraction from learned low-dimensional subspaces that represent inter-subject variability. The manifold subspace is built from data-driven regions of interest (ROIs). The regions are learned via sparse regression using the mini-mental state examination (MMSE) score as an independent variable, which correlates better with the actual disease stage than a discrete class label. The sparse regression is used to perform variable selection, along with a re-sampling scheme to reduce sampling bias. We then use the learned manifold coordinates to perform visualization and classification of the subjects. Results of the proposed approach are shown using the ADNI and ADNI-GO datasets. Three types of classification techniques, including a new MRI Disease-State-Score (MRI-DSS) classifier, are tested in conjunction with two learning strategies. In the first, Alzheimer's disease (AD) and progressive mild cognitive impairment (pMCI) subjects were grouped together, as were cognitively normal (CN) and stable mild cognitive impairment (sMCI) subjects. In the second, the classifiers are learned using the original class labels (with no grouping). We show results that are comparable to other state-of-the-art methods, including a classification rate of 71% between arguably the most clinically relevant groups, sMCI and pMCI. Additionally, we present a classification accuracy of 65% between CN and early MCI (eMCI) subjects from the ADNI-GO dataset. To our knowledge, this is the first time classification accuracies for eMCI patients have been reported.
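The sparse-regression variable-selection step (regressing a continuous clinical score on imaging features so that only a few data-driven ROIs survive) can be sketched as below; the feature counts, the lasso penalty, and the planted "true" ROIs are all illustrative assumptions, not the paper's setup:

```python
# Sketch of sparse-regression ROI selection against a continuous score
# (the paper uses MMSE). Data and the planted ROIs are synthetic.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n_subjects, n_features = 80, 200
X = rng.normal(size=(n_subjects, n_features))

# Make the score depend on only a few features, mimicking disease-relevant ROIs
true_rois = [5, 42, 117]
score = X[:, true_rois].sum(axis=1) + rng.normal(0, 0.1, n_subjects)

# L1 penalty zeroes out irrelevant features; nonzero coefficients are the
# data-driven ROI candidates
lasso = Lasso(alpha=0.1).fit(X, score)
selected = np.flatnonzero(lasso.coef_)
```

The paper additionally re-samples and repeats this selection to reduce sampling bias before building the manifold from the retained regions.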


Subject(s)
Alzheimer Disease/diagnosis; Artificial Intelligence; Brain/pathology; Cognitive Dysfunction/diagnosis; Magnetic Resonance Imaging/methods; Models, Statistical; Pattern Recognition, Automated/methods; Aged; Aged, 80 and over; Algorithms; Alzheimer Disease/epidemiology; Causality; Cognitive Dysfunction/epidemiology; Comorbidity; Computer Simulation; Female; Humans; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Male; Middle Aged; Neuroimaging/methods; Reproducibility of Results; Sensitivity and Specificity