Results 1 - 20 of 175
1.
NMR Biomed ; : e5248, 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39231762

ABSTRACT

Slice-to-volume registration and super-resolution reconstruction are commonly used to generate 3D volumes of the fetal brain from 2D stacks of slices acquired in multiple orientations. A critical initial step in this pipeline is to select the stack with the minimum motion among all input stacks as a reference for registration. An accurate and unbiased motion assessment (MA) is thus crucial for successful selection. Here, we present an MA method that determines the minimum-motion stack based on 3D low-rank approximation using CANDECOMP/PARAFAC (CP) decomposition. Unlike the current 2D singular value decomposition (SVD) based method, which requires flattening stacks into matrices to obtain ranks and thereby loses spatial information, the CP-based method can factorize a 3D stack into low-rank and sparse components in a computationally efficient manner. The difference between the original stack and its low-rank approximation is proposed as the motion indicator. Experiments on linearly and randomly simulated motion showed that CP has higher sensitivity in detecting small motion with a lower baseline bias, and achieved a higher assessment accuracy of 95.45% in identifying the minimum-motion stack, compared with 58.18% for the SVD-based method. CP also showed superior motion assessment capabilities in real-data evaluations. Additionally, combining CP with the existing SRR-SVR pipeline significantly improved 3D volume reconstruction. These results indicate that the proposed CP-based method outperforms SVD-based methods, with higher motion sensitivity, higher assessment accuracy, and lower baseline bias, and can serve as a prior step to improve fetal brain reconstruction.
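The low-rank-residual idea behind this motion indicator can be sketched in a few lines of NumPy. This is a plain CP-ALS on a 3-way stack; the paper's exact CP variant and its low-rank-plus-sparse split may differ in detail, and `motion_score` is an illustrative name, not the authors' code.

```python
import numpy as np

def khatri_rao(A, B):
    # Column-wise Kronecker product: (I, R) and (J, R) -> (I*J, R)
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def unfold(T, mode):
    # Mode-n unfolding, consistent (C-order) with khatri_rao above
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def cp_als(T, rank, n_iter=100, seed=0):
    # Plain alternating least squares for a 3-way CP decomposition
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((s, rank)) for s in T.shape]
    for _ in range(n_iter):
        for m in range(3):
            others = [factors[i] for i in range(3) if i != m]
            kr = khatri_rao(others[0], others[1])
            gram = (others[0].T @ others[0]) * (others[1].T @ others[1])
            factors[m] = unfold(T, m) @ kr @ np.linalg.pinv(gram)
    return factors

def motion_score(stack, rank=3):
    # Motion indicator: relative norm of what the low-rank CP model cannot explain
    A, B, C = cp_als(stack, rank)
    residual = stack - np.einsum('ir,jr,kr->ijk', A, B, C)
    return np.linalg.norm(residual) / np.linalg.norm(stack)
```

The stack with the smallest `motion_score` would then be chosen as the registration reference.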

2.
BioData Min ; 17(1): 30, 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39232802

ABSTRACT

BACKGROUND: Identifying critical genes is important for understanding the pathogenesis of complex diseases. Traditional studies typically compare changes in biomolecules between normal and disease samples or detect important vertices in a single static biomolecular network, approaches that overlook the dynamic changes occurring between different disease stages. Investigating temporal changes in biomolecular networks and identifying critical genes is therefore essential for understanding the occurrence and development of diseases. METHODS: A novel method called Quantifying Importance of Genes with Tensor Decomposition (QIGTD) is proposed in this study. It first constructs a time-series network by integrating both intra- and inter-temporal network information, preserving connections between networks at adjacent stages according to local similarities. A tensor is employed to describe the connections of this time-series network, and a 3-order tensor decomposition method is proposed to capture both the topological information of each network snapshot and the time-series characteristics of the whole network. QIGTD is also a learning-free and efficient method that can be applied to datasets with a small number of samples. RESULTS: The effectiveness of QIGTD was evaluated using lung adenocarcinoma (LUAD) datasets, with three state-of-the-art methods, T-degree, T-closeness, and T-betweenness, employed as benchmarks. Numerical experiments demonstrate that QIGTD outperforms these methods in terms of both precision and mAP. Notably, of the top 50 genes, 29 have been verified to be highly related to LUAD according to the DisGeNET database, and 36 are significantly enriched in LUAD-related Gene Ontology (GO) terms, including nuclear division, mitotic nuclear division, chromosome segregation, organelle fission, and mitotic sister chromatid segregation.
CONCLUSION: In conclusion, QIGTD effectively captures the temporal changes in gene networks and identifies critical genes. It provides a valuable tool for studying temporal dynamics in biological networks and can aid in understanding the underlying mechanisms of diseases such as LUAD.
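As a hedged illustration of scoring vertices in a stacked time-series network: QIGTD's actual 3-order decomposition is more elaborate, but the basic idea of ranking genes by their weight in a dominant factor of the network tensor can be sketched with a plain SVD of the gene-mode unfolding standing in for it. `gene_scores` is an illustrative name, not part of QIGTD.

```python
import numpy as np

def gene_scores(adj_stack):
    # adj_stack: (stages, genes, genes) stage-wise adjacency matrices.
    # Build the genes x genes x stages tensor and score each gene by its
    # weight in the leading left singular vector of the gene-mode unfolding.
    T = np.transpose(adj_stack, (1, 2, 0))
    U, _, _ = np.linalg.svd(T.reshape(T.shape[0], -1), full_matrices=False)
    return np.abs(U[:, 0])
```

A gene that stays strongly connected across stages receives a high score, mirroring the intuition that critical genes persist in the dominant structure of the time-series network.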

3.
J Med Signals Sens ; 14: 16, 2024.
Article in English | MEDLINE | ID: mdl-39100745

ABSTRACT

In the past decade, tensors have become increasingly attractive in various areas of signal and image processing. The main reason is the inefficiency of matrices in representing and analyzing multimodal and multidimensional datasets: matrices cannot preserve the multidimensional correlation of elements in higher-order datasets, which greatly reduces the effectiveness of matrix-based approaches. Tensor-based approaches, by contrast, have demonstrated promising performance, and together these factors have encouraged researchers to move from matrices to tensors. Among signal and image processing applications, analyzing biomedical signals and images is of particular importance, owing to the need to extract accurate information from biomedical datasets that directly affects patients' health. In addition, in many cases several datasets are recorded simultaneously from a patient; a common example is recording electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) of a patient with schizophrenia. In such situations, tensors are among the most effective tools for the simultaneous exploitation of two (or more) datasets. Accordingly, several tensor-based methods have been developed for analyzing biomedical datasets. In this paper, we aim to provide a comprehensive review of tensor-based methods in biomedical image analysis. The presented classification of methods and applications shows the importance of tensors in biomedical image enhancement and opens new avenues for future studies.

4.
bioRxiv ; 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39131377

ABSTRACT

Effective tools for exploration and analysis are needed to extract insights from large-scale single-cell measurement data. However, current techniques for handling single-cell studies performed across experimental conditions (e.g., samples, perturbations, or patients) require restrictive assumptions, lack flexibility, or do not adequately deconvolute condition-to-condition variation from cell-to-cell variation. Here, we report that the tensor decomposition method PARAFAC2 (Pf2) enables the dimensionality reduction of single-cell data across conditions. We demonstrate these benefits across two distinct contexts of single-cell RNA-sequencing (scRNA-seq) experiments of peripheral immune cells: pharmacologic drug perturbations and systemic lupus erythematosus (SLE) patient samples. By isolating relevant gene modules across cells and conditions, Pf2 enables straightforward associations of gene variation patterns across specific patients or perturbations while connecting each coordinated change to certain cells without pre-defining cell types. The theoretical grounding of Pf2 suggests a unified framework for many modeling tasks associated with single-cell data. Thus, Pf2 provides an intuitive universal dimensionality reduction approach for multi-sample single-cell studies across diverse biological contexts.

5.
Brain Commun ; 6(4): fcae227, 2024.
Article in English | MEDLINE | ID: mdl-39086629

ABSTRACT

Electrophysiologic disturbances due to neurodegenerative disorders such as Alzheimer's disease and Lewy body disease are detectable by scalp EEG and can serve as a functional measure of disease severity. Traditional quantitative methods of EEG analysis often require an a priori selection of clinically meaningful EEG features and are susceptible to bias, limiting the clinical utility of routine EEGs in the diagnosis and management of neurodegenerative disorders. We present a data-driven tensor decomposition approach to extract the top six spectral and spatial features representing commonly known sources of EEG activity during eyes-closed wakefulness. As part of their neurologic evaluation at Mayo Clinic, 11 001 patients underwent 12 176 routine, standard 10-20 scalp EEG studies. From these raw EEGs, we developed an algorithm based on posterior alpha activity and eye movement to automatically select awake-eyes-closed epochs and estimated average spectral power density (SPD) between 1 and 45 Hz for each channel. We then created a three-dimensional (3D) tensor (record × channel × frequency) and applied a canonical polyadic decomposition to extract the top six factors. We further identified an independent cohort of patients meeting consensus criteria for mild cognitive impairment (30) or dementia (39) due to Alzheimer's disease and dementia with Lewy bodies (31), along with similarly aged cognitively normal controls (36). We evaluated the ability of the six factors to differentiate these subgroups using a Naïve Bayes classification approach and assessed linear associations between factor loadings and Kokmen short test of mental status scores, fluorodeoxyglucose (FDG) PET uptake ratios, and CSF Alzheimer's disease biomarker measures. Factors represented biologically meaningful brain activities, including the posterior alpha rhythm, anterior delta/theta rhythms, and centroparietal beta, which correlated with patient age and EEG dysrhythmia grade. These factors were also able to distinguish patients from controls with a moderate to high degree of accuracy (area under the curve (AUC) 0.59-0.91) and Alzheimer's disease dementia from dementia with Lewy bodies (AUC 0.61). Furthermore, relevant EEG features correlated with cognitive test performance, PET metabolism, and CSF Aβ42 measures in the Alzheimer's subgroup. This study demonstrates that data-driven approaches can extract biologically meaningful features from population-level clinical EEGs without artefact rejection or a priori selection of channels or frequency bands. With continued development, such data-driven methods may improve the clinical utility of EEG in memory care by assisting in the early identification of mild cognitive impairment and differentiating between neurodegenerative causes of cognitive impairment.
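The tensor-construction step described above (average spectral power per record, channel, and frequency) can be sketched with NumPy's FFT. The simple periodogram below is a stand-in for the authors' SPD estimator, and `spectral_tensor` is an illustrative name; a canonical polyadic decomposition would then be applied to the returned tensor to extract spectral and spatial factors.

```python
import numpy as np

def spectral_tensor(eeg, fs, fmin=1.0, fmax=45.0):
    # eeg: (records, channels, samples); fs: sampling rate in Hz.
    # Returns a (record x channel x frequency) tensor of periodogram power
    # restricted to the fmin-fmax band, plus the retained frequencies.
    n = eeg.shape[-1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg, axis=-1)) ** 2 / (fs * n)
    band = (freqs >= fmin) & (freqs <= fmax)
    return psd[..., band], freqs[band]
```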

6.
Front Neuroinform ; 18: 1399391, 2024.
Article in English | MEDLINE | ID: mdl-39188665

ABSTRACT

Task-evoked functional magnetic resonance imaging studies, such as the Human Connectome Project (HCP), are a powerful tool for exploring how brain activity is influenced by cognitive tasks like memory retention, decision-making, and language processing. A fast Bayesian function-on-scalar model is proposed for estimating population-level activation maps linked to the working memory task. The model is based on the canonical polyadic (CP) tensor decomposition of coefficient maps obtained for each subject. This decomposition effectively yields a tensor basis capable of extracting both common features and subject-specific features from the coefficient maps. These subject-specific features, in turn, are modeled as a function of covariates of interest using a Bayesian model that accounts for the correlation of the CP-extracted features. The dimensionality reduction achieved with the tensor basis allows for a fast MCMC estimation of population-level activation maps. This model is applied to one hundred unrelated subjects from the HCP dataset, yielding significant insights into brain signatures associated with working memory.

7.
Sensors (Basel) ; 24(16)2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39204977

ABSTRACT

Bayesian tensor decomposition has been widely applied to channel parameter estimation, particularly in the presence of interference. However, the types of interference are not considered in Bayesian tensor decomposition, making it difficult to estimate the interference parameters accurately. In this paper, we present a robust tensor variational method using a CANDECOMP/PARAFAC (CP)-based additive interference model for multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems. A more realistic interference model than traditional colored noise is considered, comprising co-channel interference (CCI) and front-end interference (FEI). In contrast to conventional algorithms that filter out interference, the proposed method jointly estimates the channel and interference parameters in the time-frequency domain. Simulation results validate the correctness of the proposed method via the evidence lower bound (ELBO) and show that it outperforms traditional information-theoretic methods, tensor decomposition models, and the robust CP-based model (RCP) in terms of estimation accuracy. Furthermore, the interference parameter estimation technique has profound implications for anti-interference applications and dynamic spectrum allocation.

8.
Entropy (Basel) ; 26(8)2024 Aug 17.
Article in English | MEDLINE | ID: mdl-39202167

ABSTRACT

The Parallel Factor Analysis 2 (PARAFAC2) is a multimodal factor analysis model suitable for analyzing multi-way data when one of the modes has incomparable observation units, for example, because of differences in signal sampling or batch sizes. A fully probabilistic treatment of the PARAFAC2 is desirable to improve robustness to noise and provide a principled approach for determining the number of factors, but challenging because direct model fitting requires that factor loadings be decomposed into a shared matrix specifying how the components are consistently co-expressed across samples and sample-specific orthogonality-constrained component profiles. We develop two probabilistic formulations of the PARAFAC2 model along with variational Bayesian procedures for inference: in the first approach, the mean values of the factor loadings are orthogonal, leading to closed-form variational updates, and in the second, the factor loadings themselves are orthogonal, using a matrix von Mises-Fisher distribution. We contrast our probabilistic formulations with the conventional direct fitting algorithm based on maximum likelihood on synthetic data and on real fluorescence spectroscopy and gas chromatography-mass spectrometry data, showing that the probabilistic formulations are more robust to noise and model order misspecification. The probabilistic PARAFAC2 thus forms a promising framework for modeling multi-way data while accounting for uncertainty.
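The conventional direct-fitting baseline that the probabilistic formulations are contrasted against can be sketched as follows. `parafac2_direct` is an illustrative name; the scheme is the classic alternating least squares with an orthogonal-Procrustes step for each sample-specific, orthogonality-constrained profile, not the authors' variational code.

```python
import numpy as np

def khatri_rao(A, B):
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def cp_sweep(T, factors):
    # One alternating-least-squares sweep of a 3-way CP model
    for m in range(3):
        others = [factors[i] for i in range(3) if i != m]
        kr = khatri_rao(others[0], others[1])
        gram = (others[0].T @ others[0]) * (others[1].T @ others[1])
        factors[m] = unfold(T, m) @ kr @ np.linalg.pinv(gram)
    return factors

def parafac2_direct(slabs, rank, n_iter=200, seed=0):
    # Each slab X_k (J_k x I) is modelled as P_k @ F @ diag(D[k]) @ A.T with
    # P_k column-orthonormal: a shared matrix plus sample-specific profiles.
    rng = np.random.default_rng(seed)
    I, K = slabs[0].shape[1], len(slabs)
    F, A, D = np.eye(rank), rng.standard_normal((I, rank)), np.ones((K, rank))
    for _ in range(n_iter):
        Ps = []
        for k, X in enumerate(slabs):
            # Orthogonal Procrustes step for the sample-specific profile P_k
            U, _, Vt = np.linalg.svd(X @ A @ np.diag(D[k]) @ F.T,
                                     full_matrices=False)
            Ps.append(U @ Vt)
        # Project slabs to a common rank x I x K tensor; update F, A, D by CP
        Y = np.stack([P.T @ X for P, X in zip(Ps, slabs)], axis=2)
        F, A, D = cp_sweep(Y, [F, A, D])
    return Ps, F, D, A
```

Note that the slabs may have different numbers of rows (the "incomparable observation units"), which is exactly the setting PARAFAC2 handles and CP does not.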

9.
Neural Netw ; 179: 106531, 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-39029296

ABSTRACT

As an effective strategy for reducing noisy and redundant information in hyperspectral imagery (HSI), hyperspectral band selection aims to select a subset of the original hyperspectral bands, which benefits subsequent tasks. In this paper, we introduce a multi-dimensional high-order structure preserved clustering method for hyperspectral band selection, referred to as MHSPC. Regarding the original hyperspectral image as a tensor cube, we apply the tensor CP (CANDECOMP/PARAFAC) decomposition to exploit the multi-dimensional structural information and generate a low-dimensional latent feature representation. To capture the local geometrical structure along the spectral dimension, a graph regularizer is imposed on the new feature representation in the lower-dimensional space. In addition, since low-rankness is an important global property of HSIs, we place a nuclear norm constraint on the latent feature representation matrix to capture global data structure information. Unlike most previous clustering-based hyperspectral band selection methods, which vectorize each band without considering the 2-D spatial information, the proposed MHSPC effectively captures the spatial structure as well as the spectral correlation of the original hyperspectral cube from both local and global perspectives. An efficient alternating updating algorithm with a theoretical convergence guarantee is designed to solve the resulting optimization problem, and extensive experimental results on four benchmark datasets validate the effectiveness of MHSPC over other state-of-the-art methods.

10.
Neuroimage ; 295: 120667, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-38825216

ABSTRACT

Executive functions are essential for adaptive behavior. One executive function is so-called 'interference control' or conflict monitoring; another is inhibitory control (i.e., action restraint and action cancelation). Recent evidence suggests an interplay of these processes, which is conceptually relevant given that newer frameworks imply that nominally different action/response control processes are explainable by a small set of cognitive and neurophysiological processes. The existence of such overarching neural principles has not yet been directly examined. In the current study, we therefore use EEG tensor decomposition methods to look for common neurophysiological signatures underlying conflict-modulated action restraint and action cancelation as mechanisms underlying response inhibition. We show how conflicts differentially modulate action restraint and action cancelation processes and delineate common and distinct neural processes underlying this interplay. Concerning the spatial information, modulations are similar in the importance of processes reflected at parieto-occipital electrodes, suggesting that attentional selection processes play a role; theta and alpha activity, in particular, seem to play important roles. The data also show that tensor decomposition is sensitive to the manner of task implementation, suggesting that switch probability/transitional probabilities should be taken into consideration when choosing tensor decomposition as the analysis method. The study provides a blueprint for using tensor decomposition methods to delineate common and distinct neural mechanisms underlying action control functions with EEG data.


Subjects
Psychological Conflict, Electroencephalography, Executive Function, Humans, Electroencephalography/methods, Male, Executive Function/physiology, Female, Adult, Young Adult, Brain/physiology, Psychological Inhibition, Psychomotor Performance/physiology
11.
Article in English | MEDLINE | ID: mdl-38935246

ABSTRACT

PURPOSE: Parkinson disease (PD) is a common progressive neurodegenerative disorder in our ageing society. Early-stage PD biomarkers are desired for timely clinical intervention and understanding of pathophysiology. Since one of the characteristics of PD is the progressive loss of dopaminergic neurons in the substantia nigra pars compacta, we propose a feature extraction method for analysing the differences in the substantia nigra between PD and non-PD patients. METHOD: We propose a feature-extraction method for volumetric images based on a rank-1 tensor decomposition. Furthermore, we apply a feature selection method that excludes common features between PD and non-PD. We collect neuromelanin images of 263 patients: 124 PD and 139 non-PD patients and divide them into training and testing datasets for experiments. We then experimentally evaluate the classification accuracy of the substantia nigra between PD and non-PD patients using the proposed feature extraction method and linear discriminant analysis. RESULTS: The proposed method achieves a sensitivity of 0.72 and a specificity of 0.64 for our testing dataset of 66 non-PD and 42 PD patients. Furthermore, we visualise the important patterns in the substantia nigra by a linear combination of rank-1 tensors with selected features. The visualised patterns include the ventrolateral tier, where the severe loss of neurons can be observed in PD. CONCLUSIONS: We develop a new feature-extraction method for the analysis of the substantia nigra towards PD diagnosis. In the experiments, even though the classification accuracy with the proposed feature extraction method and linear discriminant analysis is lower than that of expert physicians, the results suggest the potential of tensorial feature extraction.
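The rank-1 building block used for feature extraction above can be illustrated with an alternating power iteration: the best rank-1 approximation of a 3-way volume. This is a generic sketch under the assumption of a standard higher-order power method; the paper's exact decomposition and feature selection are not reproduced, and `rank1_approx` is an illustrative name.

```python
import numpy as np

def rank1_approx(T, n_iter=100):
    # Best rank-1 approximation of a 3-way volume via alternating power
    # iteration: T is approximated by lam * (a outer b outer c) with
    # unit-norm factor vectors a, b, c.
    a, b, c = (np.ones(s) / np.sqrt(s) for s in T.shape)
    for _ in range(n_iter):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
    lam = np.einsum('ijk,i,j,k->', T, a, b, c)
    return lam, a, b, c
```

A linear combination of such rank-1 terms, as in the paper, then yields spatial patterns that can be visualised over the substantia nigra.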

12.
Brief Bioinform ; 25(3)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38605639

ABSTRACT

The accurate identification of disease-associated genes is crucial for understanding the molecular mechanisms underlying various diseases. Most current methods focus on constructing biological networks and utilizing machine learning, particularly deep learning, to identify disease genes. However, these methods overlook complex relations among entities in biological knowledge graphs. Such information has been successfully applied in other areas of life science research, demonstrating its effectiveness. Knowledge graph embedding methods can learn the semantic information of different relations within knowledge graphs. Nonetheless, the performance of existing representation learning techniques, when applied to domain-specific biological data, remains suboptimal. To solve these problems, we construct a biological knowledge graph centered on diseases and genes, and develop an end-to-end knowledge graph completion framework for disease gene prediction, named KDGene, based on interactional tensor decomposition. KDGene incorporates an interaction module that bridges entity and relation embeddings within tensor decomposition, aiming to improve the representation of semantically similar concepts in specific domains and enhance the ability to accurately predict disease genes. Experimental results show that KDGene significantly outperforms state-of-the-art algorithms, both existing disease gene prediction methods and general-domain knowledge graph embedding methods. Moreover, comprehensive biological analysis of the predicted results further validates KDGene's capability to accurately identify new candidate genes. This work proposes a scalable knowledge graph completion framework for identifying disease candidate genes, whose results are promising as references for further wet-lab experiments. Data and source codes are available at https://github.com/2020MEAI/KDGene.
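The trilinear scoring at the heart of CP-style knowledge graph completion can be sketched as below. This is a DistMult-like scorer given as a minimal baseline; KDGene's interaction module between entity and relation embeddings is richer than this, and the function names are illustrative.

```python
import numpy as np

def score_tails(E, R, head, rel):
    # Trilinear CP-style score of every entity as the tail of (head, rel, ?):
    # score(t) = sum_d E[head, d] * R[rel, d] * E[t, d]
    return E @ (E[head] * R[rel])

def top_candidate_genes(E, R, disease, rel, k=5):
    # Rank candidate tail entities (e.g. genes) for a disease-relation query
    return np.argsort(score_tails(E, R, disease, rel))[::-1][:k]
```

In a trained model, `E` and `R` would be learned embeddings; candidate disease genes are then the highest-scoring tails for a (disease, associated-with, ?) query.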


Subjects
Biological Science Disciplines, Automated Pattern Recognition, Algorithms, Machine Learning, Semantics
13.
Sci Rep ; 14(1): 9098, 2024 Apr 20.
Article in English | MEDLINE | ID: mdl-38643209

ABSTRACT

Tucker decomposition is widely used for image representation, data reconstruction, and machine learning tasks, but the cost of updating the Tucker core is high. The bilevel form of triple decomposition (TriD) overcomes this issue by decomposing the Tucker core into three low-dimensional third-order factor tensors and plays an important role in dimension reduction for data representation. TriD, however, is incapable of precisely encoding similarity relationships for tensor data with a complex manifold structure. To address this shortcoming, we take advantage of hypergraph learning and propose a novel hypergraph-regularized nonnegative triple decomposition for multiway data analysis that employs a hypergraph to model the complex relationships among the raw data. Furthermore, we develop a multiplicative update algorithm to solve the optimization problem and theoretically prove its convergence. Finally, we perform extensive numerical tests on six real-world datasets, and the results show that the proposed algorithm outperforms several state-of-the-art methods.
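A Tucker model of the kind TriD starts from can be obtained with a truncated higher-order SVD (HOSVD). The sketch below stops at the Tucker core and omits TriD's further split of the core into three factor tensors, the nonnegativity constraints, and the hypergraph regularization; `hosvd` is an illustrative name.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding of a 3-way tensor (C-order)
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    # Truncated higher-order SVD: factor matrices from the left singular
    # vectors of each unfolding, core from projecting T onto them.
    Us = []
    for m, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, m), full_matrices=False)
        Us.append(U[:, :r])
    core = np.einsum('ijk,ia,jb,kc->abc', T, *Us)
    return core, Us

def tucker_reconstruct(core, Us):
    # T ~ core x_1 U1 x_2 U2 x_3 U3
    return np.einsum('abc,ia,jb,kc->ijk', core, *Us)
```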

14.
JMIR Form Res ; 8: e53241, 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38648097

ABSTRACT

BACKGROUND: Electronic health records are a valuable source of patient information that must be properly deidentified before being shared with researchers. This process requires expertise and time. In addition, synthetic data have considerably reduced the restrictions on the use and sharing of real data, allowing researchers to access it more rapidly with far fewer privacy constraints. Therefore, there has been a growing interest in establishing a method to generate synthetic data that protects patients' privacy while properly reflecting the data. OBJECTIVE: This study aims to develop and validate a model that generates valuable synthetic longitudinal health data while protecting the privacy of the patients whose data are collected. METHODS: We investigated the best model for generating synthetic health data, with a focus on longitudinal observations. We developed a generative model that relies on the generalized canonical polyadic (GCP) tensor decomposition. This model also involves sampling from a latent factor matrix of GCP decomposition, which contains patient factors, using sequential decision trees, copula, and Hamiltonian Monte Carlo methods. We applied the proposed model to samples from the MIMIC-III (version 1.4) data set. Numerous analyses and experiments were conducted with different data structures and scenarios. We assessed the similarity between our synthetic data and the real data by conducting utility assessments. These assessments evaluate the structure and general patterns present in the data, such as dependency structure, descriptive statistics, and marginal distributions. Regarding privacy disclosure, our model preserves privacy by preventing the direct sharing of patient information and eliminating the one-to-one link between the observed and model tensor records. This was achieved by simulating and modeling a latent factor matrix of GCP decomposition associated with patients. 
RESULTS: The findings show that our model is a promising method for generating synthetic longitudinal health data that is similar enough to real data. It can preserve the utility and privacy of the original data while also handling various data structures and scenarios. In certain experiments, all simulation methods used in the model produced the same high level of performance. Our model is also capable of addressing the challenge of sampling patients from electronic health records. This means that we can simulate a variety of patients in the synthetic data set, which may differ in number from the patients in the original data. CONCLUSIONS: We have presented a generative model for producing synthetic longitudinal health data. The model is formulated by applying the GCP tensor decomposition. We have provided 3 approaches for the synthesis and simulation of a latent factor matrix following the process of factorization. In brief, we have reduced the challenge of synthesizing massive longitudinal health data to synthesizing a nonlongitudinal and significantly smaller data set.

15.
Cell Rep Methods ; 4(4): 100758, 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38631346

ABSTRACT

In recent years, data-driven inference of cell-cell communication has helped reveal coordinated biological processes across cell types. Here, we integrate two tools, LIANA and Tensor-cell2cell, which, when combined, can deploy multiple existing methods and resources to enable the robust and flexible identification of cell-cell communication programs across multiple samples. In this work, we show how the integration of our tools facilitates the choice of method to infer cell-cell communication and subsequently perform an unsupervised deconvolution to obtain and summarize biological insights. We explain how to perform the analysis step by step in both Python and R and provide online tutorials with detailed instructions available at https://ccc-protocols.readthedocs.io/. This workflow typically takes ∼1.5 h to complete from installation to downstream visualizations on a graphics processing unit-enabled computer for a dataset of ∼63,000 cells, 10 cell types, and 12 samples.


Subjects
Cell Communication, Software, Cell Communication/physiology, Humans, Computational Biology/methods, Single-Cell Analysis/methods
16.
Chimia (Aarau) ; 78(4): 215-221, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38676612

ABSTRACT

Many complex chemical problems encoded in terms of physics-based models become computationally intractable for traditional numerical approaches due to their unfavorable scaling with increasing molecular size. Tensor decomposition techniques can overcome such challenges by decomposing unattainably large numerical representations of chemical problems into smaller, tractable ones. In the first two decades of this century, algorithms based on such tensor factorizations have become state-of-the-art methods in various branches of computational chemistry, ranging from molecular quantum dynamics to electronic structure theory and machine learning. Here, we consider the role that tensor decomposition schemes have played in expanding the scope of computational chemistry. We relate some of the most prominent methods to their common underlying tensor network formalisms, providing a unified perspective on leading tensor-based approaches in chemistry and materials science.

17.
Heliyon ; 10(4): e26365, 2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38420472

ABSTRACT

Mild cognitive impairment (MCI) is an early stage of Alzheimer's disease, and early detection is crucial for the patient and those around them. It is difficult to recognize because this mild stage lacks clear clinical signs, and its symptoms lie between those of normal aging and severe dementia. Here, we propose a tensor decomposition-based scheme for automatically diagnosing MCI using electroencephalogram (EEG) signals. A new projection is proposed that preserves the spatial information of the electrodes to construct a data tensor. Then, using parallel factor analysis (PARAFAC) tensor decomposition, features are extracted, and a support vector machine (SVM) is used to discriminate MCI from normal subjects. The proposed scheme was tested on two different datasets. The results showed that the tensor-based method outperformed conventional methods in diagnosing MCI, with average classification accuracies of 93.96% and 78.65% for the first and second datasets, respectively. Maintaining the spatial topology of the signals therefore appears to play a vital role in the processing of EEG signals.

18.
Sensors (Basel) ; 24(2)2024 Jan 05.
Article in English | MEDLINE | ID: mdl-38257420

ABSTRACT

Hyperspectral images (HSIs) contain abundant spectral and spatial structural information, but they are inevitably contaminated by a variety of noises during data reception and transmission, degrading image quality and hindering subsequent applications. Removing mixed noise from hyperspectral images is therefore an important step in improving the performance of subsequent processing. It is well established that the information in hyperspectral images can be effectively represented by a global spectral low-rank subspace, owing to the high redundancy and correlation (RAC) in the spatial and spectral domains. Taking advantage of this property, a new algorithm based on subspace representation and nonlocal low-rank tensor decomposition is proposed to remove the mixed noise of hyperspectral images. The algorithm first obtains the subspace representation of the hyperspectral image by exploiting the spectral low-rank property, yielding an orthogonal basis and a representation coefficient image (RCI). The representation coefficient image is then grouped according to spatial nonlocal self-similarity and denoised using tensor decomposition and wavelet decomposition. Afterward, the orthogonal basis and denoised representation coefficient image are optimized using the alternating direction method of multipliers (ADMM). Finally, iterative regularization updates the image to obtain the final denoised result. Experiments on both simulated and real datasets demonstrate that the proposed algorithm is superior to related mainstream methods in both quantitative metrics and visual quality. Because denoising operates on the image subspace, the time complexity is greatly reduced, and the computational cost is lower than that of related denoising algorithms.
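The first step, spectral subspace representation, can be sketched in NumPy: an SVD-based orthogonal basis and representation coefficient image (RCI). The paper's full pipeline with nonlocal grouping, wavelets, and ADMM is not reproduced, and the function names are illustrative.

```python
import numpy as np

def spectral_subspace(hsi, k):
    # Project an HSI cube (rows x cols x bands) onto its k-dimensional
    # spectral subspace: X ~ E @ Z with orthonormal basis E from the SVD
    # of the band-by-pixel matrix; Z is the representation coefficient image.
    h, w, b = hsi.shape
    X = hsi.reshape(-1, b).T                 # bands x pixels
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    E = U[:, :k]                             # orthonormal spectral basis
    Z = E.T @ X                              # coefficients in the subspace
    return E, Z.T.reshape(h, w, k)

def from_subspace(E, rci):
    # Map the (rows x cols x k) coefficient image back to the full cube
    h, w, k = rci.shape
    return (E @ rci.reshape(-1, k).T).T.reshape(h, w, E.shape[0])
```

Subsequent denoising then operates on the much smaller RCI rather than on the full band dimension, which is where the computational savings come from.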

19.
bioRxiv ; 2024 Jan 17.
Article in English | MEDLINE | ID: mdl-38260447

ABSTRACT

Cortical parcellation has long been a cornerstone of neuroscience, partitioning the cerebral cortex into distinct, non-overlapping regions that facilitate the interpretation and comparison of complex neuroscientific data. In recent years, these parcellations have frequently been based on resting-state fMRI (rsfMRI) data. In parallel, methods such as independent component analysis have long been used to identify large-scale functional networks with significant spatial overlap between networks. Although both forms of decomposition draw on the same spontaneous brain activity measured with rsfMRI, a gap persists in establishing a clear relationship between disjoint cortical parcellations and brain-wide networks. To address this, we introduce a novel parcellation framework that integrates NASCAR, a three-dimensional tensor decomposition method that identifies a series of functional brain networks, with state-of-the-art graph representation learning to produce cortical parcellations of near-homogeneous functional regions that are consistent with these brain networks. Further, through the use of tensor decomposition, we avoid the limitations of traditional approaches that assume statistical independence or orthogonality of the underlying networks. Our findings demonstrate that these parcellations are comparable or superior to established atlases in terms of homogeneity of functional connectivity across parcels, task contrast alignment, and architectonic map alignment. Our methodological pipeline is highly automated, allowing rapid adaptation to new datasets and the generation of custom parcellations in just minutes, a significant advancement over methods that require extensive manual input. We describe this integrated approach, which we refer to as Untamed, as a tool for cognitive and clinical neuroscience research.
Parcellations created from the Human Connectome Project dataset using Untamed, along with the code to generate atlases with custom parcel numbers, are publicly available at https://untamed-atlas.github.io.
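Untamed itself couples the NASCAR networks to graph representation learning; as a much simpler stand-in for that final assignment step, one can imagine giving each cortical vertex a vector of network loadings and clustering those vectors into disjoint parcels. The sketch below does exactly that with a plain Lloyd's k-means; every name and shape here is a hypothetical illustration, not the paper's method.

```python
import numpy as np

def kmeans_labels(X, k, n_iter=50):
    """Plain Lloyd's k-means with a deterministic init
    (centers taken from evenly spaced rows of X)."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # squared distance of every point to every center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels

def parcellate(loadings, n_parcels):
    """Assign each vertex (row of `loadings`, one column per functional
    network) to a parcel by clustering its L2-normalised loading vector."""
    norms = np.linalg.norm(loadings, axis=1, keepdims=True)
    return kmeans_labels(loadings / np.maximum(norms, 1e-12), n_parcels)
```

The normalisation makes vertices with the same network profile but different overall signal amplitude fall into the same parcel; the real pipeline's graph-based embedding additionally encourages spatial contiguity, which this toy version does not.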

20.
Neural Netw ; 169: 431-441, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37931474

ABSTRACT

Multi-dimensional data are common in many applications, such as videos and multivariate time series. While tensor decomposition (TD) provides promising tools for analyzing such data, several limitations remain. First, traditional TDs assume multi-linear structures in the latent embeddings, which greatly limits their expressive power. Second, TDs cannot be straightforwardly applied to datasets with massive numbers of samples. To address these issues, we propose a nonparametric TD with amortized inference networks. Specifically, we establish a non-linear extension of tensor ring decomposition, using neural networks, to model complex latent structures. To jointly model cross-sample correlations and physical structures, a matrix Gaussian process (GP) prior is imposed over the core tensors. From a learning perspective, we develop a VAE-like amortized inference network that infers the posterior of the core tensors corresponding to new tensor data, enabling TDs to be applied to large datasets. Our model can also be viewed as a kind of decomposed VAE, which additionally captures hidden tensor structure and enhances expressive power. Finally, we derive an evidence lower bound from which a scalable optimization algorithm is developed. The advantages of our method are evaluated extensively through data imputation on the Healing MNIST dataset and four multivariate time-series datasets.
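The multi-linear baseline that this paper extends is the tensor ring decomposition, where each entry is a trace of a product of core slices: X[i1, ..., iN] = Tr(G_1[:, i1, :] @ ... @ G_N[:, iN, :]). A minimal numpy sketch of that contraction is below; the paper's contribution replaces this fixed multi-linear map with neural networks and a GP prior over the cores, none of which is shown here.

```python
import numpy as np

def tr_reconstruct(cores):
    """Contract tensor-ring cores G_k of shape (r_k, n_k, r_{k+1}),
    with r_N = r_0, into the full tensor
    X[i1,...,iN] = Trace(G_1[:, i1, :] @ ... @ G_N[:, iN, :])."""
    out = cores[0]                                    # (r0, n0, r1)
    for G in cores[1:]:
        # absorb the next core, keeping the two boundary ranks open
        out = np.einsum('a...b,bnc->a...nc', out, G)
    # close the ring: trace over the matching boundary ranks
    return np.einsum('a...a->...', out)
```

With all ring ranks equal to 1 this reduces to a rank-1 outer product, which makes a convenient sanity check; larger ranks trade storage for expressiveness in the usual TD fashion.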


Subjects
Algorithms, Learning, Neural Networks (Computer), Normal Distribution, Time Factors