Results 1 - 20 of 531
1.
Sensors (Basel) ; 24(13)2024 Jun 29.
Article in English | MEDLINE | ID: mdl-39001022

ABSTRACT

As higher spatiotemporal resolution tactile sensing systems are developed for prosthetics, wearables, and other biomedical applications, they demand faster sampling rates and generate larger data streams. Sparsifying transformations can alleviate these requirements by enabling compressive sampling and efficient data storage through compression. However, research on the best sparsifying transforms for tactile interactions is lagging. In this work, we construct a library of orthogonal and biorthogonal wavelet transforms as sparsifying transforms for tactile interactions and compare their tradeoffs in compression and sparsity. We tested the sparsifying transforms on a publicly available high-density tactile object-grasping dataset (a 548-sensor tactile glove grasping 26 objects). In addition, we investigated which wavelet transform dimension (1D, 2D, or 3D) best compresses these tactile interactions. Our results show that wavelet transforms are highly efficient at compressing tactile data and can lead to very sparse and compact tactile representations. Additionally, our results show that 1D transforms achieve the sparsest representations, followed by 3D, and lastly 2D. Overall, the best wavelet for coarse approximation is Symlets 4 applied temporally, which can sparsify the data to 0.5% sparsity and compress 10-bit tactile data to an average of 0.04 bits per pixel. Future studies can leverage these results to assist in the compressive sampling of large tactile arrays and free up computational resources for real-time processing on computationally constrained mobile platforms such as neuroprosthetics.
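
The core of such a wavelet-based tactile codec can be sketched in a few lines. The sketch below uses PyWavelets with the paper's best-performing Symlets-4 temporal transform and its reported 0.5% coefficient-retention level; the synthetic single-taxel signal and the decomposition depth are assumptions, not the study's data.

```python
# Sketch: sparsifying a tactile time series with a 1D Symlets-4 wavelet
# transform. The signal here is synthetic; the real study used a
# 548-sensor tactile glove dataset.
import numpy as np
import pywt

rng = np.random.default_rng(0)
# Hypothetical single-taxel pressure trace: smooth grasp profile plus noise.
t = np.linspace(0, 1, 1024)
signal = np.exp(-((t - 0.5) ** 2) / 0.02) + 0.01 * rng.standard_normal(t.size)

# 1D discrete wavelet transform along the temporal dimension.
coeffs = pywt.wavedec(signal, "sym4", level=5)
flat = np.concatenate(coeffs)

# Keep only the largest 0.5% of coefficients (the sparsity level reported
# for coarse approximation) and measure the reconstruction error.
k = max(1, int(0.005 * flat.size))
threshold = np.sort(np.abs(flat))[-k]
sparse_coeffs = [np.where(np.abs(c) >= threshold, c, 0.0) for c in coeffs]
recon = pywt.waverec(sparse_coeffs, "sym4")[: signal.size]

rel_err = np.linalg.norm(signal - recon) / np.linalg.norm(signal)
print(f"kept {k}/{flat.size} coefficients, relative error {rel_err:.3f}")
```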

2.
J Bioinform Comput Biol ; 22(3): 2450007, 2024 Jun.
Article in English | MEDLINE | ID: mdl-39036848

ABSTRACT

For sequencing-based spatial transcriptomics data, the gene-spot count matrix is highly sparse, a feature it shares with scRNA-seq data. The goal of this paper is to identify whether there exist genes that are frequently under-detected in Visium compared to bulk RNA-seq, and to investigate the potential mechanism underlying this under-detection. We collected paired Visium and bulk RNA-seq data for 28 human samples and 19 mouse samples, covering diverse tissue sources. Comparing the two data types, we observed that there indeed exists a collection of genes frequently under-detected in Visium compared to bulk RNA-seq. We performed a motif search over the last 350 bp of the frequently under-detected genes and observed that the poly(T) motif was significantly enriched in genes identified from both human and mouse data, matching our previous finding about frequently under-detected genes in scRNA-seq. We hypothesize that the poly(T) motif may form a hairpin structure with the poly(A) tails of mRNA transcripts, making those transcripts difficult to capture during Visium library preparation.
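
As a rough illustration of the motif screen described above, the sketch below scans the last 350 bp of each transcript for a poly(T) run. The minimum run length of 12 T's and the toy sequences are assumptions; the paper's exact motif definition may differ.

```python
# Sketch: flagging transcripts whose 3' end contains a poly(T) stretch,
# mirroring the paper's motif search over the last 350 bp.
import re

POLY_T = re.compile(r"T{12,}")  # assumed run length; the paper's exact
                                # definition may differ

def has_poly_t_tail(seq: str, window: int = 350) -> bool:
    """Check the last `window` bases of a transcript for a poly(T) run."""
    return bool(POLY_T.search(seq[-window:].upper()))

# Toy transcripts (illustrative only).
transcripts = {
    "geneA": "ACGT" * 200 + "T" * 20 + "ACGTACGT",  # poly(T) near 3' end
    "geneB": "ACGT" * 210,                           # no poly(T) run
}
flagged = [g for g, s in transcripts.items() if has_poly_t_tail(s)]
print("candidate under-detected genes:", flagged)
```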


Subject(s)
Gene Expression Profiling; Transcriptome; Mice; Humans; Animals; Gene Expression Profiling/methods; RNA, Messenger/genetics; RNA, Messenger/metabolism; Sequence Analysis, RNA/methods; Sequence Analysis, RNA/statistics & numerical data; RNA-Seq/methods; Computational Biology/methods; Nucleotide Motifs
3.
Neural Netw ; 179: 106534, 2024 Jul 14.
Article in English | MEDLINE | ID: mdl-39059046

ABSTRACT

As Deep Neural Networks (DNNs) continue to grow in complexity and size, imposing a substantial computational burden, weight pruning techniques have emerged as an effective solution. This paper presents a novel method for dynamic regularization-based pruning that incorporates the Alternating Direction Method of Multipliers (ADMM). Unlike conventional methods that employ simple, abrupt threshold processing, the proposed method introduces a reweighting mechanism to assign importance to the weights in DNNs. Compared to other ADMM-based methods, the new method not only achieves higher accuracy but also saves considerable time thanks to the reduced number of necessary hyperparameters. The method is evaluated on multiple architectures, including LeNet-5, ResNet-32, ResNet-56, and ResNet-50, using the MNIST, CIFAR-10, and ImageNet datasets, respectively. Experimental results demonstrate superior performance in terms of compression ratio and accuracy compared to state-of-the-art pruning methods. In particular, on the LeNet-5 model for the MNIST dataset, it achieves a compression ratio of 355.9× with a slight improvement in accuracy; on the ResNet-50 model trained on the ImageNet dataset, it achieves a compression ratio of 4.24× without sacrificing accuracy.
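
A minimal sketch of the ADMM pruning loop is given below, with a least-squares loss standing in for the network training loss. The inverse-magnitude reweighting rule used in the Z-step is a common stand-in assumption, not the paper's exact mechanism.

```python
# Sketch of ADMM weight pruning with a reweighted L1-style Z-step, in the
# spirit of the paper's reweighting mechanism.
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.standard_normal((200, 50)), rng.standard_normal(200)
W, Z, U = np.zeros(50), np.zeros(50), np.zeros(50)
rho, lam, lr = 1.0, 0.1, 1e-3

for _ in range(500):
    # W-step: gradient descent on loss + (rho/2)||W - Z + U||^2.
    grad = A.T @ (A @ W - b) + rho * (W - Z + U)
    W -= lr * grad
    # Z-step: reweighted soft-thresholding; low-importance (small) weights
    # receive larger thresholds and are pruned first.
    reweight = 1.0 / (np.abs(W) + 1e-3)
    V = W + U
    Z = np.sign(V) * np.maximum(np.abs(V) - (lam / rho) * reweight, 0.0)
    # U-step: dual update.
    U += W - Z

print(f"pruned {(Z == 0).mean():.0%} of weights")
```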

4.
J Am Stat Assoc ; 119(546): 1461-1472, 2024.
Article in English | MEDLINE | ID: mdl-38974186

ABSTRACT

We introduce and study two new inferential challenges associated with the sequential detection of change in a high-dimensional mean vector. First, we seek a confidence interval for the changepoint, and second, we estimate the set of indices of coordinates in which the mean changes. We propose an online algorithm that produces an interval with guaranteed nominal coverage, and whose length is, with high probability, of the same order as the average detection delay, up to a logarithmic factor. The corresponding support estimate enjoys control of both false negatives and false positives. Simulations confirm the effectiveness of our methodology, and we also illustrate its applicability on the U.S. excess deaths data from 2017 to 2020. The supplementary material, which contains the proofs of our theoretical results, is available online.
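
A much-simplified illustration of this sequential setting is sketched below: per-coordinate CUSUM statistics track a sparse mean shift, and the coordinates whose statistics cross the threshold give a crude support estimate. The drift and threshold values are ad hoc assumptions; the paper's algorithm additionally provides a calibrated confidence interval for the changepoint.

```python
# Sketch: online detection of a sparse mean change in a high-dimensional
# stream via per-coordinate CUSUM statistics (a simplified stand-in for
# the paper's procedure).
import numpy as np

rng = np.random.default_rng(1)
p, change_at, shift = 100, 300, 0.8
support = np.array([3, 17, 42])   # coordinates whose mean changes

pos = np.zeros(p)                 # one-sided CUSUM statistics, upward
neg = np.zeros(p)                 # and downward
drift, threshold = 0.25, 8.0      # ad hoc tuning constants
for t in range(1, 1001):
    x = rng.standard_normal(p)
    if t > change_at:
        x[support] += shift
    pos = np.maximum(0.0, pos + x - drift)
    neg = np.maximum(0.0, neg - x - drift)
    stat = np.maximum(pos, neg)
    if stat.max() > threshold:
        est_support = np.flatnonzero(stat > threshold)
        print(f"change declared at t={t}, estimated support {est_support}")
        break
```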

5.
J Med Imaging (Bellingham) ; 11(3): 033504, 2024 May.
Article in English | MEDLINE | ID: mdl-38938501

ABSTRACT

Purpose: We present a method that combines compressed sensing with parallel imaging while taking advantage of the structure of the sparsifying transformation. Approach: Previous work has combined compressed sensing with parallel imaging using model-based reconstruction, but without exploiting structured sparsity. In our method, blurry images for each coil are reconstructed from the fully sampled center region of k-space. The compressed sensing optimization problem is modified to take these blurry images into account and is solved to estimate the missing details. Results: On brain, ankle, and shoulder data, the combination of compressed sensing with structured sparsity and parallel imaging reconstructs images with lower relative error than sparse SENSE or L1 ESPIRiT, which do not use structured sparsity. Conclusions: Taking advantage of structured sparsity improves image quality for a given amount of data, as long as a fully sampled region of appropriate size, centered on the zero frequency, is acquired.
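
The first stage of this approach, recovering a blurry image from the fully sampled k-space center, can be sketched as follows on synthetic single-coil data; the 24 x 24 calibration size and the phantom are assumptions.

```python
# Sketch: reconstructing a blurry (low-resolution) image from the fully
# sampled k-space center; the method then treats this coarse content as
# known while estimating the missing details. The paper operates per coil.
import numpy as np

img = np.zeros((128, 128))
img[40:90, 40:90] = 1.0                      # hypothetical anatomy
kspace = np.fft.fftshift(np.fft.fft2(img))

# Keep only a fully sampled 24x24 center region (calibration area).
mask = np.zeros_like(kspace, dtype=bool)
c, half = 64, 12
mask[c - half:c + half, c - half:c + half] = True
blurry = np.abs(np.fft.ifft2(np.fft.ifftshift(np.where(mask, kspace, 0))))

# The compressed-sensing stage would then solve for detail coefficients d:
# minimize ||Psi^T d||_1 subject to F(blurry + d) matching sampled k-space.
print("relative energy in blurry image:",
      np.linalg.norm(blurry) / np.linalg.norm(img))
```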

6.
Ultrasonics ; 142: 107390, 2024 Jun 22.
Article in English | MEDLINE | ID: mdl-38945018

ABSTRACT

Standard structural health monitoring techniques face well-known difficulties in comprehensive defect diagnosis for real-world structures with structural, material, or geometric complexity. This motivates the exploration of machine-learning-based structural health monitoring methods for complex structures. However, creating sufficiently large training data sets covering various defects is an ongoing challenge for data-driven machine (deep) learning algorithms. The ability to transfer the knowledge of a trained neural network from one component to another, or to other sections of the same component, would drastically reduce the required training data set and would facilitate computationally inexpensive machine-learning-based inspection systems. In this work, machine-learning-based multi-level damage characterization is demonstrated with the ability to transfer trained knowledge within a sparse sensor network. A novel network spatial assistance and an adaptive convolution technique are proposed for efficient knowledge transfer within the deep learning algorithm. The proposed structural health monitoring method is experimentally evaluated on an aluminum plate with artificially induced defects. The method improves the performance of knowledge-transferred damage characterization by 50% for localization and 24% for severity assessment. Further, experiments using time windows with and without multiple edge reflections are studied. Results reveal that multiply scattered waves contain rich and deterministic defect signatures that can be mined using deep neural networks, improving the accuracy of both identification and quantification. In the case of a fixed sensor network, using multiply scattered waves yields 100% prediction accuracy at all levels of damage characterization.

7.
J Appl Stat ; 51(8): 1497-1523, 2024.
Article in English | MEDLINE | ID: mdl-38863802

ABSTRACT

Plant breeders want to develop cultivars that outperform existing genotypes. Some characteristics (here, 'main traits') of these cultivars are categorical and difficult to measure directly, so it is important to predict the main trait of newly developed genotypes accurately. In addition to marker data, breeding programs often have information on secondary traits (or 'phenotypes') that are easy to measure. Our goal is to improve prediction of main traits with interpretable relations by combining the two data types using variable selection techniques. However, the genomic characteristics can overwhelm the set of secondary traits, so a standard technique may fail to select any phenotypic variables. We develop a new statistical technique that ensures appropriate representation of both the secondary traits and the genotypic variables for optimal prediction. When the two data types (markers and secondary traits) are available, we achieve improved prediction of a binary trait through two steps designed to ensure that a significant intrinsic effect of a phenotype is incorporated in the relation before accounting for extra effects of genotypes. First, we sparsely regress the secondary traits on the markers and replace the secondary traits by their residuals, obtaining the effects of phenotypic variables as adjusted by the genotypic variables. Then, we develop a sparse logistic classifier using the markers and residuals, so that the adjusted phenotypes may be selected first rather than being overwhelmed by the numerically advantaged genotypic variables. This classifier uses forward selection aided by a penalty term and can be computed efficiently by a technique called the one-pass method. It compares favorably with other classifiers on simulated and real data.
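
A minimal sketch of the two-step procedure, substituting off-the-shelf scikit-learn penalties for the paper's one-pass forward selection, might look like this; the simulated marker and phenotype data are assumptions.

```python
# Sketch of the two-step idea: (1) sparsely regress each secondary trait on
# the markers and keep the residuals, (2) fit an L1-penalized logistic
# classifier on residuals plus markers.
import numpy as np
from sklearn.linear_model import Lasso, LogisticRegression

rng = np.random.default_rng(0)
n, p_markers, p_pheno = 200, 500, 5
X = rng.standard_normal((n, p_markers))            # marker matrix
P = X[:, :3] @ rng.standard_normal((3, p_pheno)) \
    + rng.standard_normal((n, p_pheno))            # secondary traits
y = (X[:, 0] + P[:, 0] + 0.5 * rng.standard_normal(n) > 0).astype(int)

# Step 1: adjust phenotypes for genotypic effects via sparse regression.
residuals = np.column_stack([
    P[:, j] - Lasso(alpha=0.1).fit(X, P[:, j]).predict(X)
    for j in range(p_pheno)
])

# Step 2: sparse logistic classifier on [residuals, markers]; residuals come
# first so intrinsic phenotypic effects are not drowned out numerically.
Z = np.column_stack([residuals, X])
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(Z, y)
n_pheno_sel = np.count_nonzero(clf.coef_[0, :p_pheno])
print(f"{n_pheno_sel} adjusted phenotypes selected")
```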

8.
JMIR Med Inform ; 12: e50209, 2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38896468

ABSTRACT

BACKGROUND: Diagnostic errors pose significant health risks and contribute to patient mortality. With the growing accessibility of electronic health records, machine learning models offer a promising avenue for enhancing diagnosis quality. Current research has primarily focused on a limited set of diseases with ample training data, neglecting diagnostic scenarios with limited data availability. OBJECTIVE: This study aims to develop an information retrieval (IR)-based framework that accommodates data sparsity to facilitate broader diagnostic decision support. METHODS: We introduced an IR-based diagnostic decision support framework called CliniqIR. It uses clinical text records, the Unified Medical Language System Metathesaurus, and 33 million PubMed abstracts to classify a broad spectrum of diagnoses independent of training data availability. CliniqIR is designed to be compatible with any IR framework. Therefore, we implemented it using both dense and sparse retrieval approaches. We compared CliniqIR's performance to that of pretrained clinical transformer models such as Clinical Bidirectional Encoder Representations from Transformers (ClinicalBERT) in supervised and zero-shot settings. Subsequently, we combined the strength of supervised fine-tuned ClinicalBERT and CliniqIR to build an ensemble framework that delivers state-of-the-art diagnostic predictions. RESULTS: On a complex diagnosis data set (DC3) without any training data, CliniqIR models returned the correct diagnosis within their top 3 predictions. On the Medical Information Mart for Intensive Care III data set, CliniqIR models surpassed ClinicalBERT in predicting diagnoses with <5 training samples by an average difference in mean reciprocal rank of 0.10. In a zero-shot setting where models received no disease-specific training, CliniqIR still outperformed the pretrained transformer models with a greater mean reciprocal rank of at least 0.10. Furthermore, in most conditions, our ensemble framework surpassed the performance of its individual components, demonstrating its enhanced ability to make precise diagnostic predictions. CONCLUSIONS: Our experiments highlight the importance of IR in leveraging unstructured knowledge resources to identify infrequently encountered diagnoses. In addition, our ensemble framework benefits from combining the complementary strengths of the supervised and retrieval-based models to diagnose a broad spectrum of diseases.
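
For reference, the mean reciprocal rank metric used in these comparisons can be computed as in the sketch below; the toy diagnosis lists are illustrative only.

```python
# Sketch of the mean reciprocal rank (MRR) metric used to compare CliniqIR
# and ClinicalBERT: each case scores 1/rank of the first correct diagnosis
# in the model's ranked prediction list.
def mean_reciprocal_rank(ranked_predictions, gold_labels):
    """ranked_predictions: one ranked diagnosis list per case."""
    total = 0.0
    for preds, gold in zip(ranked_predictions, gold_labels):
        rr = 0.0
        for rank, diagnosis in enumerate(preds, start=1):
            if diagnosis == gold:
                rr = 1.0 / rank
                break
        total += rr
    return total / len(gold_labels)

# Toy example: correct diagnosis at ranks 1, 3, and absent.
preds = [["sepsis", "pneumonia"], ["ards", "copd", "pneumonia"], ["uti"]]
gold = ["sepsis", "pneumonia", "pe"]
print(mean_reciprocal_rank(preds, gold))  # (1 + 1/3 + 0) / 3 = 0.444...
```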

9.
Phys Med Biol ; 69(14)2024 Jul 11.
Article in English | MEDLINE | ID: mdl-38917824

ABSTRACT

Objective. A model-based alternating reconstruction coupling fitting, termed Model-based Alternating Reconstruction COupling fitting (MARCO), is proposed for accurate and fast magnetic resonance parameter mapping. Approach. MARCO uses the signal model as a regularizer by minimizing, during reconstruction, the bias between the image series and the signal produced by the appropriate signal model based on iteratively updated parameter maps. The technique can incorporate prior knowledge of both the image series and the parameters by adding sparsity constraints. The optimization problem is decomposed into three subproblems and solved through three alternating steps involving reconstruction and nonlinear least-squares fitting, producing both contrast-weighted images and parameter maps simultaneously. Main results. The algorithm is applied to T2 mapping with the extended phase graph algorithm integrated, and validated on undersampled multi-echo spin-echo data from both phantom and in vivo sources. Compared with traditional compressed sensing and model-based methods, the proposed approach yields more accurate T2 maps with more detail at high acceleration factors. Significance. The proposed method provides a basic framework for quantitative MR relaxometry, theoretically applicable to all quantitative MR relaxometry, and has the potential to improve the diagnostic utility of quantitative imaging techniques.
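
A heavily simplified sketch of the alternate-reconstruct-and-fit idea follows, with a mono-exponential T2 decay standing in for the extended phase graph model and a toy data-consistency step standing in for the actual reconstruction subproblem.

```python
# Sketch: alternating between fitting a signal model to an echo series and
# pulling the series toward both the data and the model prediction,
# mimicking MARCO's model-consistency regularization.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
echo_times = np.arange(1, 9) * 10e-3                 # 8 echoes, 10 ms apart
true_m0, true_t2 = 1.0, 60e-3
measured = true_m0 * np.exp(-echo_times / true_t2) \
    + 0.02 * rng.standard_normal(echo_times.size)    # noisy voxel series

def model(te, m0, t2):
    # Mono-exponential stand-in for the extended phase graph signal model.
    return m0 * np.exp(-te / t2)

images = measured.copy()
for _ in range(5):
    # Fitting step: nonlinear least squares on the current image series.
    (m0, t2), _ = curve_fit(model, echo_times, images, p0=(1.0, 50e-3))
    # Reconstruction step (toy): blend measured data and model prediction.
    images = 0.5 * measured + 0.5 * model(echo_times, m0, t2)

print(f"estimated T2 = {t2 * 1e3:.1f} ms (true 60.0 ms)")
```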


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Phantoms, Imaging; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Humans; Time Factors; Brain/diagnostic imaging
10.
Magn Reson Med ; 92(3): 1232-1247, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38748852

ABSTRACT

PURPOSE: We present SCAMPI (Sparsity Constrained Application of deep Magnetic resonance Priors for Image reconstruction), an untrained deep neural network for MRI reconstruction that requires no prior training on datasets. It expands the Deep Image Prior approach with a multidomain, sparsity-enforcing loss function to achieve higher image quality at faster convergence than previously reported methods. METHODS: Two-dimensional MRI data from the FastMRI dataset, with Cartesian undersampling in the phase-encoding direction, were reconstructed at different acceleration rates for single-coil and multicoil data. RESULTS: The performance of our architecture was compared to state-of-the-art compressed sensing methods and to ConvDecoder, another untrained neural network for two-dimensional MRI reconstruction. SCAMPI outperforms these by better reducing undersampling artifacts and yielding lower error metrics in multicoil imaging. In comparison to ConvDecoder, the U-Net architecture combined with an elaborated loss function allows for much faster convergence at higher image quality. SCAMPI can reconstruct multicoil data without explicit knowledge of coil sensitivity profiles. Moreover, it is a novel tool for reconstructing undersampled single-coil k-space data. CONCLUSION: Because the network parameters are tuned only on the reconstruction data, our approach avoids the overfitting to dataset features that can occur in neural networks trained on databases. It allows better results and faster reconstruction than the baseline untrained neural network approach.
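
The untrained-prior idea can be sketched compactly: a randomly initialized network is optimized against a k-space data-consistency loss plus a sparsity term. The tiny ConvNet, finite-difference sparsity penalty, and loss weight below are assumptions standing in for SCAMPI's U-Net and exact multidomain loss.

```python
# Sketch: Deep-Image-Prior-style reconstruction with a sparsity-enforcing
# loss; no training data are used, only the undersampled measurement.
import torch

torch.manual_seed(0)
H = W = 64
mask = (torch.rand(H, W) < 0.3).float()        # undersampling pattern
target = torch.zeros(H, W)
target[20:44, 20:44] = 1.0                     # toy phantom
kspace = torch.fft.fft2(target) * mask         # measured k-space

net = torch.nn.Sequential(                     # untrained image generator
    torch.nn.Conv2d(1, 32, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(32, 32, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(32, 1, 3, padding=1),
)
z = torch.randn(1, 1, H, W)                    # fixed random input
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(500):
    x = net(z)[0, 0]
    k_pred = torch.fft.fft2(x) * mask
    data_loss = (k_pred - kspace).abs().pow(2).mean()
    # Finite-difference sparsity term (stand-in for the multidomain loss).
    sparsity = x.diff(dim=0).abs().mean() + x.diff(dim=1).abs().mean()
    loss = data_loss + 1e-2 * sparsity         # weight is an assumption
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final data-consistency loss: {data_loss.item():.3e}")
```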


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Neural Networks, Computer; Magnetic Resonance Imaging/methods; Humans; Image Processing, Computer-Assisted/methods; Artifacts; Brain/diagnostic imaging; Data Compression/methods
11.
Ecol Evol ; 14(5): e11039, 2024 May.
Article in English | MEDLINE | ID: mdl-38774136

ABSTRACT

In Bayesian Network Regression (BNR) models, networks are the predictors of continuous responses. These models have been used successfully in brain research to identify regions associated with specific human traits, yet their potential to elucidate microbial drivers of biological phenotypes in microbiome research remains unknown. Microbial networks are particularly challenging because of their high dimension and high sparsity compared to brain networks. Furthermore, unlike in brain connectome research, in microbiome research the mere presence of microbes is usually expected to affect the response (main effects), not just the interactions. Here, we develop the first thorough investigation of whether BNR models are suitable for microbial datasets, on a variety of synthetic and real data under diverse biological scenarios. We test whether the BNR model that accounts only for interaction effects (edges in the network) is able to identify key drivers (microbes) of phenotypic variability. We show that this model can indeed identify influential nodes and edges in the microbial networks that drive changes in the phenotype for most biological settings, but we also identify scenarios where it performs poorly, which allows us to provide practical advice for domain scientists aiming to apply these tools to their datasets. BNR models provide a framework for microbiome researchers to identify connections between microbes and measured phenotypes. We enable the use of this statistical model by providing an easy-to-use implementation, publicly available as a Julia package at https://github.com/solislemuslab/BayesianNetworkRegression.jl.

12.
Comput Methods Programs Biomed ; 251: 108212, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38754327

ABSTRACT

BACKGROUND AND OBJECTIVE: There is rising interest in exploiting aggregate information from external medical studies to enhance the statistical analysis of a modestly sized internal dataset. Currently available software packages for analyzing survival data with a cure fraction ignore such potentially available auxiliary information. This paper aims to fill this gap by developing a new R package, CureAuxSP, that can incorporate externally extracted subgroup survival probabilities into the analysis of an internal survival dataset. METHODS: The newly developed R package CureAuxSP provides an efficient approach for information synthesis under mixture cure models, including the Cox proportional hazards mixture cure model and the accelerated failure time mixture cure model as special cases. It focuses on synthesizing subgroup survival probabilities at multiple time points, and the underlying method development rests on the control variate technique. Evaluation of the homogeneity assumption based on a test statistic can be carried out automatically by our package, and if heterogeneity does exist, the original outputs can be further refined adaptively. RESULTS: The R package CureAuxSP provides a main function, SMC.AuxSP(), that adaptively incorporates external subgroup survival probabilities into the analysis of internal survival data. We also provide another function, Print.SMC.AuxSP(), for printing the results with a better presentation. Detailed usages are described, and implementations are illustrated with numerical examples, including a simulated dataset with a well-designed data-generating process and a real breast cancer dataset. Substantial efficiency gains can be observed in our results. CONCLUSIONS: Our R package CureAuxSP makes wide application of auxiliary information possible. It is anticipated that the performance of mixture cure models can be improved for survival data with a cure fraction, especially for small sample sizes.


Subject(s)
Probability; Proportional Hazards Models; Software; Humans; Survival Analysis; Models, Statistical; Computer Simulation; Algorithms; Breast Neoplasms/mortality; Breast Neoplasms/therapy
13.
Phys Med Biol ; 69(11)2024 May 14.
Article in English | MEDLINE | ID: mdl-38636505

ABSTRACT

Objective. Pharmacokinetic parametric images obtained through dynamic fluorescence molecular tomography (DFMT) can capture dynamic changes in fluorescence concentration, thereby providing three-dimensional metabolic information for applications in biological research and drug development. However, DFMT data processing is time-consuming, involves a vast amount of data, and the problem itself is ill-posed, which significantly limits the application of pharmacokinetic parametric image reconstruction. In this study, a group sparse-based Taylor expansion method is proposed to address these problems. Approach. First, a Taylor expansion framework is introduced to reduce time and computational cost. Second, group sparsity based on a structural prior is introduced to improve reconstruction accuracy. Third, an alternating iterative solution based on an accelerated gradient descent algorithm is introduced to solve the problem. Main results. Numerical simulation and in vivo experimental results demonstrate that, in comparison to existing methods, the proposed approach significantly enhances reconstruction speed without degrading quality, particularly when confronted with background fluorescence interference from other organs. Significance. Our research greatly reduces time and computational cost, providing strong support for real-time monitoring of liver metabolism.
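
The group-sparse accelerated-gradient core of such a solver can be sketched as a FISTA loop whose proximal step is group soft-thresholding; the toy system matrix, group structure, and regularization weight below are placeholders for the actual DFMT model.

```python
# Sketch: accelerated proximal gradient (FISTA) with a group soft-threshold
# prox over predefined organ/region groups.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 120))            # toy system matrix
x_true = np.zeros(120)
x_true[30:40] = 1.0                           # one active group
y = A @ x_true + 0.01 * rng.standard_normal(80)
groups = [list(range(g, g + 10)) for g in range(0, 120, 10)]
lam, step = 0.5, 1.0 / np.linalg.norm(A, 2) ** 2

def prox_group(v):
    # Group soft-thresholding: shrink each group's norm by lam * step.
    out = v.copy()
    for idx in groups:
        norm = np.linalg.norm(v[idx])
        out[idx] = 0.0 if norm == 0 else v[idx] * max(0.0, 1 - lam * step / norm)
    return out

x, x_prev, t = np.zeros(120), np.zeros(120), 1.0
for _ in range(200):
    # Nesterov-style momentum step.
    t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
    v = x + ((t - 1) / t_next) * (x - x_prev)
    x_prev, t = x, t_next
    x = prox_group(v - step * A.T @ (A @ v - y))

print("active groups:",
      [i for i, idx in enumerate(groups) if np.linalg.norm(x[idx]) > 1e-6])
```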


Subject(s)
Image Processing, Computer-Assisted; Liver; Liver/diagnostic imaging; Liver/metabolism; Image Processing, Computer-Assisted/methods; Animals; Tomography/methods; Mice; Optical Imaging/methods; Algorithms; Fluorescence
14.
Biomed Tech (Berl) ; 2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38598849

ABSTRACT

OBJECTIVES: In the past, guided image filtering (GIF)-based methods often relied on total variation (TV)-based methods to reconstruct guidance images, and they failed to reconstruct the intricate details of complex clinical images accurately. To address these problems, we propose a new sparse-view CT reconstruction method based on group-based sparse representation with weighted guided image filtering. METHODS: In each iteration of the proposed algorithm, the result constrained by the group-based sparse representation (GSR) is used as the guidance image. Then, weighted guided image filtering (WGIF) is used to transfer the important features from the guidance image to the reconstruction of the SART method. RESULTS: Three representative slices were tested under 64 projection views, and the proposed method yielded the best visual quality. For the shoulder case, the PSNR reaches 48.82, far superior to the other methods. CONCLUSIONS: The experimental results demonstrate that our method is more effective at preserving structures, suppressing noise, and reducing artifacts than other methods.
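
The feature-transfer step can be illustrated with a plain guided image filter, sketched below with NumPy/SciPy; the paper's WGIF further applies edge-aware weighting to the regularization parameter, which this sketch omits.

```python
# Sketch: a plain guided image filter (He et al. style) used to transfer
# structure from a guidance image into the current reconstruction.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    """Transfer structure from `guide` (e.g. the GSR result) into `src`
    (e.g. the current SART reconstruction)."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    cov_gs = uniform_filter(guide * src, size) - mean_g * mean_s
    var_g = uniform_filter(guide * guide, size) - mean_g * mean_g
    a = cov_gs / (var_g + eps)      # local linear coefficients
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

# Toy usage: smooth a noisy piecewise-constant "reconstruction" using its
# clean counterpart as guidance.
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
noisy = img + 0.2 * rng.standard_normal(img.shape)
print("mean residual after filtering:",
      np.abs(guided_filter(img, noisy) - img).mean())
```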

15.
Sci Rep ; 14(1): 7816, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38570539

ABSTRACT

Given the challenges of inter-domain information fusion and data sparsity in collaborative filtering algorithms, this paper proposes a cross-domain information fusion matrix factorization algorithm to enhance the accuracy of personalized recommendations in artificial intelligence recommendation systems. The study begins by collecting Douban movie rating data and social network information. To ensure data integrity, Levenshtein distance detection is employed to remove duplicate ratings, while natural language processing techniques are used to extract keywords and topic information from social texts. Additionally, graph convolutional networks convert user relationships into feature vectors, and one-hot encoding converts discrete user and movie information into binary matrices. To prevent overfitting, ridge regularization is introduced to gradually optimize the latent feature vectors. Weighted averaging and feature concatenation are then applied to integrate features from different domains. Finally, the paper combines an item-based collaborative filtering algorithm with the merged user characteristics to generate personalized recommendation lists.

In the experimental stage, the paper applies cross-domain information fusion to four mainstream matrix factorization algorithms: alternating least squares (ALS), non-negative matrix factorization, singular value decomposition, and the latent factor model (LFM), and compares them with their non-fused counterparts. The results indicate a significant improvement in rating accuracy, with mean absolute error and root mean squared error reduced by 12.8% and 13.2%, respectively, across the four algorithms. Additionally, when k = 10, the average F1 score reaches 0.97, and the ranking accuracy coverage of the LFM algorithm increases by 54.2%. Overall, matrix factorization combined with cross-domain information fusion demonstrates clear advantages in accuracy, prediction performance, recommendation diversity, and ranking quality. By integrating diverse techniques to address collaborative filtering challenges, it significantly surpasses traditional models in both recommendation accuracy and variety.
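
As an illustration of one of the factorization baselines, the sketch below runs alternating least squares with ridge regularization on a random rating matrix; the data, rank, and regularization weight are assumptions, not the Douban setup.

```python
# Sketch: ALS matrix factorization with ridge regularization, one of the
# four factorization baselines the study augments with cross-domain
# features. The rating matrix here is random.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k, lam = 50, 40, 8, 0.1
R = rng.integers(1, 6, size=(n_users, n_items)).astype(float)
mask = rng.random(R.shape) < 0.2               # observed entries
U = rng.standard_normal((n_users, k)) * 0.1
V = rng.standard_normal((n_items, k)) * 0.1

for _ in range(20):
    for u in range(n_users):                   # ridge least squares per user
        obs = np.flatnonzero(mask[u])
        if obs.size:
            Vo = V[obs]
            U[u] = np.linalg.solve(Vo.T @ Vo + lam * np.eye(k),
                                   Vo.T @ R[u, obs])
    for i in range(n_items):                   # ridge least squares per item
        obs = np.flatnonzero(mask[:, i])
        if obs.size:
            Uo = U[obs]
            V[i] = np.linalg.solve(Uo.T @ Uo + lam * np.eye(k),
                                   Uo.T @ R[obs, i])

rmse = np.sqrt((((U @ V.T - R) ** 2)[mask]).mean())
print(f"training RMSE on observed ratings: {rmse:.3f}")
```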

16.
J Appl Stat ; 51(6): 1151-1170, 2024.
Article in English | MEDLINE | ID: mdl-38628447

ABSTRACT

The growing popularity of personalized medicine motivates the exploration of individualized treatment regimes that reflect heterogeneous patient characteristics. In large-scale data analyses, however, the data are collected at different times and locations, i.e., subjects usually come from a heterogeneous population, so the optimal treatment regimes also vary for patients across different subgroups. In this paper, we focus on estimating optimal treatment regimes for subjects from a heterogeneous population with high-dimensional data. We first remove the main effects of the covariates within each subgroup to eliminate non-ignorable residual confounding. Based on the centralized outcome, we propose a penalized robust learning method that estimates the coefficient matrix of the covariate-treatment interactions by penalizing pairwise differences between the coefficients of any two subgroups for the same covariate, which automatically identifies the latent structure of the coefficient matrix with heterogeneous and homogeneous columns. At the same time, the penalized robust learning can also select the variables that truly contribute to individualized treatment decisions, using a commonly adopted sparsity-inducing penalty. Extensive simulation studies show that our proposed method outperforms current popular methods, and it is further illustrated in a real analysis of the Tamoxifen breast cancer data.

17.
Math Biosci Eng ; 21(2): 2646-2670, 2024 Jan 19.
Article in English | MEDLINE | ID: mdl-38454700

ABSTRACT

Research on functional changes in the brains of inflammatory bowel disease (IBD) patients is emerging around the world, bringing new perspectives to medical research. In this paper, canonical correlation analysis (CCA), kernel canonical correlation analysis (KCCA), and sparsity-preserving canonical correlation analysis (SPCCA) were applied to the fusion of simultaneous EEG-fMRI data from 25 IBD patients and 15 healthy individuals, and the results of the three methods were compared. The results clearly show a significant difference in activation intensity between IBD patients and healthy controls (HC), not only in the frontal lobe (p < 0.01) and temporal lobe (p < 0.01) but also in the posterior cingulate gyrus (p < 0.01), gyrus rectus (p < 0.01), and amygdala (p < 0.01), regions that are usually neglected. The mean difference in activation intensity with SPCCA was 60.1, whereas it was only 36.9 with CCA and 49.8 with KCCA. In addition, the correlation of the relevant components selected during the SPCCA calculation was high, with component correlations of up to 0.955; the correlations obtained from the CCA and KCCA calculations were only 0.917 and 0.926, respectively. SPCCA is thus indeed superior to CCA and KCCA in processing high-dimensional multimodal data. This work details the process of analyzing brain activation in IBD, provides a further perspective for the study of brain function, and opens a new avenue for studying the SPCCA method and changes in brain activation intensity in IBD.
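
The plain-CCA step of such a comparison can be sketched with scikit-learn: fit CCA on paired feature matrices and report the correlations of the canonical components. The random EEG/fMRI stand-in features below are assumptions.

```python
# Sketch: CCA on paired modality features, reporting the correlations of
# the canonical components (the quantity compared across CCA/KCCA/SPCCA).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 40                                       # subjects/epochs
shared = rng.standard_normal((n, 2))         # latent shared activity
X_eeg = shared @ rng.standard_normal((2, 30)) \
    + 0.5 * rng.standard_normal((n, 30))
Y_fmri = shared @ rng.standard_normal((2, 50)) \
    + 0.5 * rng.standard_normal((n, 50))

cca = CCA(n_components=2).fit(X_eeg, Y_fmri)
Xc, Yc = cca.transform(X_eeg, Y_fmri)
corrs = [np.corrcoef(Xc[:, i], Yc[:, i])[0, 1] for i in range(2)]
print("canonical correlations:", np.round(corrs, 3))
```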


Subject(s)
Canonical Correlation Analysis; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Brain/diagnostic imaging; Electroencephalography; Brain Mapping/methods
18.
Entropy (Basel) ; 26(2)2024 Jan 24.
Article in English | MEDLINE | ID: mdl-38392360

ABSTRACT

As a promising data analysis technique, sparse modeling has gained widespread traction in the field of image processing, particularly for image recovery. The matrix rank, serving as a measure of data sparsity, quantifies the sparsity within the Kronecker basis representation of a given piece of data in matrix format. Nevertheless, in practical scenarios much data is intrinsically multi-dimensional, and representing it in matrix format inevitably yields sub-optimal outcomes. Tensor decomposition (TD), a high-order generalization of matrix decomposition, has been widely used to analyze multi-dimensional data. As a direct generalization of the matrix rank, low-rank tensor modeling has been developed for multi-dimensional data analysis and has achieved great success. Despite its efficacy, the connection between TD rank and the sparsity of the tensor data is not direct. In this work, we introduce a novel tensor ring sparsity measurement (TRSM) for measuring the sparsity of a tensor. This metric relies on the tensor ring (TR) Kronecker basis representation of the tensor, providing a unified interpretation akin to matrix sparsity measurements, wherein the Kronecker basis serves as the foundational representation component. Moreover, TRSM can be computed efficiently as the product of the ranks of the mode-2 unfolded TR cores. To enhance the practical performance of TRSM, the folded-concave penalty of the minimax concave penalty is introduced as a nonconvex relaxation. Lastly, we extend TRSM to the tensor completion problem and solve it with the alternating direction method of multipliers. Experiments on image and video data completion demonstrate the effectiveness of the proposed method.
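
The TRSM computation stated above, the product of the ranks of the mode-2 unfolded TR cores, can be sketched directly; the random TR cores below are illustrative.

```python
# Sketch of the TRSM computation: multiply the ranks of the mode-2
# unfoldings of the tensor-ring cores. In practice the cores come from a
# TR decomposition of the data tensor.
import numpy as np

rng = np.random.default_rng(0)
# TR cores G_k of shape (r_{k-1}, n_k, r_k), ring ranks [2, 3, 2, 2]
# (with r_0 = r_3 to close the ring).
ranks, dims = [2, 3, 2, 2], [8, 9, 10]
cores = [rng.standard_normal((ranks[k], dims[k], ranks[k + 1]))
         for k in range(3)]

def trsm(cores):
    product = 1
    for G in cores:
        # Mode-2 unfolding: rows indexed by the tensor dimension n_k.
        unfold2 = np.transpose(G, (1, 0, 2)).reshape(G.shape[1], -1)
        product *= np.linalg.matrix_rank(unfold2)
    return product

print("TRSM of the random TR representation:", trsm(cores))
```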

19.
Entropy (Basel) ; 26(2)2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38392397

ABSTRACT

This paper expands traditional stochastic volatility models by allowing for time-varying skewness without imposing it. While dynamic asymmetry may capture the likely direction of future asset returns, it risks overparameterization. Our proposed approach mitigates this concern by leveraging sparsity-inducing priors to automatically select the skewness parameter as dynamic, static, or zero in a data-driven framework. We consider two empirical applications. First, in a bond yield application, dynamic skewness captures interest rate cycles of monetary easing and tightening and is partially explained by central banks' mandates. Second, in a currency modeling framework, our model indicates no skewness in the carry factor after accounting for stochastic volatility. This supports the idea that carry crashes result from volatility surges rather than dynamic skewness.

20.
Biometrics ; 80(1)2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38364807

ABSTRACT

When building regression models for multivariate abundance data in ecology, it is important to allow for the fact that species are correlated with each other. Moreover, there is often evidence that species exhibit some degree of homogeneity in their responses to each environmental predictor, and that most species are informed by only a subset of predictors. We propose a generalized estimating equation (GEE) approach for simultaneous homogeneity pursuit (i.e., grouping species with similar coefficient values while allowing differing groups for different covariates) and variable selection in regression models for multivariate abundance data. Using GEEs allows us to straightforwardly account for between-response correlations through a (reduced-rank) working correlation matrix. We augment the GEE with both adaptive fused lasso- and adaptive lasso-type penalties, which respectively cluster the species-specific coefficients within each covariate and encourage differing levels of sparsity across the covariates. Numerical studies demonstrate the strong finite-sample performance of the proposed method relative to several existing approaches for modeling multivariate abundance data. Applying the proposed method to presence-absence records collected along the Great Barrier Reef in Australia reveals both a substantial degree of homogeneity and sparsity in species-environment relationships. We show this leads to a more parsimonious model for understanding the environmental drivers of seabed biodiversity and results in stronger out-of-sample predictive performance relative to methods that do not accommodate such features.
