Results 1 - 20 of 541
1.
IEEE Trans Comput Imaging ; 10: 372-384, 2024.
Article in English | MEDLINE | ID: mdl-39386353

ABSTRACT

Model-based methods are widely used for reconstruction in compressed sensing (CS) magnetic resonance imaging (MRI), using regularizers to describe the images of interest. The reconstruction process is equivalent to solving a composite optimization problem. Accelerated proximal methods (APMs) are very popular approaches for such problems. This paper proposes a complex quasi-Newton proximal method (CQNPM) for wavelet- and total-variation-based CS MRI reconstruction. Compared with APMs, CQNPM requires fewer iterations to converge but needs to compute a more challenging proximal mapping called the weighted proximal mapping (WPM). To make CQNPM more practical, we propose efficient methods to solve the related WPM. Numerical experiments on reconstructing non-Cartesian MRI data demonstrate the effectiveness and efficiency of CQNPM.
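
As a point of reference for the accelerated proximal methods (APMs) that the abstract compares against, the following is a minimal FISTA-style sketch for an ℓ1-regularized sparse CS recovery problem. It is not the paper's CQNPM: the quasi-Newton method replaces the standard proximal step below with a weighted proximal mapping. The operator handles, step size, penalty, and toy data are all illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t*||.||_1; works element-wise for real or complex entries.
    mag = np.abs(x)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-12)) * x, 0.0)

def fista(A, At, y, lam, step, n_iter=100):
    """Accelerated proximal gradient for min_x 0.5*||A(x) - y||^2 + lam*||x||_1.
    A and At are callables for the forward operator and its adjoint;
    step should be at most 1/L, with L the Lipschitz constant of the gradient."""
    x = At(y)
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        x_next = soft_threshold(z - step * At(A(z) - y), step * lam)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t**2)) / 2.0
        z = x_next + ((t - 1.0) / t_next) * (x_next - x)
        x, t = x_next, t_next
    return x

# Toy usage with a random sensing matrix standing in for the MRI forward model
rng = np.random.default_rng(0)
M = rng.standard_normal((64, 256)) / 8.0
x_hat = fista(lambda v: M @ v, lambda v: M.T @ v, M @ rng.standard_normal(256),
              lam=0.05, step=0.1)
```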

2.
Front Big Data ; 7: 1348030, 2024.
Article in English | MEDLINE | ID: mdl-39267704

ABSTRACT

Introduction: Recently, Google introduced Pathways as its next-generation AI architecture. Pathways must address three critical challenges: learning one general model for several continuous tasks, ensuring tasks can leverage each other without forgetting old tasks, and learning from multi-modal data such as images and audio. Additionally, Pathways must maintain sparsity in both learning and deployment. Current lifelong multi-task learning approaches are inadequate in addressing these challenges. Methods: To address these challenges, we propose SEN, a Sparse and Expandable Network. SEN is designed to handle multiple tasks concurrently by maintaining sparsity and enabling expansion when new tasks are introduced. The network leverages multi-modal data, integrating information from different sources while preventing interference between tasks. Results: The proposed SEN model demonstrates significant improvements in multi-task learning, successfully managing task interference and forgetting. It effectively integrates data from various modalities and maintains efficiency through sparsity during both the learning and deployment phases. Discussion: SEN offers a straightforward yet effective solution to the limitations of current lifelong multi-task learning methods. By addressing the challenges identified in the Pathways architecture, SEN provides a promising approach for developing AI systems capable of learning and adapting over time without sacrificing performance or efficiency.

3.
J Neurosci Methods ; 411: 110275, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39241968

ABSTRACT

BACKGROUND: There is growing interest in understanding the dynamic functional connectivity (DFC) between distributed brain regions. However, it remains challenging to reliably estimate the temporal dynamics from resting-state functional magnetic resonance imaging (rs-fMRI) due to the limitations of current methods. NEW METHODS: We propose a new model called HDP-HSMM-BPCA for sparse DFC analysis of high-dimensional rs-fMRI data, which is a temporal extension of probabilistic principal component analysis using a Bayesian nonparametric hidden semi-Markov model (HSMM). Specifically, we use a hierarchical Dirichlet process (HDP) prior to remove the parametric assumption of the HMM framework, overcoming the limitations of the standard HMM. A notable advantage is its ability to automatically infer the state-specific latent space dimensionality within the Bayesian formulation. RESULTS: Experiments on synthetic data show that our model outperforms competing models with higher estimation accuracy. In addition, the proposed framework is applied to real rs-fMRI data to explore sparse DFC patterns. The findings indicate that there is a time-varying underlying structure and sparse DFC patterns in high-dimensional rs-fMRI data. COMPARISON WITH EXISTING METHODS: Compared with existing HMM-based DFC approaches, our method overcomes the limitations of the standard HMM. The observation model of HDP-HSMM-BPCA can discover the underlying temporal structure of rs-fMRI data. Furthermore, the associated sparse DFC construction algorithm provides a scheme for estimating sparse DFC. CONCLUSION: We describe a new computational framework for sparse DFC analysis to discover the underlying temporal structure of rs-fMRI data, which will facilitate the study of brain functional connectivity.


Subjects
Bayes Theorem; Brain; Magnetic Resonance Imaging; Magnetic Resonance Imaging/methods; Humans; Brain/diagnostic imaging; Brain/physiology; Rest/physiology; Image Processing, Computer-Assisted/methods; Brain Mapping/methods; Markov Chains; Neural Pathways/diagnostic imaging; Neural Pathways/physiology; Principal Component Analysis; Algorithms; Models, Neurological; Computer Simulation
4.
J Am Stat Assoc ; 119(546): 1579-1591, 2024.
Article in English | MEDLINE | ID: mdl-39296805

ABSTRACT

High dimensional linear models are commonly used in practice. In many applications, one is interested in linear transformations β⊤x of regression coefficients β ∈ R^p, where x is a specific point and is not required to be identically distributed as the training data. One common approach is the plug-in technique, which first estimates β, then plugs the estimator into the linear transformation for prediction. Despite its popularity, estimation of β can be difficult for high dimensional problems. Commonly used assumptions in the literature include that the signal of coefficients β is sparse and predictors are weakly correlated. These assumptions, however, may not be easily verified, and can be violated in practice. When β is non-sparse or predictors are strongly correlated, estimation of β can be very difficult. In this paper, we propose a novel pointwise estimator for linear transformations of β. This new estimator greatly relaxes the common assumptions for high dimensional problems, and is adaptive to the degree of sparsity of β and the strength of correlations among the predictors. In particular, β can be sparse or non-sparse and predictors can be strongly or weakly correlated. The proposed method is simple to implement. Numerical and theoretical results demonstrate the competitive advantages of the proposed method for a wide range of problems.
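
For contrast with the proposed pointwise estimator, here is a hedged sketch of the plug-in technique the abstract describes: first estimate β (with the lasso here, as one common choice), then plug the estimate into β⊤x. The data, penalty level, and query point below are illustrative, not from the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Hypothetical data: n samples, p predictors, sparse ground-truth beta
rng = np.random.default_rng(0)
n, p = 200, 1000
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:5] = 2.0
y = X @ beta + rng.standard_normal(n)

# Plug-in technique: estimate beta (here with the lasso), then plug it into beta^T x
beta_hat = Lasso(alpha=0.1).fit(X, y).coef_
x_new = rng.standard_normal(p)      # query point, need not resemble the training data
prediction = beta_hat @ x_new       # plug-in estimate of beta^T x_new
```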

5.
Front Neuroinform ; 18: 1430987, 2024.
Article in English | MEDLINE | ID: mdl-39315000

ABSTRACT

Recent advancements in neuroimaging have led to greater data sharing among the scientific community. However, institutions frequently maintain control over their data, citing concerns related to research culture, privacy, and accountability. This creates a demand for innovative tools capable of analyzing amalgamated datasets without the need to transfer actual data between entities. To address this challenge, we propose a decentralized sparse federated learning (FL) strategy. This approach emphasizes local training of sparse models to facilitate efficient communication within such frameworks. By capitalizing on model sparsity and selectively sharing parameters between client sites during the training phase, our method significantly lowers communication overheads. This advantage becomes increasingly pronounced when dealing with larger models and accommodating the diverse resource capabilities of various sites. We demonstrate the effectiveness of our approach through the application to the Adolescent Brain Cognitive Development (ABCD) dataset.
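
A minimal sketch of the communication pattern the abstract describes: each site computes a local update, sparsifies it by keeping only its largest-magnitude entries, and only those sparse updates are exchanged and averaged. This is an illustrative top-k scheme, not the paper's exact decentralized algorithm; the model size, number of sites, learning rate, and sparsity level are assumptions.

```python
import numpy as np

def top_k_sparse(update, k):
    """Keep only the k largest-magnitude entries of a model update; zero the rest."""
    sparse = np.zeros_like(update)
    keep = np.argsort(np.abs(update))[-k:]
    sparse[keep] = update[keep]
    return sparse

def federated_round(global_w, client_updates, k, lr=0.1):
    """One communication round: each site shares only a sparse update; updates are averaged."""
    shared = [top_k_sparse(u, k) for u in client_updates]
    return global_w - lr * np.mean(shared, axis=0)

# Toy setting: 3 sites, a 10,000-parameter model, each sharing only 1% of its entries
rng = np.random.default_rng(1)
w = rng.standard_normal(10_000)
updates = [rng.standard_normal(10_000) for _ in range(3)]
w = federated_round(w, updates, k=100)
```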

6.
Int J Control ; 97(8): 1770-1779, 2024.
Article in English | MEDLINE | ID: mdl-39310798

ABSTRACT

Quantization is the process of mapping an input signal from an infinite continuous set to a countable set with a finite number of elements. It is a non-linear, irreversible process, which makes traditional methods of system identification no longer applicable. In this work, we propose a method for parsimonious linear time-invariant system identification when only quantized observations, discerned from noisy data, are available. More formally, given a priori information on the system, represented by a compact set containing the poles of the system, and quantized realizations, our algorithm aims at identifying the least-order system that is compatible with the available information. The proposed approach also takes into account that the available data can be subject to fragmentation. Our algorithm relies on an ADMM approach to solve an ℓp (0 < p < 1) quasi-norm objective problem. Numerical results highlight the performance of the proposed approach compared to ℓ1 minimization in terms of the sparsity of the induced solution.
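
The sketch below shows the generic ADMM splitting that this kind of formulation builds on, written for a sparse least-squares objective. For simplicity the proximal step uses the ℓ1 soft-threshold as a stand-in; the paper instead applies a proximal step for the ℓp quasi-norm (0 < p < 1) at that point, and works with quantized observations. Matrix sizes and parameters are illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def admm_sparse(A, y, lam=0.1, rho=1.0, n_iter=200):
    """ADMM for min_x 0.5*||Ax - y||^2 + lam*R(x), with R the l1 norm here as a
    stand-in for the paper's lp (0 < p < 1) quasi-norm proximal step."""
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA, Aty = A.T @ A, A.T @ y
    L = np.linalg.cholesky(AtA + rho * np.eye(n))
    for _ in range(n_iter):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Aty + rho * (z - u)))  # x-update
        z = soft_threshold(x + u, lam / rho)                               # proximal step
        u = u + x - z                                                      # dual update
    return z

# Toy identification-style problem: a sparse parameter vector observed through A with noise
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 40))
theta = np.zeros(40); theta[[3, 17, 29]] = [1.0, -2.0, 0.5]
theta_hat = admm_sparse(A, A @ theta + 0.05 * rng.standard_normal(100), lam=0.5)
```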

7.
Entropy (Basel) ; 26(9)2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39330127

ABSTRACT

Variable selection methods have been extensively developed for and applied to cancer genomics data to identify important omics features associated with complex disease traits, including cancer outcomes. However, the reliability and reproducibility of the findings are in question if valid inferential procedures are not available to quantify the uncertainty of the findings. In this article, we provide a gentle but systematic review of high-dimensional frequentist and Bayesian inferential tools under sparse models which can yield uncertainty quantification measures, including confidence (or Bayesian credible) intervals, p values and false discovery rates (FDR). Connections in high-dimensional inferences between the two realms have been fully exploited under the "unpenalized loss function + penalty term" formulation for regularization methods and the "likelihood function × shrinkage prior" framework for regularized Bayesian analysis. In particular, we advocate for robust Bayesian variable selection in cancer genomics studies due to its ability to accommodate disease heterogeneity in the form of heavy-tailed errors and structured sparsity while providing valid statistical inference. The numerical results show that robust Bayesian analysis incorporating exact sparsity has yielded not only superior estimation and identification results but also valid Bayesian credible intervals under nominal coverage probabilities compared with alternative methods, especially in the presence of heavy-tailed model errors and outliers.

8.
Neural Netw ; 180: 106664, 2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39217863

ABSTRACT

Complex-valued convolutional neural networks (CVCNNs) have demonstrated effectiveness in classifying complex signals and synthetic aperture radar (SAR) images. However, due to the introduction of complex-valued parameters, CVCNNs tend to become redundant, with heavy floating-point operations. Model sparsity has emerged as an efficient way of removing this redundancy without much loss of performance. Currently, there are few studies on the sparsity problem of CVCNNs. Therefore, a complex-valued soft-log threshold reweighting (CV-SLTR) algorithm is proposed for the design of sparse CVCNNs to reduce the number of weight parameters and simplify the network structure. On one hand, considering the difference between complex and real numbers, we redefine and derive the complex-valued log-sum threshold method. On the other hand, considering the distinctive characteristics of the complex-valued convolutional (CConv) layers and complex-valued fully connected (CFC) layers of CVCNNs, the complex-valued soft and log-sum threshold methods are respectively developed to prune the weights of the different layers during forward propagation, and the sparsity thresholds are optimized during backward propagation by inducing a sparsity budget. Furthermore, different optimizers can be integrated with CV-SLTR. When stochastic gradient descent (SGD) is used, the convergence of CV-SLTR is proved provided Lipschitzian continuity is satisfied. Experiments on the RadioML 2016.10A and S1SLC-CVDL datasets show that the proposed algorithm is efficient for sparsifying CVCNNs. Notably, the proposed algorithm achieves fast sparsification while maintaining high classification accuracy. These results demonstrate the feasibility and potential of the CV-SLTR algorithm.
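
To fix ideas on the complex-valued soft-threshold component, here is a minimal numpy sketch of magnitude soft-thresholding for complex weights: the magnitude is shrunk while the phase is preserved. It illustrates only this one building block, not the full CV-SLTR algorithm with log-sum reweighting and a learned sparsity budget; the toy kernel and threshold are assumptions.

```python
import numpy as np

def complex_soft_threshold(w, lam):
    """Magnitude soft-thresholding of complex weights: shrinks |w| by lam, keeps the phase."""
    mag = np.abs(w)
    scale = np.maximum(1.0 - lam / np.maximum(mag, 1e-12), 0.0)
    return scale * w

# Toy complex weight tensor (e.g. one CConv kernel), pruned toward sparsity
rng = np.random.default_rng(0)
w = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
w_sparse = complex_soft_threshold(w, lam=0.8)
print(np.count_nonzero(w_sparse), "of", w.size, "weights survive")
```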

9.
Open Res Eur ; 4: 29, 2024.
Article in English | MEDLINE | ID: mdl-39219787

ABSTRACT

Background: Identifying stars belonging to different classes is vital in order to build up statistical samples of different phases and pathways of stellar evolution. In the era of surveys covering billions of stars, an automated method of identifying these classes becomes necessary. Methods: Many classes of stars are identified based on their emitted spectra. In this paper, we use a combination of the multi-class multi-label Machine Learning (ML) method XGBoost and the PySSED spectral-energy-distribution fitting algorithm to classify stars into nine different classes, based on their photometric data. The classifier is trained on subsets of the SIMBAD database. Particular challenges are the very high sparsity (large fraction of missing values) of the underlying data as well as the high class imbalance. We discuss the different variables available, such as photometric measurements on the one hand, and indirect predictors such as Galactic position on the other hand. Results: We show the difference in performance when excluding certain variables, and discuss in which contexts which of the variables should be used. Finally, we show that increasing the number of samples of a particular type of star significantly increases the performance of the model for that particular type, while having little to no impact on other types. The accuracy of the main classifier is ∼0.7 with a macro F1 score of 0.61. Conclusions: While the current accuracy of the classifier is not high enough to be reliably used in stellar classification, this work is an initial proof of feasibility for using ML to classify stars based on photometry.


Astronomy is at the forefront of the 'Big Data' regime, with telescopes collecting increasingly large volumes of data. The tools astronomers use to analyse and draw conclusions from these data need to be able to keep up, with machine learning providing many of the solutions. Being able to classify different astronomical objects by type helps to disentangle the astrophysics making them unique, offering new insights into how the Universe works. Here, we present how machine learning can be used to classify different kinds of stars, in order to augment large databases of the sky. This will allow astronomers to more easily extract the data they need to perform their scientific analyses.
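
To make the workflow concrete, here is a hedged sketch of a multi-class XGBoost classifier on a sparse (NaN-heavy), imbalanced toy table; XGBoost handles missing values natively by learning a default split direction at each node. This is not the paper's trained model, which pairs PySSED spectral-energy-distribution fitting with real survey photometry and labels drawn from SIMBAD; all data and hyperparameters below are made up.

```python
import numpy as np
from xgboost import XGBClassifier

# Hypothetical photometric table: many missing magnitudes (NaN) and imbalanced classes
rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 20))
X[rng.random(X.shape) < 0.6] = np.nan    # ~60% missing values, mimicking sparse photometry
y = rng.choice(9, size=5000,
               p=[0.4, 0.2, 0.1, 0.1, 0.05, 0.05, 0.04, 0.03, 0.03])  # 9 star classes

# XGBoost routes NaN entries down a learned default branch, so no imputation is needed
clf = XGBClassifier(objective="multi:softprob", n_estimators=200, max_depth=6)
clf.fit(X, y)
probs = clf.predict_proba(X[:5])   # class probabilities for the first few "stars"
```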

10.
Neural Netw ; 180: 106633, 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39208461

ABSTRACT

In the construction process of radial basis function (RBF) networks, two common crucial issues arise: the selection of RBF centers and the effective utilization of the given source without encountering the overfitting problem. Another important issue is the fault tolerant capability. That is, when noise or faults exist in a trained network, it is crucial that the network's performance does not undergo significant deterioration or decrease. However, without employing a fault tolerant procedure, a trained RBF network may exhibit significantly poor performance. Unfortunately, most existing algorithms are unable to simultaneously address all of the aforementioned issues. This paper proposes fault tolerant training algorithms that can simultaneously select RBF nodes and train RBF output weights. Additionally, our algorithms can directly control the number of RBF nodes in an explicit manner, eliminating the need for a time-consuming procedure to tune the regularization parameter and achieve the target RBF network size. Based on simulation results, our algorithms demonstrate improved test set performance when more RBF nodes are used, effectively utilizing the given source without encountering the overfitting problem. This paper first defines a fault tolerant objective function, which includes a term to suppress the effects of weight faults and weight noise. This term also prevents the issue of overfitting, resulting in better test set performance when more RBF nodes are utilized. With the defined objective function, the training process is designed to solve a generalized M-sparse problem by incorporating an ℓ0-norm constraint. The ℓ0-norm constraint allows us to directly and explicitly control the number of RBF nodes. To address the generalized M-sparse problem, we introduce the noise-resistant iterative hard thresholding (NR-IHT) algorithm. The convergence properties of the NR-IHT algorithm are subsequently discussed theoretically. To further enhance performance, we incorporate the momentum concept into the NR-IHT algorithm, referring to the modified version as "NR-IHT-Mom". Simulation results show that both the NR-IHT algorithm and the NR-IHT-Mom algorithm outperform several state-of-the-art comparison algorithms.
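
For context on the generalized M-sparse formulation, below is a plain iterative hard thresholding (IHT) sketch under an ℓ0 constraint of at most M nonzero entries. The paper's NR-IHT adds a noise/fault-resistant term to the objective and a momentum variant (NR-IHT-Mom); this minimal version shows only the gradient step and the hard-thresholding projection, with illustrative data standing in for an RBF design matrix.

```python
import numpy as np

def hard_threshold(w, M):
    """Projection onto the M-sparse set: keep the M largest-magnitude entries, zero the rest."""
    out = np.zeros_like(w)
    keep = np.argsort(np.abs(w))[-M:]
    out[keep] = w[keep]
    return out

def iht(Phi, y, M, step, n_iter=200):
    """Plain IHT for min_w ||Phi w - y||^2 subject to ||w||_0 <= M."""
    w = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        w = hard_threshold(w + step * Phi.T @ (y - Phi @ w), M)
    return w

# Toy RBF-style design: 50 candidate nodes, only 5 allowed in the final model
rng = np.random.default_rng(0)
Phi = rng.standard_normal((200, 50))
w_true = np.zeros(50); w_true[:5] = rng.standard_normal(5)
w_hat = iht(Phi, Phi @ w_true + 0.01 * rng.standard_normal(200), M=5, step=0.002)
```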

11.
Neural Netw ; 179: 106534, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39059046

ABSTRACT

As Deep Neural Networks (DNNs) continue to grow in complexity and size, leading to a substantial computational burden, weight pruning techniques have emerged as an effective solution. This paper presents a novel method for dynamic regularization-based pruning, which incorporates the Alternating Direction Method of Multipliers (ADMM). Unlike conventional methods that employ simple and abrupt threshold processing, the proposed method introduces a reweighting mechanism to assign importance to the weights in DNNs. Compared to other ADMM-based methods, the new method not only achieves higher accuracy but also saves considerable time thanks to the reduced number of necessary hyperparameters. The method is evaluated on multiple architectures, including LeNet-5, ResNet-32, ResNet-56, and ResNet-50, using the MNIST, CIFAR-10, and ImageNet datasets, respectively. Experimental results demonstrate its superior performance in terms of compression ratios and accuracy compared to state-of-the-art pruning methods. In particular, on the LeNet-5 model for the MNIST dataset, it achieves compression ratios of 355.9× with a slight improvement in accuracy; on the ResNet-50 model trained with the ImageNet dataset, it achieves compression ratios of 4.24× without sacrificing accuracy.
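
To illustrate the reweighting idea the abstract contrasts with abrupt thresholding, here is a toy shrink-and-prune step in which each weight's effective penalty scales inversely with its current magnitude, so small weights are driven to zero faster than important ones. The actual method couples this kind of reweighting with ADMM-regularized training of the full network loss; this standalone numpy sketch, with made-up weights and penalty level, shows only the reweighted shrinkage.

```python
import numpy as np

def reweighted_prune_step(w, lam, eps=1e-3):
    """One reweighted shrink step: the per-weight threshold lam/(|w|+eps) is large for
    small weights and small for large ones, mimicking importance-aware pruning."""
    per_weight_threshold = lam / (np.abs(w) + eps)
    return np.sign(w) * np.maximum(np.abs(w) - per_weight_threshold, 0.0)

# Toy layer weights: a few iterations leave mostly the large-magnitude weights
rng = np.random.default_rng(0)
w = rng.standard_normal(1000) * np.concatenate([np.full(50, 3.0), np.full(950, 0.1)])
for _ in range(5):
    w = reweighted_prune_step(w, lam=0.02)
print(f"{np.mean(w == 0):.1%} of weights pruned")
```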


Subjects
Neural Networks, Computer; Algorithms; Humans; Deep Learning
12.
J Am Stat Assoc ; 119(546): 1461-1472, 2024.
Article in English | MEDLINE | ID: mdl-38974186

ABSTRACT

We introduce and study two new inferential challenges associated with the sequential detection of change in a high-dimensional mean vector. First, we seek a confidence interval for the changepoint, and second, we estimate the set of indices of coordinates in which the mean changes. We propose an online algorithm that produces an interval with guaranteed nominal coverage, and whose length is, with high probability, of the same order as the average detection delay, up to a logarithmic factor. The corresponding support estimate enjoys control of both false negatives and false positives. Simulations confirm the effectiveness of our methodology, and we also illustrate its applicability on the U.S. excess deaths data from 2017 to 2020. The supplementary material, which contains the proofs of our theoretical results, is available online.
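
The abstract is about inference (a confidence interval for the changepoint and recovery of the changed coordinates) rather than detection alone, but a coordinate-wise online detector helps fix ideas: the set of coordinates whose detectors fire near the alarm time is a crude stand-in for the support estimate. The CUSUM detector, thresholds, and data below are illustrative assumptions and are not the paper's algorithm.

```python
import numpy as np

def online_cusum(stream, drift=0.5, threshold=8.0):
    """Toy one-sided CUSUM: returns the time an upward mean shift is declared, else None."""
    s = 0.0
    for t, x in enumerate(stream):
        s = max(0.0, s + x - drift)
        if s > threshold:
            return t
    return None

# High-dimensional use: one detector per coordinate; coordinates that fire form a
# crude support estimate for where the mean changed.
rng = np.random.default_rng(0)
p, change = 50, 200
data = rng.standard_normal((400, p))
data[change:, :5] += 1.0                      # mean shifts in the first 5 coordinates
alarms = {j: online_cusum(data[:, j]) for j in range(p)}
support_estimate = [j for j, t in alarms.items() if t is not None]
```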

13.
J Bioinform Comput Biol ; 22(3): 2450007, 2024 Jun.
Article in English | MEDLINE | ID: mdl-39036848

ABSTRACT

For sequencing-based spatial transcriptomics data, the gene-spot count matrix is highly sparse. This feature is similar to scRNA-seq. The goal of this paper is to identify whether there exist genes that are frequently under-detected in Visium compared to bulk RNA-seq, and the underlying potential mechanism of under-detection in Visium. We collected paired Visium and bulk RNA-seq data for 28 human samples and 19 mouse samples, which covered diverse tissue sources. We compared the two data types and observed that there indeed exists a collection of genes frequently under-detected in Visium compared to bulk RNA-seq. We performed a motif search to examine the last 350 bp of the frequently under-detected genes, and we observed that the poly (T) motif was significantly enriched in genes identified from both human and mouse data, which matches with our previous finding about frequently under-detected genes in scRNA-seq. We hypothesized that the poly (T) motif may be able to form a hairpin structure with the poly (A) tails of their mRNA transcripts, making it difficult for their mRNA transcripts to be captured during Visium library preparation.
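
A minimal sketch of the kind of motif check described above: flag genes whose last 350 bp of transcript sequence contain a poly(T) run. The sequences and the run-length cutoff (15 consecutive T's) are illustrative assumptions, not the paper's pipeline or threshold.

```python
import re

def has_poly_t(seq, min_run=15, window=350):
    """True if the last `window` bases of `seq` contain a run of >= min_run T's."""
    tail = seq[-window:].upper()
    return re.search("T" * min_run, tail) is not None

transcripts = {
    "geneA": "ACGT" * 200 + "T" * 20,   # toy sequence with a poly(T) tract near the 3' end
    "geneB": "ACGT" * 220,
}
flagged = [gene for gene, seq in transcripts.items() if has_poly_t(seq)]
print(flagged)   # ['geneA']
```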


Subjects
Gene Expression Profiling; Transcriptome; Mice; Humans; Animals; Gene Expression Profiling/methods; RNA, Messenger/genetics; RNA, Messenger/metabolism; Sequence Analysis, RNA/methods; Sequence Analysis, RNA/statistics & numerical data; RNA-Seq/methods; Computational Biology/methods; Nucleotide Motifs
14.
Sensors (Basel) ; 24(13)2024 Jun 29.
Article in English | MEDLINE | ID: mdl-39001022

ABSTRACT

As higher spatiotemporal resolution tactile sensing systems are being developed for prosthetics, wearables, and other biomedical applications, they demand faster sampling rates and generate larger data streams. Sparsifying transformations can alleviate these requirements by enabling compressive sampling and efficient data storage through compression. However, research on the best sparsifying transforms for tactile interactions is lagging. In this work we construct a library of orthogonal and biorthogonal wavelet transforms as sparsifying transforms for tactile interactions and compare their tradeoffs in compression and sparsity. We tested the sparsifying transforms on a publicly available high-density tactile object grasping dataset (548 sensor tactile glove, grasping 26 objects). In addition, we investigated which dimension wavelet transform-1D, 2D, or 3D-would best compress these tactile interactions. Our results show that wavelet transforms are highly efficient at compressing tactile data and can lead to very sparse and compact tactile representations. Additionally, our results show that 1D transforms achieve the sparsest representations, followed by 3D, and lastly 2D. Overall, the best wavelet for coarse approximation is Symlets 4 evaluated temporally which can sparsify to 0.5% sparsity and compress 10-bit tactile data to an average of 0.04 bits per pixel. Future studies can leverage the results of this paper to assist in the compressive sampling of large tactile arrays and free up computational resources for real-time processing on computationally constrained mobile platforms like neuroprosthetics.
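
As one concrete instance of the temporal (1D) wavelet sparsification described above, the sketch below decomposes a single tactile-like time series with Symlets 4 ('sym4' in PyWavelets), keeps only the largest 0.5% of coefficients, and reconstructs a coarse approximation. The synthetic signal and this simplified pipeline are stand-ins for the paper's evaluation on the tactile glove dataset.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(0)
signal = np.cumsum(rng.standard_normal(4096))        # stand-in for a tactile pressure trace

coeffs = pywt.wavedec(signal, "sym4")                 # multilevel 1D DWT with Symlets 4
flat, slices = pywt.coeffs_to_array(coeffs)

k = max(1, int(0.005 * flat.size))                    # keep the largest 0.5% of coefficients
thresh = np.sort(np.abs(flat))[-k]
flat_sparse = np.where(np.abs(flat) >= thresh, flat, 0.0)

coeffs_sparse = pywt.array_to_coeffs(flat_sparse, slices, output_format="wavedec")
recon = pywt.waverec(coeffs_sparse, "sym4")           # coarse approximation of the signal
```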

15.
J Med Imaging (Bellingham) ; 11(3): 033504, 2024 May.
Article in English | MEDLINE | ID: mdl-38938501

ABSTRACT

Purpose: We present a method that combines compressed sensing with parallel imaging that takes advantage of the structure of the sparsifying transformation. Approach: Previous work has combined compressed sensing with parallel imaging using model-based reconstruction but without taking advantage of the structured sparsity. Blurry images for each coil are reconstructed from the fully sampled center region. The optimization problem of compressed sensing is modified to take these blurry images into account, and it is solved to estimate the missing details. Results: Using data of brain, ankle, and shoulder anatomies, the combination of compressed sensing with structured sparsity and parallel imaging reconstructs an image with a lower relative error than does sparse SENSE or L1 ESPIRiT, which do not use structured sparsity. Conclusions: Taking advantage of structured sparsity improves the image quality for a given amount of data as long as a fully sampled region centered on the zero frequency of the appropriate size is acquired.

16.
Ultrasonics ; 142: 107390, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38945018

ABSTRACT

Standard structural health monitoring techniques face well-known difficulties for comprehensive defect diagnosis in real-world structures that have structural, material, or geometric complexity. This motivates the exploration of machine-learning-based structural health monitoring methods for complex structures. However, creating sufficient training data sets with various defects is an ongoing challenge for data-driven machine (deep) learning algorithms. The ability to transfer the knowledge of a trained neural network from one component to another, or to other sections of the same component, would drastically reduce the required training data set. It would also facilitate computationally inexpensive machine-learning-based inspection systems. In this work, a machine-learning-based multi-level damage characterization is demonstrated with the ability to transfer trained knowledge within a sparse sensor network. A novel network spatial assistance and an adaptive convolution technique are proposed for efficient knowledge transfer within the deep learning algorithm. The proposed structural health monitoring method is experimentally evaluated on an aluminum plate with artificially induced defects. The method improves the performance of knowledge-transferred damage characterization by 50% during localization and 24% during severity assessment. Further, experiments using time windows with and without multiple edge reflections are studied. The results reveal that multiply scattered waves contain rich and deterministic defect signatures that can be mined using deep learning neural networks, improving the accuracy of both identification and quantification. In the case of a fixed sensor network, using multiply scattered waves yields 100% prediction accuracy at all levels of damage characterization.

17.
JMIR Med Inform ; 12: e50209, 2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38896468

ABSTRACT

BACKGROUND: Diagnostic errors pose significant health risks and contribute to patient mortality. With the growing accessibility of electronic health records, machine learning models offer a promising avenue for enhancing diagnosis quality. Current research has primarily focused on a limited set of diseases with ample training data, neglecting diagnostic scenarios with limited data availability. OBJECTIVE: This study aims to develop an information retrieval (IR)-based framework that accommodates data sparsity to facilitate broader diagnostic decision support. METHODS: We introduced an IR-based diagnostic decision support framework called CliniqIR. It uses clinical text records, the Unified Medical Language System Metathesaurus, and 33 million PubMed abstracts to classify a broad spectrum of diagnoses independent of training data availability. CliniqIR is designed to be compatible with any IR framework. Therefore, we implemented it using both dense and sparse retrieval approaches. We compared CliniqIR's performance to that of pretrained clinical transformer models such as Clinical Bidirectional Encoder Representations from Transformers (ClinicalBERT) in supervised and zero-shot settings. Subsequently, we combined the strength of supervised fine-tuned ClinicalBERT and CliniqIR to build an ensemble framework that delivers state-of-the-art diagnostic predictions. RESULTS: On a complex diagnosis data set (DC3) without any training data, CliniqIR models returned the correct diagnosis within their top 3 predictions. On the Medical Information Mart for Intensive Care III data set, CliniqIR models surpassed ClinicalBERT in predicting diagnoses with <5 training samples by an average difference in mean reciprocal rank of 0.10. In a zero-shot setting where models received no disease-specific training, CliniqIR still outperformed the pretrained transformer models with a greater mean reciprocal rank of at least 0.10. Furthermore, in most conditions, our ensemble framework surpassed the performance of its individual components, demonstrating its enhanced ability to make precise diagnostic predictions. CONCLUSIONS: Our experiments highlight the importance of IR in leveraging unstructured knowledge resources to identify infrequently encountered diagnoses. In addition, our ensemble framework benefits from combining the complementary strengths of the supervised and retrieval-based models to diagnose a broad spectrum of diseases.
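
To give a flavor of the retrieval-based idea, here is a toy sparse retriever that ranks candidate diagnoses by TF-IDF similarity between a clinical note and a per-diagnosis pseudo-document. This is only a stand-in for CliniqIR's sparse retrieval component; the real framework indexes UMLS Metathesaurus concepts and 33 million PubMed abstracts and is combined with a fine-tuned ClinicalBERT in an ensemble. All text below is fabricated for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# One pseudo-document of literature-style text per candidate diagnosis (toy corpus)
corpus = {
    "sarcoidosis": "noncaseating granulomas bilateral hilar lymphadenopathy erythema nodosum",
    "tuberculosis": "caseating granulomas night sweats hemoptysis positive acid fast bacilli",
    "lymphoma": "painless lymphadenopathy b symptoms mediastinal mass",
}
vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(corpus.values())

clinical_note = "young patient with bilateral hilar lymphadenopathy and erythema nodosum"
scores = cosine_similarity(vectorizer.transform([clinical_note]), doc_matrix).ravel()
ranked = sorted(zip(corpus.keys(), scores), key=lambda kv: -kv[1])
print(ranked[:3])   # top candidate diagnoses by retrieval score
```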

18.
J Appl Stat ; 51(8): 1497-1523, 2024.
Article in English | MEDLINE | ID: mdl-38863802

ABSTRACT

Plant breeders want to develop cultivars that outperform existing genotypes. Some characteristics (here 'main traits') of these cultivars are categorical and difficult to measure directly. It is important to predict the main trait of newly developed genotypes accurately. In addition to marker data, breeding programs often have information on secondary traits (or 'phenotypes') that are easy to measure. Our goal is to improve prediction of main traits with interpretable relations by combining the two data types using variable selection techniques. However, the genomic characteristics can overwhelm the set of secondary traits, so a standard technique may fail to select any phenotypic variables. We develop a new statistical technique that ensures appropriate representation from both the secondary traits and the genotypic variables for optimal prediction. When two data types (markers and secondary traits) are available, we achieve improved prediction of a binary trait by two steps that are designed to ensure that a significant intrinsic effect of a phenotype is incorporated in the relation before accounting for extra effects of genotypes. First, we sparsely regress the secondary traits on the markers and replace the secondary traits by their residuals to obtain the effects of phenotypic variables as adjusted by the genotypic variables. Then, we develop a sparse logistic classifier using the markers and residuals so that the adjusted phenotypes may be selected first to avoid being overwhelmed by the genotypic variables due to their numerical advantage. This classifier uses forward selection aided by a penalty term and can be computed effectively by a technique called the one-pass method. It compares favorably with other classifiers on simulated and real data.
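
The two-step procedure described above can be sketched with off-the-shelf sparse estimators: residualize each secondary trait (phenotype) on the markers with a lasso, then fit a sparse logistic classifier on the residualized phenotypes plus the markers. The paper uses its own penalized forward-selection ("one-pass") classifier rather than an L1 logistic model, and the toy data and penalty levels below are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso, LogisticRegression

rng = np.random.default_rng(0)
n, p_markers, p_pheno = 300, 2000, 10
G = rng.standard_normal((n, p_markers))                              # marker (genotype) matrix
P = 0.3 * G[:, :3] @ rng.standard_normal((3, p_pheno)) \
    + rng.standard_normal((n, p_pheno))                              # secondary traits
y = (P[:, 0] + G[:, 0] + rng.standard_normal(n) > 0).astype(int)     # binary main trait

# Step 1: sparsely regress each secondary trait on the markers; keep the residuals
P_resid = np.column_stack([
    P[:, j] - Lasso(alpha=0.05).fit(G, P[:, j]).predict(G) for j in range(p_pheno)
])

# Step 2: sparse logistic classifier on residualized phenotypes plus markers
X = np.hstack([P_resid, G])
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
```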

19.
Phys Med Biol ; 69(14)2024 Jul 11.
Article in English | MEDLINE | ID: mdl-38917824

ABSTRACT

Objective. A model-based alternating reconstruction coupling fitting method, termed MARCO (Model-based Alternating Reconstruction COupling fitting), is proposed for accurate and fast magnetic resonance parameter mapping. Approach. MARCO uses the signal model as a regularizer by minimizing, during reconstruction, the bias between the image series and the signal produced by the appropriate signal model based on iteratively updated parameter maps. The technique can incorporate prior knowledge of both the image series and the parameters by adding sparsity constraints. The optimization problem is decomposed into three subproblems and solved through three alternating steps involving reconstruction and nonlinear least-squares fitting, which produces both contrast-weighted images and parameter maps simultaneously. Main results. The algorithm is applied to T2 mapping with the extended phase graph algorithm integrated, and validated on undersampled multi-echo spin-echo data from both phantom and in vivo sources. Compared with traditional compressed sensing and model-based methods, the proposed approach yields more accurate T2 maps with more detail at high acceleration factors. Significance. The proposed method provides a basic framework for quantitative MR relaxometry and is theoretically applicable to all quantitative MR relaxometry. It has the potential to improve the diagnostic utility of quantitative imaging techniques.
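
As a sketch of the fitting half of the alternating scheme, the snippet below performs voxel-wise mono-exponential T2 fitting from a multi-echo signal with nonlinear least squares; MARCO alternates this kind of fit with a regularized reconstruction step and uses the extended phase graph model rather than a plain exponential. Echo times, noise level, and the true T2 are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def t2_decay(te, s0, t2):
    # Mono-exponential decay model: S(TE) = S0 * exp(-TE / T2)
    return s0 * np.exp(-te / t2)

te = np.array([10.0, 20.0, 40.0, 80.0, 160.0])        # echo times in ms (assumed)
true_s0, true_t2 = 1.0, 65.0
signal = t2_decay(te, true_s0, true_t2) \
    + 0.01 * np.random.default_rng(0).standard_normal(te.size)

(s0_hat, t2_hat), _ = curve_fit(t2_decay, te, signal, p0=(signal[0], 50.0))
print(f"estimated T2 ~ {t2_hat:.1f} ms")
```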


Subjects
Algorithms; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Phantoms, Imaging; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Humans; Time Factors; Brain/diagnostic imaging
20.
Comput Methods Programs Biomed ; 251: 108212, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38754327

ABSTRACT

BACKGROUND AND OBJECTIVE: There is rising interest in exploiting aggregate information from external medical studies to enhance the statistical analysis of a modestly sized internal dataset. Currently available software packages for analyzing survival data with a cure fraction ignore this potentially available auxiliary information. This paper aims to fill that gap by developing a new R package, CureAuxSP, that can incorporate subgroup survival probabilities extracted from external sources into an internal survival dataset of interest. METHODS: The newly developed R package CureAuxSP provides an efficient approach for information synthesis under mixture cure models, including the Cox proportional hazards mixture cure model and the accelerated failure time mixture cure model as special cases. It focuses on synthesizing subgroup survival probabilities at multiple time points, and the underlying method development relies on the control variate technique. Evaluation of the homogeneity assumption based on a test statistic is carried out automatically by our package, and if heterogeneity does exist, the original outputs can be further refined adaptively. RESULTS: The R package CureAuxSP provides a main function, SMC.AuxSP(), that adaptively incorporates external subgroup survival probabilities into the analysis of internal survival data. We also provide another function, Print.SMC.AuxSP(), for printing the results with a better presentation. Detailed usage is described, and implementations are illustrated with numerical examples, including a simulated dataset with a well-designed data-generating process and a real breast cancer dataset. Substantial efficiency gains can be observed in our results. CONCLUSIONS: Our R package CureAuxSP makes wide applications of utilizing auxiliary information possible. It is anticipated that the performance of mixture cure models can be improved for survival data with a cure fraction, especially for datasets with small sample sizes.


Subjects
Probability; Proportional Hazards Models; Software; Humans; Survival Analysis; Models, Statistical; Computer Simulation; Algorithms; Breast Neoplasms/mortality; Breast Neoplasms/therapy