Results 1 - 20 of 1,411
1.
Brief Bioinform ; 25(4)2024 May 23.
Article in English | MEDLINE | ID: mdl-38856167

ABSTRACT

The genome-wide single-cell chromosome conformation capture technique, single-cell Hi-C (scHi-C), was recently developed to interrogate the genome conformation of individual cells. However, single-cell Hi-C data are much sparser than bulk Hi-C data from a population of cells, and the noise in single-cell Hi-C makes the data difficult to apply and analyze in biological research. Here, we developed the first generative diffusion models (HiCDiff) to denoise single-cell Hi-C data in the form of chromosomal contact matrices. HiCDiff uses a deep residual network to remove the noise in the reverse process of diffusion and can be trained in both unsupervised and supervised learning modes. Benchmarked on several single-cell Hi-C test datasets, the diffusion models substantially remove the noise in single-cell Hi-C data. The unsupervised HiCDiff outperforms most supervised non-diffusion deep learning methods and achieves performance comparable to the state-of-the-art supervised deep learning method on multiple metrics, demonstrating that diffusion models are a useful approach to denoising single-cell Hi-C data. Moreover, its good performance holds when denoising bulk Hi-C data.
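HiCDiff's training code is not shown in this record; as an illustrative sketch, the forward-noising step that any DDPM-style denoiser learns to invert can be written in a few lines of numpy (the linear beta schedule, matrix size, and function name are assumptions, not taken from the paper):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) under a standard DDPM variance schedule;
    `eps` is the Gaussian noise the reverse-process network must predict."""
    alpha_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
    return xt, eps

rng = np.random.default_rng(0)
x0 = rng.random((8, 8))
x0 = (x0 + x0.T) / 2.0                    # toy symmetric contact matrix
betas = np.linspace(1e-4, 0.02, 100)      # assumed linear schedule
xt, eps = forward_diffuse(x0, 99, betas, rng)
```

Given a network's noise estimate in place of `eps`, inverting this same equation step by step is what the reverse diffusion process approximates.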


Subject(s)
Single-Cell Analysis , Single-Cell Analysis/methods , Humans , Computational Biology/methods , Deep Learning , Algorithms
2.
Brief Bioinform ; 25(2)2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38493338

ABSTRACT

In recent years, there has been a growing trend toward parallel clustering analysis of single-cell RNA-seq (scRNA) and single-cell Assay for Transposase-Accessible Chromatin (scATAC) data. However, prevailing methods often treat these two data modalities as equals, neglecting the fact that the scRNA mode holds significantly richer information than the scATAC mode. This disregard prevents the model from benefiting from the insights derived from multiple modalities, compromising overall clustering performance. To this end, we propose scEMC, an effective multi-modal clustering model for parallel scRNA and scATAC data. Concretely, we devised a skip aggregation network to simultaneously learn global structural information among cells and integrate data from diverse modalities. To safeguard the quality of the integrated cell representation against the influence of sparse scATAC data, we connect the scRNA data with the aggregated representation via a skip connection. Moreover, to effectively fit the real distribution of cells, we introduce a Zero-Inflated Negative Binomial-based denoising autoencoder that accommodates corrupted data containing synthetic noise, together with a joint optimization module that employs multiple losses. Extensive experiments underscore the effectiveness of our model. This work contributes to the ongoing exploration of cell subpopulations and tumor microenvironments, and the code is publicly available at https://github.com/DayuHuu/scEMC.
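The record mentions a Zero-Inflated Negative Binomial (ZINB) denoising autoencoder; the loss such an autoencoder typically minimizes is the ZINB negative log-likelihood, sketched below for a single count (a textbook formula, not scEMC's actual implementation; parameter names are illustrative):

```python
import math

def zinb_nll(x, mu, theta, pi):
    """Negative log-likelihood of count x under a zero-inflated negative
    binomial: mean mu, dispersion theta, dropout (excess-zero) prob pi."""
    log_nb_zero = theta * math.log(theta / (theta + mu))      # log NB(0; mu, theta)
    if x == 0:
        # zeros come either from dropout or from the NB itself
        return -math.log(pi + (1.0 - pi) * math.exp(log_nb_zero))
    log_nb = (math.lgamma(x + theta) - math.lgamma(theta) - math.lgamma(x + 1)
              + log_nb_zero + x * math.log(mu / (theta + mu)))
    return -(math.log(1.0 - pi) + log_nb)

# toy gene: two dropouts plus two observed counts
counts = [0, 0, 3, 7]
total = sum(zinb_nll(x, mu=2.0, theta=1.5, pi=0.1) for x in counts)
```

Summed over all genes and cells, this quantity is what the decoder's (mu, theta, pi) heads are trained to minimize.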


Subject(s)
Chromatin , RNA, Small Cytoplasmic , Single-Cell Gene Expression Analysis , Cluster Analysis , Learning , RNA, Small Cytoplasmic/genetics , Transposases , Sequence Analysis, RNA , Gene Expression Profiling
3.
Brief Bioinform ; 24(2)2023 03 19.
Article in English | MEDLINE | ID: mdl-36653906

ABSTRACT

Spatially resolved transcriptomics technologies enable comprehensive measurement of gene expression patterns in the context of intact tissues. However, existing technologies suffer from either low resolution or shallow sequencing depth. Here, we present DIST, a deep learning-based method that imputes gene expression profiles at unmeasured locations and enhances gene expression for both the originally measured spots and the imputed spots via self-supervised learning and transfer learning. We evaluate the performance of DIST for imputation, clustering, differential expression analysis and functional enrichment analysis. The results show that DIST can impute gene expression accurately, enhance gene expression for low-quality data, and help detect more biologically meaningful differentially expressed genes and pathways, thereby allowing deeper insights into the underlying biological processes.


Subject(s)
Deep Learning , Transcriptome , Gene Expression Profiling/methods , Cluster Analysis
4.
Brief Bioinform ; 24(3)2023 05 19.
Article in English | MEDLINE | ID: mdl-37096633

ABSTRACT

In cryogenic electron microscopy (cryo-EM) single particle analysis (SPA), high-resolution three-dimensional structures of biological macromolecules are determined by iteratively aligning and averaging a large number of two-dimensional projections of molecules. Since the correlation measures are sensitive to the signal-to-noise ratio, the various parameter estimation steps in SPA are disturbed by the high-intensity noise in cryo-EM. However, denoising algorithms tend to damage high frequencies and suppress the mid- and high-frequency contrast of micrographs, which is exactly what precise parameter estimation relies on, limiting their application in SPA. In this study, we suggest combining the cryo-EM image processing pipeline with denoising while maximizing the signal's contribution in the various parameter estimation steps. To address the inherent flaws of denoising algorithms, we design an algorithm named MScale to correct the amplitude distortion caused by denoising and propose a new orientation determination strategy to compensate for the high-frequency loss. In experiments on several real datasets, the denoised particles are successfully applied in the class assignment and orientation determination tasks, ultimately enhancing the quality of biomacromolecule reconstruction. The case study on classification indicates that our strategy not only improves the resolution of difficult classes (up to 5 Å) but also resolves an additional class. In the case study on orientation determination, our strategy improves the resolution of the final reconstructed density map by 0.34 Å compared with the conventional strategy. The code is available at https://github.com/zhanghui186/Mscale.


Subject(s)
Image Processing, Computer-Assisted , Single Molecule Imaging , Cryoelectron Microscopy/methods , Image Processing, Computer-Assisted/methods , Algorithms , Signal-To-Noise Ratio
5.
Brief Bioinform ; 24(3)2023 05 19.
Article in English | MEDLINE | ID: mdl-36971393

ABSTRACT

MOTIVATION: A large number of studies have shown that circular RNA (circRNA) affects biological processes by competitively binding miRNA, providing a new perspective for the diagnosis and treatment of human diseases. Therefore, exploring potential circRNA-miRNA interactions (CMIs) is an important and urgent task. Although some computational methods have been tried, their performance is limited by incomplete feature extraction in sparse networks and low computational efficiency on lengthy data. RESULTS: In this paper, we propose JSNDCMI, which combines a multi-structure feature extraction framework with a Denoising Autoencoder (DAE) to meet the challenge of CMI prediction in sparse networks. In detail, JSNDCMI integrates functional similarity and local topological structure similarity in the CMI network through the multi-structure feature extraction framework, then forces the neural network to learn a robust representation of the features through the DAE, and finally uses a Gradient Boosting Decision Tree classifier to predict potential CMIs. JSNDCMI produces the best performance in 5-fold cross-validation on all datasets. In the case study, seven of the top 10 CMIs with the highest scores were verified in PubMed. AVAILABILITY: The data and source code can be found at https://github.com/1axin/JSNDCMI.


Asunto(s)
MicroARNs , Humanos , MicroARNs/genética , ARN Circular , Redes Neurales de la Computación , Programas Informáticos , Biología Computacional/métodos
6.
Brief Bioinform ; 24(1)2023 01 19.
Article in English | MEDLINE | ID: mdl-36631401

ABSTRACT

Advances in single-cell ribonucleic acid sequencing (scRNA-seq) allow researchers to explore cellular heterogeneity and human diseases at cell resolution. Cell clustering is a prerequisite in scRNA-seq analysis since it can recognize cell identities. However, the high dimensionality, noise and significant sparsity of scRNA-seq data make this a major challenge. Although many methods have emerged, they still fail to fully explore the intrinsic properties of cells and the relationships among cells, which seriously affects downstream clustering performance. Here, we propose a new deep contrastive clustering algorithm called scDCCA. It integrates a denoising auto-encoder and a dual contrastive learning module into a deep clustering framework to extract valuable features and realize cell clustering. Specifically, to better characterize and learn data representations robustly, scDCCA utilizes a denoising Zero-Inflated Negative Binomial model-based auto-encoder to extract low-dimensional features. Meanwhile, scDCCA incorporates a dual contrastive learning module to capture the pairwise proximity of cells. By increasing the similarities between positive pairs and the differences between negative ones, the contrasts at both the instance and the cluster level help the model learn more discriminative features and achieve better cell segregation. Furthermore, scDCCA joins feature learning with clustering, realizing representation learning and cell clustering in an end-to-end manner. Experimental results on 14 real datasets validate that scDCCA outperforms eight state-of-the-art methods in terms of accuracy, generalizability, scalability and efficiency. Cell visualization and biological analysis demonstrate that scDCCA significantly improves clustering and facilitates downstream analysis of scRNA-seq data. The code is available at https://github.com/WJ319/scDCCA.
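The dual contrastive module is described only in prose here; a common instance-level formulation is the NT-Xent loss, sketched in numpy below under the assumption (not confirmed by the abstract) that scDCCA uses a SimCLR-style objective:

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """Instance-level NT-Xent loss for two views z1, z2 (n x d): row i of z1
    and row i of z2 are a positive pair; all other rows act as negatives."""
    z = np.concatenate([z1, z2], axis=0).astype(float)
    z /= np.linalg.norm(z, axis=1, keepdims=True)       # cosine geometry
    sim = z @ z.T / tau
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                      # exclude self-similarity
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    m = sim.max(axis=1, keepdims=True)                  # stable log-sum-exp
    logsum = m.ravel() + np.log(np.exp(sim - m).sum(axis=1))
    return float(np.mean(logsum - sim[np.arange(2 * n), pos]))

rng = np.random.default_rng(1)
z = rng.standard_normal((16, 8))
aligned = nt_xent(z, z + 0.01 * rng.standard_normal((16, 8)))   # matched views
shuffled = nt_xent(z, rng.permutation(z))                       # broken pairs
```

Minimizing this loss pulls the two views of the same cell together and pushes different cells apart, which is the "increasing similarities between positive pairs" behavior the abstract describes.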


Subject(s)
Gene Expression Profiling , Single-Cell Gene Expression Analysis , Humans , Gene Expression Profiling/methods , Sequence Analysis, RNA/methods , Single-Cell Analysis/methods , Algorithms , Cluster Analysis
7.
Brief Bioinform ; 24(5)2023 09 20.
Article in English | MEDLINE | ID: mdl-37594302

ABSTRACT

The availability of high-throughput sequencing data creates opportunities to comprehensively understand human diseases, as well as challenges in training machine learning models on such high-dimensional data. Here, we propose a denoised multi-omics integration framework that contains a distribution-based feature denoising algorithm, Feature Selection with Distribution (FSD), for dimension reduction, and a multi-omics integration framework, Attention Multi-Omics Integration (AttentionMOI), to predict cancer prognosis and identify cancer subtypes. We demonstrated that FSD improved model performance using either single-omic or multi-omics data in 15 The Cancer Genome Atlas Program (TCGA) cancers for survival prediction and kidney cancer subtype identification. Our integration framework AttentionMOI outperformed machine learning models and current multi-omics integration algorithms on high-dimensional features. Furthermore, FSD identified features that were associated with cancer prognosis and could be considered biomarkers.


Subject(s)
Genomics , Neoplasms , Humans , Genomics/methods , Multiomics , Neoplasms/genetics , Algorithms
8.
Hum Brain Mapp ; 45(7): e26697, 2024 May.
Article in English | MEDLINE | ID: mdl-38726888

ABSTRACT

Diffusion MRI with free gradient waveforms, combined with simultaneous relaxation encoding, referred to as multidimensional MRI (MD-MRI), offers microstructural specificity in complex biological tissue. This approach delivers intravoxel information about the microstructure, local chemical composition, and importantly, how these properties are coupled within heterogeneous tissue containing multiple microenvironments. Recent theoretical advances incorporated diffusion time dependency and integrated MD-MRI with concepts from oscillating gradients. This framework probes the diffusion frequency, ω, in addition to the diffusion tensor, D, and relaxation, R1, R2, correlations. A D(ω)-R1-R2 clinical imaging protocol was then introduced, with limited brain coverage and 3 mm³ voxel size, which hinder brain segmentation and future cohort studies. In this study, we introduce an efficient, sparse in vivo MD-MRI acquisition protocol providing whole brain coverage at 2 mm³ voxel size. We demonstrate its feasibility and robustness using a well-defined phantom and repeated scans of five healthy individuals. Additionally, we test different denoising strategies to address the sparse nature of this protocol, and show that efficient MD-MRI encoding design demands a nuanced denoising approach. The MD-MRI framework provides rich information that allows resolving the diffusion frequency dependence into intravoxel components based on their D(ω)-R1-R2 distribution, enabling the creation of microstructure-specific maps in the human brain. Our results encourage the broader adoption and use of this new imaging approach for characterizing healthy and pathological tissues.


Subject(s)
Image Processing, Computer-Assisted , Humans , Adult , Image Processing, Computer-Assisted/methods , Diffusion Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Male , Female , Diffusion Tensor Imaging/methods , Young Adult
9.
Brief Bioinform ; 23(1)2022 01 17.
Article in English | MEDLINE | ID: mdl-34607360

ABSTRACT

Learning node representations is a fundamental problem in biological network analysis, as compact representation features reveal complicated network structures and carry useful information for downstream tasks such as link prediction and node classification. Recently, multiple networks that profile objects from different aspects are increasingly accumulating, providing the opportunity to learn objects from multiple perspectives. However, the complex common and specific information across different networks poses challenges to node representation methods. Moreover, ubiquitous noise in networks calls for more robust representations. To deal with these problems, we present a representation learning method for multiple biological networks. First, we accommodate the noise and spurious edges in networks using denoised diffusion, providing robust connectivity structures for the subsequent representation learning. Then, we introduce a graph-regularized integration model to combine the refined networks and compute common representation features. By using the regularized decomposition technique, the proposed model can effectively preserve the common structural property of different networks and simultaneously accommodate their specific information, leading to a consistent representation. A simulation study shows the superiority of the proposed method at different levels of network noise. Three network-based inference tasks, including drug-target interaction prediction, gene function identification and fine-grained species categorization, are conducted using representation features learned from our method. Biological networks at different scales and levels of sparsity are involved. Experimental results on real-world data show that the proposed method has robust performance compared with alternatives. Overall, by eliminating noise and integrating effectively, the proposed method is able to learn useful representations from multiple biological networks.
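The denoised-diffusion step is not specified in this record; a minimal stand-in is lazy random-walk diffusion on the adjacency matrix, which reinforces edges supported by many paths and attenuates isolated spurious ones (function name and parameters are illustrative):

```python
import numpy as np

def denoised_diffusion(A, alpha=0.8, steps=20):
    """Lazy random-walk diffusion over adjacency matrix A: S accumulates
    multi-step transition mass, so edges backed by many short paths are
    reinforced while isolated spurious edges fade. Illustrative stand-in,
    not the paper's exact denoising operator."""
    deg = A.sum(axis=1, keepdims=True)
    P = A / np.where(deg == 0, 1.0, deg)        # row-stochastic transitions
    n = len(A)
    S = np.eye(n)
    for _ in range(steps):
        S = alpha * (S @ P) + (1.0 - alpha) * np.eye(n)
    return S

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
S = denoised_diffusion(A)
```

The smoothed matrix S (rows remain probability distributions) would then feed the integration model in place of the raw, noisy adjacency.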


Subject(s)
Learning , Neural Networks, Computer , Computer Simulation , Diffusion
10.
Brief Bioinform ; 23(4)2022 07 18.
Article in English | MEDLINE | ID: mdl-35821114

ABSTRACT

Developments in single-cell RNA sequencing (scRNA-seq) technologies have enabled biological discoveries at single-cell resolution with high throughput. However, large scRNA-seq datasets suffer from massive technical noise, including batch effects and dropouts, and the dropout is often batch-dependent. Most existing methods address only one of these problems, and we show that popular methods fail to trade off batch effect correction against dropout imputation. Here, inspired by ideas from causal inference, we propose a novel propensity score matching method for scRNA-seq data (scPSM) that borrows information from, and takes a weighted average of, similar cells in the deeply sequenced batch, simultaneously removing the batch effect, imputing dropouts and denoising the data in the entire gene expression space. The proposed method is tested on two simulation datasets and a variety of real scRNA-seq datasets, and the results show that scPSM is superior to other state-of-the-art methods. First, scPSM improves clustering accuracy and mixes cells of the same type, suggesting its ability to maintain cell type separation while correcting for batch. In addition, using the scPSM-integrated data as input yields differential expression results free of batch effects or dropouts. Moreover, scPSM not only achieves ideal denoising but also preserves real biological structure for downstream gene-based analyses. Furthermore, scPSM is robust to hyperparameters and to small datasets with few cells but enormous numbers of genes. Comprehensive evaluations demonstrate that scPSM jointly provides desirable batch effect correction, imputation and denoising for recovering biologically meaningful expression in scRNA-seq data.
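scPSM matches cells on estimated propensity scores, which this record does not spell out; the sketch below substitutes raw expression distance to illustrate only the borrow-and-weighted-average step (a deliberate simplification, not the authors' method):

```python
import numpy as np

def match_impute(target, reference, k=3, tau=1.0):
    """Replace each cell (row) of the shallow `target` batch with a
    softmax-weighted average of its k nearest cells in the deeply sequenced
    `reference` batch. Distance-based caricature only: the published scPSM
    matches cells on estimated propensity scores, not raw distance."""
    out = np.empty(target.shape)
    for i, cell in enumerate(target):
        dist = np.linalg.norm(reference - cell, axis=1)
        nn = np.argsort(dist)[:k]               # k closest reference cells
        w = np.exp(-dist[nn] / tau)             # closer cells weigh more
        out[i] = w @ reference[nn] / w.sum()
    return out

ref = np.array([[0.0, 0.0], [10.0, 10.0], [5.0, 5.0]])
tgt = np.array([[0.1, -0.1], [9.8, 10.2]])
imputed = match_impute(tgt, ref, k=1)           # k=1 snaps to the nearest cell
```

Because the replacement expression comes entirely from the deep batch, batch-specific dropout patterns in the shallow batch are overwritten rather than merely rescaled.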


Subject(s)
Gene Expression Profiling , Single-Cell Analysis , Cluster Analysis , Propensity Score , Sequence Analysis, RNA/methods , Single-Cell Analysis/methods , Software
11.
Magn Reson Med ; 92(4): 1649-1657, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38725132

ABSTRACT

PURPOSE: To investigate the feasibility of diffusion tensor brain imaging at 0.55T with comparisons against 3T. METHODS: Diffusion tensor imaging data with 2 mm isotropic resolution was acquired on a cohort of five healthy subjects using both 0.55T and 3T scanners. The signal-to-noise ratio (SNR) of the 0.55T data was improved using a previous SNR-enhancing joint reconstruction method that jointly reconstructs the entire set of diffusion weighted images from k-space using shared-edge constraints. Quantitative diffusion tensor parameters were estimated and compared across field strengths. We also performed a test-retest assessment of repeatability at each field strength. RESULTS: After applying SNR-enhancing joint reconstruction, the diffusion tensor parameters obtained from 0.55T data were strongly correlated (R² ≥ 0.70) with those obtained from 3T data. Test-retest analysis showed that SNR-enhancing reconstruction improved the repeatability of the 0.55T diffusion tensor parameters. CONCLUSION: High-resolution in vivo diffusion MRI of the human brain is feasible at 0.55T when appropriate noise-mitigation strategies are applied.


Subject(s)
Brain , Diffusion Tensor Imaging , Feasibility Studies , Image Processing, Computer-Assisted , Signal-To-Noise Ratio , Humans , Brain/diagnostic imaging , Diffusion Tensor Imaging/methods , Male , Adult , Reproducibility of Results , Female , Image Processing, Computer-Assisted/methods , Algorithms , Healthy Volunteers
12.
Magn Reson Med ; 92(5): 1980-1994, 2024 Nov.
Article in English | MEDLINE | ID: mdl-38934408

ABSTRACT

PURPOSE: To develop a fast denoising framework for high-dimensional MRI data based on a self-supervised learning scheme that does not require ground-truth clean images. THEORY AND METHODS: Quantitative MRI faces limitations in SNR because the variation of signal amplitude across a large set of images is the key mechanism for quantification. In addition, the complex non-linear signal models make the fitting process vulnerable to noise. To address these issues, we propose a fast deep-learning framework for denoising that efficiently exploits the redundancy in multidimensional MRI data. A self-supervised model was designed to use only noisy images for training, bypassing the challenge of clean-data paucity in clinical practice. For validation, we used two different datasets, a simulated magnetization transfer contrast MR fingerprinting (MTC-MRF) dataset and an in vivo DWI dataset, to show generalizability. RESULTS: The proposed method drastically improved denoising performance in the presence of mild-to-severe noise, regardless of noise distribution, compared with the previous methods BM3D, tMPPCA, and Patch2Self. The improvements were even more pronounced in the subsequent quantification results from the denoised images. CONCLUSION: The proposed MD-S2S (Multidimensional-Self2Self) denoising technique could be further applied to various multi-dimensional MRI data and improve the quantification accuracy of tissue parameter maps.
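The MD-S2S network itself is not described in enough detail here to reproduce; the underlying self-supervised idea (predict each measurement from its neighbors, never from itself, so only noisy data are needed) can be illustrated with a J-invariant "donut" filter:

```python
import numpy as np

def donut_mean(img):
    """J-invariant denoiser: every pixel is replaced by the mean of its 8
    neighbours and never sees its own value, so its quality can be assessed
    on noisy data alone, the core trick behind Self2Self-style training."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    acc = np.zeros((h, w))
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue                       # exclude the centre pixel
            acc += padded[1 + di:1 + di + h, 1 + dj:1 + dj + w]
    return acc / 8.0

rng = np.random.default_rng(4)
clean = np.add.outer(np.linspace(0, 10, 16), np.linspace(0, 10, 16))  # smooth ramp
noisy = clean + rng.standard_normal(clean.shape)                      # sigma = 1
denoised = donut_mean(noisy)
```

MD-S2S replaces this fixed averaging kernel with a learned network and random masking, but the self-supervision principle is the same.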


Subject(s)
Algorithms , Brain , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Signal-To-Noise Ratio , Humans , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Image Processing, Computer-Assisted/methods , Supervised Machine Learning , Deep Learning
13.
Magn Reson Med ; 91(5): 2153-2161, 2024 May.
Article in English | MEDLINE | ID: mdl-38193310

ABSTRACT

PURPOSE: Improving the quality and maintaining the fidelity of large-coverage abdominal hyperpolarized (HP) 13C MRI studies with a patch-based global-local higher-order singular value decomposition (GL-HOSVD) spatiotemporal denoising approach. METHODS: Denoising performance was first evaluated using simulated [1-13C]pyruvate dynamics at different noise levels to determine the optimal kglobal and klocal parameters. The GL-HOSVD spatiotemporal denoising method with the optimized parameters was then applied to two HP [1-13C]pyruvate EPI abdominal human cohorts (n = 7 healthy volunteers and n = 8 pancreatic cancer patients). RESULTS: The parameterization kglobal = 0.2 and klocal = 0.9 denoises abdominal HP data while retaining image fidelity, as evaluated by RMSE. The kPX (conversion rate of pyruvate-to-metabolite, X = lactate or alanine) difference was shown to be <20% with respect to ground-truth metabolic conversion rates when there is adequate SNR (SNRAUC > 5) for downstream metabolites. In both human cohorts, there was a greater than nine-fold gain in peak [1-13C]pyruvate, [1-13C]lactate, and [1-13C]alanine apparent SNRAUC. The improvement in metabolite SNR enabled a more robust quantification of kPL and kPA. After denoising, we observed a 2.1 ± 0.4 and 4.8 ± 2.5-fold increase in the number of voxels reliably fit across abdominal FOVs for kPL and kPA quantification maps. CONCLUSION: Spatiotemporal denoising greatly improves visualization of low-SNR metabolites, particularly [1-13C]alanine, and quantification of [1-13C]pyruvate metabolism in large-FOV HP 13C MRI studies of the human abdomen.
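GL-HOSVD operates on patches with global and local thresholds; its core building block, a truncated higher-order SVD that projects each tensor mode onto its leading singular vectors, can be sketched as follows (a plain HOSVD on one toy tensor, not the paper's patch-based variant):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: matricize tensor T along `mode`."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    """Multiply tensor T by matrix M along `mode`."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd_denoise(T, ranks):
    """Truncated HOSVD: keep the top ranks[m] left singular vectors of each
    mode-m unfolding, project, and reconstruct; energy outside that
    multilinear subspace (mostly noise) is discarded."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = mode_multiply(core, U.T, mode)   # project onto subspaces
    out = core
    for mode, U in enumerate(factors):
        out = mode_multiply(out, U, mode)       # map back to full size
    return out

rng = np.random.default_rng(5)
a, b, c = rng.standard_normal(10), rng.standard_normal(10), rng.standard_normal(10)
clean = np.einsum("i,j,k->ijk", a, b, c)        # multilinear rank (1, 1, 1)
noisy = clean + 0.05 * rng.standard_normal((10, 10, 10))
denoised = hosvd_denoise(noisy, (1, 1, 1))
```

In the published method the x-y-time data are split into patches, with the kglobal and klocal parameters setting how aggressively the global and patch-level spectra are truncated.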


Subject(s)
Magnetic Resonance Imaging , Pyruvic Acid , Humans , Pyruvic Acid/metabolism , Abdomen/diagnostic imaging , Lactates , Alanine , Carbon Isotopes/metabolism
14.
Magn Reson Med ; 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39030953

ABSTRACT

PURPOSE: To develop an SNR enhancement method for CEST imaging using a denoising convolutional autoencoder (DCAE) and compare its performance with state-of-the-art denoising methods. METHODS: The DCAE-CEST model encompasses an encoder and a decoder network. The encoder learns features from the input CEST Z-spectrum via a series of one-dimensional convolutions, nonlinearity applications, and pooling. Subsequently, the decoder reconstructs a denoised output Z-spectrum using a series of up-sampling and convolution layers. The DCAE-CEST model underwent multistage training in an environment constrained by Kullback-Leibler divergence, while ensuring data adaptability through context learning using a Principal Component Analysis-processed Z-spectrum as a reference. The model was trained using simulated Z-spectra, and its performance was evaluated using both simulated data and in vivo data from an animal tumor model. Maps of amide proton transfer (APT) and nuclear Overhauser enhancement (NOE) effects were quantified using a multiple-pool Lorentzian fit, along with an apparent exchange-dependent relaxation metric. RESULTS: In digital phantom experiments, the DCAE-CEST method exhibited superior performance, surpassing existing denoising techniques, as indicated by peak SNR and the Structural Similarity Index. In vivo data further confirmed the effectiveness of DCAE-CEST in denoising the APT and NOE maps when compared with other methods. Although no significant difference was observed in APT between tumors and normal tissues, there was a significant difference in NOE, consistent with previous findings. CONCLUSION: DCAE-CEST can learn the most important features of the CEST Z-spectrum and provides the most effective denoising solution compared with other methods.

15.
NMR Biomed ; : e5228, 2024 Aug 21.
Article in English | MEDLINE | ID: mdl-39169274

ABSTRACT

Quantitative maps of rotating frame relaxation (RFR) time constants are sensitive and useful magnetic resonance imaging tools with which to evaluate tissue integrity in vivo. However, to date, only moderate image resolutions of 1.6 x 1.6 x 3.6 mm3 have been used for whole-brain-coverage RFR mapping in humans at 3 T. For more precise morphometrical examinations, higher spatial resolutions are desirable. Towards the long-term goal of increasing the spatial resolution of RFR mapping without increasing scan times, we explore the use of the recently introduced Transform domain NOise Reduction with DIstribution Corrected principal component analysis (T-NORDIC) algorithm for thermal noise reduction. RFR acquisitions at 3 T were obtained from eight healthy participants (seven males and one female) aged 52 ± 20 years, including adiabatic T1ρ, T2ρ, and nonadiabatic Relaxation Along a Fictitious Field (RAFF) in the rotating frame of rank n = 4 (RAFF4), at both 1.6 x 1.6 x 3.6 mm3 and 1.25 x 1.25 x 2 mm3 image resolutions. We compared RFR values and their confidence intervals (CIs) obtained from fitting the denoised versus nondenoised images, at both the voxel and regional levels, separately for each resolution and RFR metric. The comparison of metrics obtained from denoised versus nondenoised images was performed with a paired t-test, and statistical significance was set at p < 0.05 after Bonferroni correction for multiple comparisons. Using T-NORDIC on the RFR images prior to the fitting procedure decreased the uncertainty of parameter estimation (lower CIs) at both spatial resolutions. The effect was particularly prominent at high spatial resolution for RAFF4. Moreover, T-NORDIC did not degrade map quality, and it had minimal impact on the RFR values. Denoising RFR images with T-NORDIC improves parameter estimation while preserving the image quality and accuracy of all RFR maps, ultimately enabling high-resolution RFR mapping in scan times that are suitable for clinical settings.

16.
NMR Biomed ; : e5211, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39041293

ABSTRACT

Proton magnetic resonance spectroscopic imaging (1H-MRSI) is a powerful tool that enables multidimensional, non-invasive mapping of the neurochemical profile at high resolution over the entire brain. The constant demand for higher spatial resolution in 1H-MRSI has led to increased interest in post-processing-based denoising methods aimed at reducing noise variance. The aim of the present study was to implement two noise-reduction techniques, Marchenko-Pastur principal component analysis (MP-PCA)-based denoising and low-rank total generalized variation (LR-TGV) reconstruction, and to test their potential with, and impact on, preclinical 14.1 T fast in vivo 1H-FID-MRSI datasets. Since there is no known ground truth for in vivo metabolite maps, additional evaluations of the performance of both noise-reduction strategies were conducted using Monte Carlo simulations. Results showed that both denoising techniques increased the apparent signal-to-noise ratio (SNR) while preserving the noise properties of each spectrum for both in vivo and Monte Carlo datasets. Relative metabolite concentrations were not significantly altered by either method, and brain regional differences were preserved in both synthetic and in vivo datasets. Increased precision of metabolite estimates was observed for the two methods, with inconsistencies noted for lower-concentration metabolites. Our study provides a framework for evaluating the performance of MP-PCA and LR-TGV methods on preclinical 1H-FID-MRSI data at 14.1 T. While gains in apparent SNR and precision were observed, concentration estimates ought to be treated with care, especially for low-concentration metabolites.
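MP-PCA itself estimates the noise level from the eigenvalue spectrum; the simplified sketch below assumes the noise std sigma is known and simply discards principal components whose eigenvalues fall under the Marchenko-Pastur bulk edge:

```python
import numpy as np

def mppca_denoise(X, sigma):
    """Discard principal components of X (n samples x p features) whose
    covariance eigenvalues sit below the Marchenko-Pastur upper bulk edge
    for i.i.d. noise of known std `sigma`; keep the rest. The published
    MP-PCA estimates sigma from the spectrum itself instead."""
    n, p = X.shape
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    eigvals = s**2 / n                              # sample covariance spectrum
    edge = sigma**2 * (1.0 + np.sqrt(p / n))**2     # MP upper edge
    s = np.where(eigvals > edge, s, 0.0)            # zero out noise components
    return mean + (U * s) @ Vt

rng = np.random.default_rng(2)
n, p, sigma = 200, 50, 0.1
clean = 0.5 * np.outer(rng.standard_normal(n), rng.standard_normal(p))
noisy = clean + sigma * rng.standard_normal((n, p))
denoised = mppca_denoise(noisy, sigma)
```

For MRSI, the rows would be spectra from neighboring voxels and the columns spectral points, so the retained components capture the shared metabolite signal while thermal noise is suppressed.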

17.
NMR Biomed ; 37(1): e5027, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37644611

ABSTRACT

Chemical exchange saturation transfer (CEST) is a versatile technique that enables noninvasive detections of endogenous metabolites present in low concentrations in living tissue. However, CEST imaging suffers from an inherently low signal-to-noise ratio (SNR) due to the decreased water signal caused by the transfer of saturated spins. This limitation challenges the accuracy and reliability of quantification in CEST imaging. In this study, a novel spatial-spectral denoising method, called BOOST (suBspace denoising with nOnlocal lOw-rank constraint and Spectral local-smooThness regularization), was proposed to enhance the SNR of CEST images and boost quantification accuracy. More precisely, our method initially decomposes the noisy CEST images into a low-dimensional subspace by leveraging the global spectral low-rank prior. Subsequently, a spatial nonlocal self-similarity prior is applied to the subspace-based images. Simultaneously, the spectral local-smoothness property of Z-spectra is incorporated by imposing a weighted spectral total variation constraint. The efficiency and robustness of BOOST were validated in various scenarios, including numerical simulations and preclinical and clinical conditions, spanning magnetic field strengths from 3.0 to 11.7 T. The results demonstrated that BOOST outperforms state-of-the-art algorithms in terms of noise elimination. As a cost-effective and widely available post-processing method, BOOST can be easily integrated into existing CEST protocols, consequently promoting accuracy and reliability in detecting subtle CEST effects.


Subject(s)
Algorithms , Magnetic Resonance Imaging , Reproducibility of Results , Magnetic Resonance Imaging/methods , Signal-To-Noise Ratio
18.
Eur J Nucl Med Mol Imaging ; 51(2): 358-368, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37787849

ABSTRACT

PURPOSE: Due to various physical degradation factors and the limited counts received, PET image quality needs further improvement. The denoising diffusion probabilistic model (DDPM) is a distribution-learning-based model that tries to transform a normal distribution into a specific data distribution through iterative refinements. In this work, we proposed and evaluated different DDPM-based methods for PET image denoising. METHODS: Under the DDPM framework, one way to perform PET image denoising is to provide the PET image and/or the prior image as the input. Another way is to supply the prior image as the network input with the PET image included in the refinement steps, which can fit scenarios of different noise levels. 150 brain [18F]FDG datasets and 140 brain [18F]MK-6240 (imaging neurofibrillary tangle deposition) datasets were utilized to evaluate the proposed DDPM-based methods. RESULTS: Quantification showed that the DDPM-based frameworks with PET information included generated better results than the nonlocal mean, Unet and generative adversarial network (GAN)-based denoising methods. Adding an additional MR prior in the model helped achieve better performance and further reduced the uncertainty during image denoising. Solely relying on the MR prior while ignoring the PET information resulted in large bias. Regional and surface quantification showed that employing the MR prior as the network input while embedding the PET image as a data-consistency constraint during inference achieved the best performance. CONCLUSION: DDPM-based PET image denoising is a flexible framework that can efficiently utilize prior information and achieve better performance than the nonlocal mean, Unet and GAN-based denoising methods.
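The exact refinement scheme is not given in this record; a generic DDPM ancestral sampling step, with an optional (hypothetical) data-consistency pull toward the measured PET image, looks like this:

```python
import numpy as np

def ddpm_reverse_step(xt, eps_hat, t, betas, rng, y=None, lam=0.0):
    """One ancestral DDPM sampling step x_t -> x_{t-1} from a predicted
    noise eps_hat. `y` and `lam` add a crude, hypothetical data-consistency
    pull toward an observed image; lam=0 disables it."""
    alphas = 1.0 - betas
    abar = np.cumprod(alphas)
    mean = (xt - betas[t] / np.sqrt(1.0 - abar[t]) * eps_hat) / np.sqrt(alphas[t])
    if y is not None:
        mean = (1.0 - lam) * mean + lam * y     # mix in the measurement
    if t == 0:
        return mean                             # final step adds no noise
    return mean + np.sqrt(betas[t]) * rng.standard_normal(xt.shape)

rng = np.random.default_rng(3)
betas = np.linspace(1e-4, 0.02, 10)
x0 = rng.random((4, 4))
eps = rng.standard_normal((4, 4))
xt = np.sqrt(1.0 - betas[0]) * x0 + np.sqrt(betas[0]) * eps   # forward-noised at t=0
x0_rec = ddpm_reverse_step(xt, eps, 0, betas, rng)
```

With the true noise supplied at t=0, the step inverts the forward noising exactly; in practice `eps_hat` comes from the trained network, and the prior image enters either as network input or through the data-consistency term.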


Asunto(s)
Procesamiento de Imagen Asistido por Computador , Tomografía de Emisión de Positrones , Humanos , Procesamiento de Imagen Asistido por Computador/métodos , Tomografía de Emisión de Positrones/métodos , Relación Señal-Ruido , Modelos Estadísticos , Algoritmos
19.
AJR Am J Roentgenol ; 222(1): e2329765, 2024 01.
Artículo en Inglés | MEDLINE | ID: mdl-37646387

RESUMEN

BACKGROUND. Photon-counting detector (PCD) CT may allow lower radiation doses than conventional energy-integrating detector (EID) CT with preserved image quality. OBJECTIVE. The purpose of this study was to compare PCD CT and EID CT, reconstructed with and without a denoising tool, in terms of image quality of the osseous pelvis in a phantom, with attention to low radiation doses. METHODS. A pelvic phantom comprising human bones embedded in acrylic material mimicking soft tissue underwent PCD CT and EID CT at various tube potentials and radiation doses ranging from 0.05 to 5.00 mGy. Additional denoised reconstructions were generated using a commercial tool. Noise was measured in the acrylic material. Two readers performed independent qualitative assessments that entailed determining the denoised EID CT reconstruction with the lowest acceptable dose and then comparing this reference reconstruction with PCD CT reconstructions without and with denoising, using subjective Likert scales. RESULTS. Noise was lower for PCD CT than for EID CT. For instance, at 0.05 mGy and 100 kV with tin filter, noise was 38.4 HU for PCD CT versus 48.8 HU for EID CT. Denoising further reduced noise; for example, for PCD CT at 100 kV with tin filter at 0.25 mGy, noise was 19.9 HU without denoising versus 9.7 HU with denoising. For both readers, the lowest acceptable dose for EID CT was 0.10 mGy (total score, 11 of 15 for both readers). Both readers somewhat agreed that PCD CT without denoising at 0.10 mGy (matching the reference reconstruction dose) was relatively better than the reference reconstruction in terms of osseous structures, artifacts, and image quality. Both readers also somewhat agreed that denoised PCD CT reconstructions at 0.10 mGy and 0.05 mGy (matched and lower doses, respectively, relative to the reference reconstruction dose) were relatively better than the reference reconstruction for the image quality measures. CONCLUSION. PCD CT showed better image quality than EID CT at the lowest acceptable radiation dose for EID CT. PCD CT with denoising yielded better image quality at a dose lower than the lowest acceptable EID CT dose. CLINICAL IMPACT. PCD CT with denoising could facilitate lower radiation doses for pelvic imaging.
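The noise figures quoted above (e.g., 38.4 HU for PCD CT versus 48.8 HU for EID CT) are standard deviations of attenuation values measured in a uniform region of the acrylic material. A minimal sketch of such an ROI noise measurement, where the array layout and ROI placement are illustrative assumptions:

```python
import numpy as np

def roi_noise_hu(ct_slice, center, radius):
    """Image noise as the standard deviation of HU values inside a
    circular ROI placed in uniform material (here: the acrylic phantom).

    ct_slice : 2-D array of HU values
    center   : (row, col) of the ROI centre
    radius   : ROI radius in pixels
    """
    yy, xx = np.indices(ct_slice.shape)
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    return float(ct_slice[mask].std())
```

On a synthetic uniform slice with Gaussian noise of known standard deviation, the function recovers that standard deviation to within sampling error.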


Asunto(s)
Fotones , Estaño , Humanos , Tomografía Computarizada por Rayos X/métodos , Fantasmas de Imagen , Dosis de Radiación , Pelvis
20.
Network ; : 1-25, 2024 Jul 11.
Artículo en Inglés | MEDLINE | ID: mdl-38989778

RESUMEN

Demosaicking is an active research area explored by a large number of scientists. Current digital imaging technologies capture colour images with a single monochrome sensor coupled with a Colour Filter Array (CFA), so a demosaicking procedure is required to obtain a full-colour image. Image denoising and image demosaicking are two important image restoration techniques that have gained popularity in recent years, and finding a suitable strategy for performing both restorations is critical for researchers. Hence, a deep learning (DL)-based image denoising and demosaicking method is developed in this research. The Autoregressive Circle Wave Optimization (ACWO)-based Demosaicking Convolutional Neural Network (DMCNN) is designed for image demosaicking, while the Quantum Wavelet Transform (QWT) is used for image denoising: the QWT analyses the abrupt changes in the noisy input image, the transformed image is subjected to a thresholding technique that determines an appropriate threshold range, soft thresholding is applied to the resulting wavelet coefficients, and the original image is then reconstructed using the Inverse Quantum Wavelet Transform (IQWT). Finally, the denoised and demosaicked images are fused using a weighted average. The proposed QWT+DMCNN-ACWO model achieved a peak signal-to-noise ratio (PSNR) of 49.549 dB, a second-derivative-like measure of enhancement (SDME) of 59.53 dB, a Structural Similarity Index (SSIM) of 0.963, a figure of merit (FOM) of 0.890, and a computational time of 0.571.
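The wavelet soft-thresholding and weighted-average fusion steps described above can be sketched as follows. As an assumption, a classical one-level 2-D Haar transform stands in for the quantum wavelet transform, and the threshold and fusion weight are illustrative placeholders rather than values from the paper.

```python
import numpy as np

def soft(x, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_soft_denoise(img, t):
    """One-level 2-D Haar transform, soft-threshold the detail bands,
    then invert (a classical stand-in for the QWT/IQWT pair)."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0            # approximation band (kept)
    lh = soft((a + b - c - d) / 2.0, t)   # detail bands (thresholded)
    hl = soft((a - b + c - d) / 2.0, t)
    hh = soft((a - b - c + d) / 2.0, t)
    out = np.empty_like(img, dtype=float)
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    out[0::2, 1::2] = (ll + lh - hl - hh) / 2.0
    out[1::2, 0::2] = (ll - lh + hl - hh) / 2.0
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return out

def fuse(denoised, demosaicked, w=0.5):
    """Weighted-average fusion of the two restoration results."""
    return w * denoised + (1.0 - w) * demosaicked
```

On a synthetic flat image corrupted with Gaussian noise, thresholding the detail bands reduces the reconstruction error relative to the noisy input, which is the behaviour the abstract's denoising stage relies on.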
