Results 1 - 20 of 1,594
1.
PLoS One; 19(9): e0310904, 2024.
Article in English | MEDLINE | ID: mdl-39321161

ABSTRACT

To reduce encoding complexity and stream size and further improve compression performance, this paper studies depth prediction partition encoding. For mode selection, the optimization analysis builds on fast strategic decision-making methods to keep the data processing comprehensive. For the adaptive design, different adaptive quantization parameter adjustment strategies are applied to the equatorial and polar regions, reflecting the different levels of user attention across a 360-degree virtual reality video. The aim is an optimal balance between distortion and stream size, so that the output stream size is managed while video quality is maintained. The results show a maximum bit-rate reduction of 2.92% and an average reduction of 1.76%; average coding time is reduced by 39.28%, and the average reconstruction quality is 0.043, with almost no quality loss perceptible to viewers. The model also performed well on 4K, 6K, and 8K sequences. The proposed depth partitioning adaptive strategy therefore improves encoding efficiency while preserving video quality.
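As a rough illustration of the region-dependent quantization idea described in this abstract (the paper's actual partition decisions and QP rule are not given here), the toy sketch below applies a latitude-dependent QP offset to an equirectangular frame so that polar rows spend fewer bits; the linear ramp and the offset value are assumptions for illustration only.

```python
def adaptive_qp(base_qp: int, row: int, height: int, polar_offset: int = 4) -> int:
    """Toy latitude-adaptive quantization rule for an equirectangular 360-degree
    frame: rows near the poles, which receive less viewer attention, get a larger
    QP and therefore fewer bits. The ramp shape and offset are illustrative only."""
    latitude = abs((row + 0.5) / height - 0.5) * 2.0  # 0 at the equator, 1 at a pole
    return base_qp + round(polar_offset * latitude)

# QP assigned to each of 8 coding-block rows, equator in the middle.
print([adaptive_qp(32, r, 8) for r in range(8)])
```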


Subject(s)
Algorithms; Video Recording; Virtual Reality; Video Recording/methods; Humans; Data Compression/methods; Image Processing, Computer-Assisted/methods
2.
PLoS One; 19(9): e0308796, 2024.
Article in English | MEDLINE | ID: mdl-39325757

ABSTRACT

Lossless data compression is essential for effective data compression and computation in VLSI test-vector generation and testing, as well as in hardware AI/ML computations. The Golomb code is an effective technique for lossless data compression; it reduces to the hardware-friendly Golomb-Rice form when the divisor can be expressed as a power of two. This work aims to increase the compression ratio by further encoding the unary part of the Golomb-Rice (GR) code so as to decrease the number of bits used, focusing mainly on optimizing the hardware on the encoding side. The algorithm was developed and coded in Verilog and simulated using Modelsim. The code was then synthesised in the Cadence Encounter RTL Synthesiser. The modifications carried out show around a 6% to 19% reduction in bits used for a linearly distributed data set. Worst-case delays are reduced by 3% to 8%. Area reduction varies from 22% to 36% for the different methods. Power simulation shows nearly a 7% reduction in switching power. This suggests the use of the Golomb-Rice coding technique for test-vector compression and data computation for multiple data types, which should ideally follow a geometric distribution.
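For reference, a minimal software model of the baseline Golomb-Rice code discussed above (divisor M = 2^k, unary quotient plus k-bit remainder); the paper's hardware modification of the unary part is not reproduced here.

```python
def golomb_rice_encode(n: int, k: int) -> str:
    """Golomb-Rice codeword for a non-negative integer with divisor M = 2**k:
    the quotient is written in unary (q ones then a zero), the remainder in k bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def golomb_rice_decode(bits: str, k: int) -> int:
    """Decode a single codeword produced by golomb_rice_encode."""
    q = bits.index("0")                      # length of the unary prefix
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r

for value in (0, 5, 19, 42):
    code = golomb_rice_encode(value, k=3)
    assert golomb_rice_decode(code, k=3) == value
    print(value, "->", code)
```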


Subject(s)
Algorithms; Data Compression; Data Compression/methods; Computers; Computer Simulation; Oryza
3.
Commun Biol; 7(1): 1081, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39227646

ABSTRACT

The surge in advanced imaging techniques has generated vast biomedical image data with diverse dimensions in space, time, and spectrum, posing major challenges to conventional compression techniques in image storage, transmission, and sharing. Here, we propose an intelligent image compression approach built on the first demonstration of semantic redundancy of biomedical data in the implicit neural function domain. This Semantic redundancy based Implicit Neural Compression guided with Saliency map (SINCS) notably improves compression efficiency for arbitrary-dimensional image data in terms of compression ratio and fidelity. Moreover, with weight-transfer and residual entropy coding strategies, it shows improved compression speed while maintaining high quality. SINCS yields high-quality compression with over 2000-fold compression ratios on 2D, 2D-T, 3D, and 4D biomedical images of diverse targets, ranging from a single virus to entire human organs, and ensures that reliable downstream tasks, such as object segmentation and quantitative analyses, can be conducted at high efficiency.


Subject(s)
Data Compression; Semantics; Data Compression/methods; Humans; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Algorithms
4.
PLoS One; 19(9): e0307619, 2024.
Article in English | MEDLINE | ID: mdl-39264977

ABSTRACT

Medical image security is paramount in the digital era but remains a significant challenge. This paper introduces an innovative zero-watermarking methodology tailored for medical imaging, ensuring robust protection without compromising image quality. We utilize Speeded-Up Robust Features (SURF) for high-precision feature extraction and singular value decomposition (SVD) to embed watermarks into the frequency domain, preserving the original image's integrity. Our methodology uniquely encodes watermarks in a non-intrusive manner, leveraging the robustness of the extracted features and the resilience of the SVD approach. The embedded watermark is imperceptible, maintaining the diagnostic value of medical images. Extensive experiments under various attacks, including Gaussian noise, JPEG compression, and geometric distortions, demonstrate the methodology's superior performance. The results reveal exceptional robustness, with high normalized correlation (NC) and peak signal-to-noise ratio (PSNR) values, outperforming existing techniques. Specifically, under Gaussian noise and rotation attacks, the watermark retrieved from the encrypted domain maintained an NC value close to 1.00, signifying near-perfect resilience. Even under severe attacks such as 30% cropping, the methodology exhibited a significantly higher NC than current state-of-the-art methods.
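The robustness figures quoted above rely on two standard metrics; a short sketch of how NC and PSNR are typically computed (standard definitions, not code from the paper) follows.

```python
import numpy as np

def psnr(original: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two images of equal shape."""
    mse = np.mean((original.astype(float) - distorted.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def normalized_correlation(w_ref: np.ndarray, w_ext: np.ndarray) -> float:
    """Normalized correlation between a reference watermark and an extracted one;
    values near 1.0 indicate the watermark survived the attack."""
    a = w_ref.astype(float).ravel()
    b = w_ext.astype(float).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```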


Subject(s)
Algorithms; Computer Security; Humans; Diagnostic Imaging/methods; Signal-To-Noise Ratio; Image Processing, Computer-Assisted/methods; Data Compression/methods
5.
Magn Reson Med; 92(6): 2535-2545, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39129199

ABSTRACT

PURPOSE: To implement rosette readout trajectories with compressed sensing reconstruction for fast and motion-robust CEST and magnetization transfer contrast imaging with inherent correction of B0 inhomogeneity. METHODS: A pulse sequence was developed for fast saturation transfer imaging using a stack of rosette trajectories with a higher sampling density near the k-space center. Each rosette lobe was segmented into two halves to generate dual-echo images. B0 inhomogeneities were estimated using the phase difference between the images and corrected subsequently. The rosette-based imaging was evaluated in comparison to a fully sampled Cartesian trajectory and demonstrated on CEST phantoms (creatine solutions and egg white) and healthy volunteers at 3 T. RESULTS: Compared with the conventional Cartesian acquisition, compressed sensing reconstructed rosette images provided overall higher contrast-to-noise ratio and significantly faster readout time. Accurate B0 map estimation was achieved from the rosette acquisition, with a negligible bias of 0.01 Hz between the rosette and dual-echo Cartesian gradient echo B0 maps, using the latter as ground truth. The water-saturation spectra (Z-spectra) and amide proton transfer weighted signals obtained from the rosette-based sequence were well preserved compared with the fully sampled data, both in the phantom and human studies. CONCLUSIONS: Fast, motion-robust, and inherently B0-corrected CEST and magnetization transfer contrast imaging using rosette trajectories could improve subject comfort and compliance, improve contrast-to-noise ratio, and provide inherent B0 homogeneity information. This work is expected to significantly accelerate the translation of CEST-MRI into a robust, clinically viable approach.
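The B0 estimation step described above follows the usual dual-echo phase-difference relation, Δf = Δφ / (2π·ΔTE); a minimal sketch (array names and units are assumptions) is:

```python
import numpy as np

def b0_map_from_dual_echo(echo1: np.ndarray, echo2: np.ndarray, delta_te: float) -> np.ndarray:
    """Estimate an off-resonance (B0) map in Hz from two complex echo images.
    np.angle of the conjugate product wraps the phase difference into (-pi, pi],
    and dividing by 2*pi*delta_TE (in seconds) converts it to frequency."""
    phase_diff = np.angle(echo2 * np.conj(echo1))
    return phase_diff / (2.0 * np.pi * delta_te)
```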


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Phantoms, Imaging; Humans; Magnetic Resonance Imaging/methods; Image Processing, Computer-Assisted/methods; Brain/diagnostic imaging; Motion; Data Compression/methods; Healthy Volunteers; Signal-To-Noise Ratio; Reproducibility of Results; Image Interpretation, Computer-Assisted/methods; Image Enhancement/methods
6.
Neural Netw; 179: 106541, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39089153

ABSTRACT

Compressed Sensing (CS) is a groundbreaking paradigm in image acquisition, challenging the constraints of the Nyquist-Shannon sampling theorem. This enables high-quality image reconstruction using a minimal number of measurements. Neural Networks' potent feature induction capabilities enable advanced data-driven CS methods to achieve high-fidelity image reconstruction. However, achieving satisfactory reconstruction performance, particularly in terms of perceptual quality, remains challenging at extremely low sampling rates. To tackle this challenge, we introduce a novel two-stage image CS framework based on latent diffusion, named LD-CSNet. In the first stage, we utilize an autoencoder pre-trained on a large dataset to represent natural images as low-dimensional latent vectors, establishing prior knowledge distinct from sparsity and effectively reducing the dimensionality of the solution space. In the second stage, we employ a conditional diffusion model for maximum likelihood estimates in the latent space. This is supported by a measurement embedding module designed to encode measurements, making them suitable for a denoising network. This guides the generation process in reconstructing low-dimensional latent vectors. Finally, the image is reconstructed using a pre-trained decoder. Experimental results across multiple public datasets demonstrate LD-CSNet's superior perceptual quality and robustness to noise. It maintains fidelity and visual quality at lower sampling rates. Research findings suggest the promising application of diffusion models in image CS. Future research can focus on developing more appropriate models for the first stage.


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Humans; Image Processing, Computer-Assisted/methods; Data Compression/methods; Algorithms; Diffusion
7.
Artif Intell Med; 156: 102948, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39173422

ABSTRACT

Metagenomics is a rapidly expanding field that uses next-generation sequencing technology to analyze the genetic makeup of environmental samples. However, accurately identifying the organisms in a metagenomic sample can be complex, and traditional reference-based methods can fall short in some instances. In this study, we present a novel approach for metagenomic identification that uses data compressors as features for taxonomic classification. By evaluating a comprehensive set of compressors, both general-purpose and genomic-specific, we demonstrate the effectiveness of this method in accurately identifying organisms in metagenomic samples. The results indicate that features from multiple compressors can help identify taxonomy: an overall accuracy of 95% was achieved on an imbalanced dataset whose classes have limited samples. The study also showed that the correlation between compression and classification is insignificant, highlighting the need for a multi-faceted approach to metagenomic identification. This approach offers a significant advancement in the field of metagenomics, providing a reference-less method for taxonomic identification that is both effective and efficient while revealing insights into the statistical and algorithmic nature of genomic data. The code to validate this study is publicly available at https://github.com/ieeta-pt/xgTaxonomy.
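A minimal example of turning a general-purpose compressor into a similarity signal, in the spirit of the approach above (the paper combines several compressors and a downstream classifier; only gzip and the standard normalized compression distance are shown here):

```python
import gzip

def compressed_size(data: bytes) -> int:
    """gzip-compressed size, a crude algorithmic-complexity feature."""
    return len(gzip.compress(data))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two sequences: smaller values
    mean the compressor finds shared structure, which can feed a classifier."""
    cx, cy, cxy = compressed_size(x), compressed_size(y), compressed_size(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

read = b"ACGTACGTACGTTTGCA" * 20
related = b"ACGTACGTACGTTTGCA" * 18 + b"ACGTTTGCAACGTACGT"
unrelated = b"GGATCCGGGTTATATAT" * 20
print(round(ncd(read, related), 3), round(ncd(read, unrelated), 3))
```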


Subject(s)
Algorithms; Data Compression; Metagenomics; Metagenomics/methods; Data Compression/methods; High-Throughput Nucleotide Sequencing/methods; Humans
8.
Magn Reson Imaging; 113: 110220, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39173963

ABSTRACT

OBJECTIVES: Compressed sensing allows image reconstruction from sparsely sampled k-space data, which is particularly useful in dynamic contrast-enhanced MRI (DCE-MRI). The aim of this study was to assess the diagnostic value of a volume-interpolated 3D T1-weighted spoiled gradient-echo sequence with variable-density Cartesian undersampling and compressed sensing (CS) for head and neck MRI. METHODS: Seventy-one patients with clinical indications for head and neck MRI were included. DCE-MRI was performed at 3 Tesla using CS-VIBE (variable-density undersampling, temporal resolution 3.4 s, slice thickness 1 mm). Image quality was compared to standard Cartesian VIBE. Three experienced readers independently evaluated image quality and lesion conspicuity on a 5-point Likert scale and determined the DCE-derived time-intensity curve (TIC) types. RESULTS: CS-VIBE showed comparable or higher image-quality scores than standard VIBE for overall image quality (4.3 ± 0.6 vs. 4.2 ± 0.7, p = 0.682), vessel contour (4.6 ± 0.4 vs. 4.4 ± 0.6, p < 0.001), muscle contour (4.4 ± 0.5 vs. 4.5 ± 0.6, p = 0.302) and lesion conspicuity (4.5 ± 0.7 vs. 4.3 ± 0.9, p = 0.024), with improved fat saturation (4.8 ± 0.3 vs. 3.8 ± 0.4, p < 0.001) and significantly reduced movement artifacts (4.6 ± 0.6 vs. 3.7 ± 0.7, p < 0.001). Standard VIBE outperformed CS-VIBE in the delineation of pharyngeal mucosa (4.2 ± 0.5 vs. 4.6 ± 0.6, p < 0.001). Where a focal lesion was identified, lesion size was similar across readers for CS-VIBE and standard VIBE (p = 0.101). TIC curve assessment showed good interobserver agreement (k = 0.717). CONCLUSION: CS-VIBE with variable-density Cartesian undersampling enables DCE-MRI of the head and neck region with diagnostic, high image quality and high temporal resolution.


Subject(s)
Contrast Media; Head and Neck Neoplasms; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Female; Male; Middle Aged; Aged; Adult; Head and Neck Neoplasms/diagnostic imaging; Image Processing, Computer-Assisted/methods; Neck/diagnostic imaging; Image Enhancement/methods; Aged, 80 and over; Head/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Reproducibility of Results; Young Adult; Data Compression/methods; Algorithms
9.
Neural Netw; 179: 106555, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39068676

ABSTRACT

Lossy image coding techniques usually introduce various undesirable compression artifacts. Recently, deep convolutional neural networks have made encouraging advances in compression artifact reduction. However, most of them focus on restoring the luma channel without considering the chroma components, and most are hard to deploy in practical applications because of their high model complexity. In this article, we propose a dual-stage feedback network (DSFN) for lightweight color image compression artifact reduction. Specifically, we propose a novel curriculum learning strategy that drives the DSFN to reduce color image compression artifacts in a luma-to-RGB manner. In the first stage, the DSFN is dedicated to reconstructing the luma channel, whose high-level features, containing rich structural information, are then rerouted to the second stage through a feedback connection to guide the RGB image restoration. Furthermore, we present a novel enhanced feedback block for efficient high-level feature extraction, in which an adaptive iterative self-refinement module progressively refines the low-level features, and an enhanced separable convolution fully exploits multiscale image information. Extensive experiments show the notable advantage of our DSFN over several state-of-the-art methods in both quantitative indices and visual effects, at lower model complexity.


Subject(s)
Artifacts; Color; Data Compression; Feedback; Image Processing, Computer-Assisted; Neural Networks, Computer; Data Compression/methods; Image Processing, Computer-Assisted/methods; Algorithms; Humans; Deep Learning
10.
Magn Reson Med; 92(6): 2696-2706, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39056341

ABSTRACT

PURPOSE: This study proposes faster virtual observation point (VOP) compression as well as post-processing algorithms for specific absorption rate (SAR) matrix compression. Furthermore, it shows the relation between the number of channels and the computational burden for VOP-based SAR calculation. METHODS: The proposed new algorithms combine the respective benefits of two different criteria for determining upper boundedness of SAR matrices by the VOPs. Comparisons of the old and new algorithms are performed for head coil arrays with various channel counts. The new post-processing algorithm is used to post-process the VOP sets of nine arrays, and the number of VOPs for a fixed median relative overestimation is compared. RESULTS: The new algorithms are faster than the old algorithms by a factor of two to more than 10. The compression efficiency (number of VOPs relative to initial number of SAR matrices) is identical. For a fixed median relative overestimation, the number of VOPs increases logarithmically with the number of RF coil channels when post-processing is applied. CONCLUSION: The new algorithms are much faster than previous algorithms. Post-processing is very beneficial for online SAR supervision of MRI systems with high channel counts, since for a given number of VOPs the relative SAR overestimation can be lowered.
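For context, local SAR supervision evaluates one quadratic form per SAR matrix; the sketch below (standard formulation, not the paper's compression algorithm) shows why reducing thousands of matrices to a few VOPs cuts the online cost.

```python
import numpy as np

def worst_case_local_sar(sar_matrices, b):
    """Peak local SAR for complex channel weights b over a set of Hermitian
    SAR matrices Q (one per voxel, or one per VOP): local SAR = Re(b^H Q b).
    With VOPs, the same maximum is taken over far fewer matrices, at the cost
    of a controlled overestimation."""
    return max(float(np.real(np.vdot(b, Q @ b))) for Q in sar_matrices)

# Toy example with 3 channels and random positive semi-definite matrices.
rng = np.random.default_rng(0)
mats = [(lambda A: A.conj().T @ A)(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
        for _ in range(1000)]
b = rng.normal(size=3) + 1j * rng.normal(size=3)
print(worst_case_local_sar(mats, b))
```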


Subject(s)
Algorithms; Data Compression; Magnetic Resonance Imaging; Magnetic Resonance Imaging/methods; Data Compression/methods; Humans; Phantoms, Imaging; Image Processing, Computer-Assisted/methods; Reproducibility of Results; Brain/diagnostic imaging
11.
Neural Netw; 179: 106533, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39079378

ABSTRACT

The increasing size of pre-trained language models has led to growing interest in model compression, with pruning and distillation as the primary methods. Existing pruning and distillation methods are effective at maintaining model accuracy while reducing model size, but they come with limitations. For instance, pruning is often suboptimal and biased by its relaxation into a continuous optimization problem, and distillation relies primarily on one-to-one layer mappings for knowledge transfer, which underutilizes the rich knowledge in the teacher. Therefore, we propose a method of joint pruning and distillation for automatic pruning of pre-trained language models. Specifically, we first propose Gradient Progressive Pruning (GPP), which achieves a smooth transition of indicator-vector values from real to binary by progressively converging the values of unimportant units' indicator vectors to zero before the end of the search phase. This effectively overcomes the limitations of traditional pruning methods while supporting compression at higher sparsity. In addition, we propose Dual Feature Distillation (DFD). DFD adaptively fuses teacher features globally and student features locally, and then uses the dual features of global teacher features and local student features for knowledge distillation. This realizes a "preview-review" mechanism that better extracts useful information from multi-level teacher information and transfers it to the student. Comparative experiments on the GLUE benchmark dataset and ablation experiments indicate that our method outperforms other state-of-the-art methods.
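For orientation, the conventional one-to-one logit distillation that this line of work builds on (and which the paper argues underuses the teacher) is a temperature-softened KL term mixed with the usual cross-entropy; a standard sketch in PyTorch follows, with illustrative hyperparameters.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Classic logit distillation: a temperature-softened KL term pulling the
    student toward the teacher, mixed with cross-entropy on the hard labels."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage with random logits for a 4-class task.
s = torch.randn(8, 4, requires_grad=True)
t = torch.randn(8, 4)
y = torch.randint(0, 4, (8,))
print(distillation_loss(s, t, y).item())
```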


Subject(s)
Neural Networks, Computer; Data Compression/methods; Algorithms; Humans
12.
Hum Brain Mapp; 45(11): e26795, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39045881

ABSTRACT

The architecture of the brain is too complex to be intuitively surveyable without the use of compressed representations that project its variation into a compact, navigable space. The task is especially challenging with high-dimensional data, such as gene expression, where the joint complexity of anatomical and transcriptional patterns demands maximum compression. The established practice is to use standard principal component analysis (PCA), whose computational felicity is offset by limited expressivity, especially at great compression ratios. Employing whole-brain, voxel-wise Allen Brain Atlas transcription data, here we systematically compare compressed representations based on the most widely supported linear and non-linear methods-PCA, kernel PCA, non-negative matrix factorisation (NMF), t-stochastic neighbour embedding (t-SNE), uniform manifold approximation and projection (UMAP), and deep auto-encoding-quantifying reconstruction fidelity, anatomical coherence, and predictive utility across signalling, microstructural, and metabolic targets, drawn from large-scale open-source MRI and PET data. We show that deep auto-encoders yield superior representations across all metrics of performance and target domains, supporting their use as the reference standard for representing transcription patterns in the human brain.
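As a baseline reference for the comparison above, the standard PCA compress-and-reconstruct loop looks like the following sketch (scikit-learn, with a random stand-in for the voxel-by-gene matrix; the study's data and dimensionalities differ).

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 200))          # hypothetical voxels x genes matrix

pca = PCA(n_components=16)               # 16-dimensional compressed representation
Z = pca.fit_transform(X)                 # project into the compact space
X_hat = pca.inverse_transform(Z)         # reconstruct back to gene space

# Reconstruction fidelity as the fraction of variance retained by the code.
retained = 1.0 - np.sum((X - X_hat) ** 2) / np.sum((X - X.mean(axis=0)) ** 2)
print(f"variance retained by the 16-D code: {retained:.3f}")
```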


Subject(s)
Brain; Magnetic Resonance Imaging; Transcription, Genetic; Humans; Brain/diagnostic imaging; Brain/metabolism; Transcription, Genetic/physiology; Positron-Emission Tomography; Image Processing, Computer-Assisted/methods; Principal Component Analysis; Data Compression/methods; Atlases as Topic
13.
Proc Natl Acad Sci U S A; 121(28): e2320870121, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38959033

ABSTRACT

Efficient storage and sharing of massive biomedical data would open up their wide accessibility to different institutions and disciplines. However, compressors tailored for natural photos and videos quickly reach their limits on biomedical data, while emerging deep-learning-based methods demand huge training data and are difficult to generalize. Here, we propose to conduct Biomedical data compRession with Implicit nEural Function (BRIEF) by representing the target data with compact neural networks, which are data-specific and thus have no generalization issues. Benefiting from the strong representation capability of implicit neural functions, BRIEF achieves 2-3 orders of magnitude compression on diverse biomedical data at significantly higher fidelity than existing techniques. BRIEF also delivers consistent performance across the whole data volume and supports customized, spatially varying fidelity. BRIEF's multifold advantages also serve reliable downstream tasks at low bandwidth. Our approach will facilitate low-bandwidth data sharing and promote collaboration and progress in the biomedical field.
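A toy version of representing data with a compact implicit neural function, in the spirit of BRIEF (sizes, architecture, and training schedule are illustrative assumptions, not the paper's configuration): a small MLP maps coordinates to intensities, and its weights become the compressed representation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
H = W = 128
ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
image = torch.sin(3 * xs) * torch.cos(2 * ys)        # stand-in for a biomedical image
target = image.reshape(-1, 1)

net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):                                 # fit the network to this one image
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(coords), target)
    loss.backward()
    opt.step()

n_params = sum(p.numel() for p in net.parameters())
print(f"final MSE {loss.item():.5f}; {n_params} weights stored vs {H * W} pixel values")
```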


Subject(s)
Information Dissemination; Neural Networks, Computer; Humans; Information Dissemination/methods; Data Compression/methods; Deep Learning; Biomedical Research/methods
14.
Gigascience; 13: 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-39028587

ABSTRACT

BACKGROUND: With the rise of large-scale genome sequencing projects, genotyping of thousands of samples has produced immense variant call format (VCF) files. It is becoming increasingly challenging to store, transfer, and analyze these voluminous files. Compression methods have been used to tackle these issues, aiming at both a high compression ratio and fast random access, but existing methods have not yet achieved a satisfactory compromise between these two objectives. FINDINGS: To address this issue, we introduce GSC (Genotype Sparse Compression), a specialized and refined lossless compression tool for VCF files. In benchmark tests conducted across various open-source datasets, GSC showed exceptional performance in genotype data compression, achieving compression ratios 26.9% to 82.4% higher than the industry's most advanced tools, GBC and GTC, respectively. In lossless compression scenarios, GSC also demonstrated robust performance, with compression ratios 1.5× to 6.5× greater than general-purpose tools such as gzip, zstd, and BCFtools (a mode not supported by either GBC or GTC). Achieving such high compression ratios did require some reasonable trade-offs, including longer decompression times: GSC is 1.2× to 2× slower than GBC, yet 1.1× to 1.4× faster than GTC. Moreover, GSC maintained decompression query speeds equivalent to its competitors, and in terms of RAM usage it outperformed both. Overall, GSC's comprehensive performance surpasses that of the most advanced technologies. CONCLUSION: GSC balances high compression ratios with rapid data access, enhancing genomic data management. It supports seamless PLINK binary format conversion, simplifying downstream analysis.
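The sparsity that genotype compressors such as GSC exploit can be illustrated with a toy encoder that keeps only non-reference genotypes as (sample index, value) pairs; this is a simplification for intuition, not GSC's actual format.

```python
import numpy as np

def sparse_encode(gt_row: np.ndarray):
    """Keep only non-reference genotypes of one variant row (0 = hom-ref,
    1 = het, 2 = hom-alt) as index/value pairs."""
    gt_row = np.asarray(gt_row, dtype=np.int8)
    idx = np.flatnonzero(gt_row).astype(np.uint32)
    return gt_row.size, idx, gt_row[idx]

def sparse_decode(n_samples: int, idx: np.ndarray, values: np.ndarray) -> np.ndarray:
    out = np.zeros(n_samples, dtype=np.int8)
    out[idx] = values
    return out

row = np.zeros(100_000, dtype=np.int8)
row[[12, 503, 99_998]] = [1, 2, 1]                   # a rare variant: 3 carriers
n, idx, vals = sparse_encode(row)
assert np.array_equal(sparse_decode(n, idx, vals), row)
print(f"{n} genotypes stored as {idx.size} index/value pairs")
```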


Subject(s)
Data Compression; Software; Data Compression/methods; Humans; Genotype; Computational Biology/methods; Algorithms; High-Throughput Nucleotide Sequencing/methods
15.
Sci Rep; 14(1): 17162, 2024 Jul 26.
Article in English | MEDLINE | ID: mdl-39060441

ABSTRACT

Cardiac monitoring systems in Internet of Things (IoT) healthcare, reliant on limited battery and computational capacity, need efficient local processing and wireless transmission for comprehensive analysis. Due to the power-intensive wireless transmission in IoT devices, ECG signal compression is essential to minimize data transfer. This paper presents a real-time, low-complexity algorithm for compressing electrocardiogram (ECG) signals. The algorithm uses just nine arithmetic operations per ECG sample point, generating a hybrid Pulse Width Modulation (PWM) signal storable in a compact 4-bit resolution format. Despite its simplicity, it performs comparably to existing methods in terms of Percentage Root-Mean-Square Difference (PRD) and space-saving while significantly reducing complexity and maintaining robustness against signal noise. It achieves an average Bit Compression Ratio (BCR) of 4 and space savings of 90.4% for ECG signals in the MIT-BIH database, with a PRD of 0.33% and a Quality Score (QS) of 12. The reconstructed signal shows no adverse effects on QRS complex detection and heart rate variability, preserving both the signal amplitude and periodicity. This efficient method for transferring ECG data from wearable devices enables real-time cardiac activity monitoring with reduced data storage requirements. Its versatility suggests potential broader applications, extending to compression of various signal types beyond ECG.
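The quality figures above use standard ECG-compression metrics; their usual definitions (assumed here, since the abstract does not spell them out) are straightforward to compute.

```python
import numpy as np

def prd(x, x_rec) -> float:
    """Percentage root-mean-square difference between an ECG segment and its reconstruction."""
    x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def bit_compression_ratio(original_bits: float, compressed_bits: float) -> float:
    """BCR: bits needed before compression divided by bits needed after."""
    return original_bits / compressed_bits

def quality_score(cr: float, prd_percent: float) -> float:
    """QS, commonly defined as compression ratio divided by PRD."""
    return cr / prd_percent
```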


Subject(s)
Algorithms; Data Compression; Electrocardiography; Signal Processing, Computer-Assisted; Electrocardiography/methods; Electrocardiography/instrumentation; Humans; Data Compression/methods; Heart Rate/physiology; Monitoring, Physiologic/methods; Monitoring, Physiologic/instrumentation
16.
Phys Med; 124: 104491, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39079308

ABSTRACT

BACKGROUND: Optimization of the dose the patient receives during scanning is an important problem in modern medical X-ray computed tomography (CT). One of the basic ways to solve it is to reduce the number of views. Compressed sensing theory helped promote the development of a new class of effective reconstruction algorithms for limited-data CT. These compressed-sensing-inspired (CSI) algorithms optimize the Lp (0 ≤ p ≤ 1) norm of images and can accurately reconstruct CT tomograms from very few views. The paper presents a review of CSI algorithms and discusses prospects for their further use in commercial low-dose CT. METHODS: A large body of literature on CSI algorithms was searched. To structure the collected material, the author gives a classification framework within which he describes Lp regularization methods, the basic CSI algorithms most often used in few-view CT, and some of their derivatives. Numerous examples illustrate the use of CSI algorithms in few-view and low-dose CT. RESULTS: A list of CSI algorithms is compiled from the literature search and, for clarity, summarized in a table. It is concluded that some of the algorithms are already capable of reconstruction from 20 to 30 views with acceptable quality and a dose reduction by a factor of 10. DISCUSSION: In conclusion, the author discusses how soon CSI reconstruction algorithms can be introduced into the practice of medical diagnosis and used in commercial CT scanners.
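The prototype of the Lp-norm reconstructions surveyed in this review is l1-regularized least squares; a minimal iterative soft-thresholding (ISTA) sketch with a random stand-in for the few-view projection operator illustrates the idea.

```python
import numpy as np

def ista(A, b, lam, step, n_iter=2000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    the p = 1 member of the Lp-regularized family discussed in the review."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - b))                       # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 100))                 # stand-in for a few-view system matrix
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]
b = A @ x_true
x_hat = ista(A, b, lam=0.05, step=1.0 / np.linalg.norm(A, 2) ** 2)
print(np.round(x_hat[[5, 40, 77]], 2))         # should approximately recover the three nonzeros
```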


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Radiation Dosage; Tomography, X-Ray Computed; Tomography, X-Ray Computed/methods; Image Processing, Computer-Assisted/methods; Humans; Data Compression/methods
17.
Bioinformatics; 40(7): 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38984796

ABSTRACT

MOTIVATION: The introduction of DeepMind's AlphaFold 2 enabled the prediction of protein structures at an unprecedented scale. The AlphaFold Protein Structure Database and the ESM Metagenomic Atlas contain hundreds of millions of structures stored in CIF and/or PDB formats. When compressed with a general-purpose utility like gzip, this translates to tens of terabytes of data, which hinders the effective use of predicted structures in large-scale analyses. RESULTS: Here, we present ProteStAr, a compressor dedicated to CIF/PDB as well as supplementary PAE files. Its main contribution is a novel approach to predicting atom coordinates on the basis of previously analyzed atoms. This allows efficient encoding of the coordinates, the largest component of protein structure files. The compression is lossless by default, though a lossy mode with a controlled maximum error of coordinate reconstruction is also available. Compared to the competing packages, i.e., BinaryCIF, Foldcomp, and PDC, our approach offers a superior compression ratio at established reconstruction accuracy. By the efficient use of threads at both the compression and decompression stages, the algorithm takes advantage of the multicore architecture of current central processing units and operates at speeds of about 1 GB/s. The presence of Python and C++ APIs further increases the usability of the presented method. AVAILABILITY AND IMPLEMENTATION: The source code of ProteStAr is available at https://github.com/refresh-bio/protestar.
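The coordinate-prediction idea can be illustrated with a much simpler predictor than ProteStAr's: predicting each atom from the previous one and compressing the quantized residuals already shrinks the dominant component of a structure file. The chain below is synthetic, and the actual predictor, quantization, and entropy coder differ.

```python
import numpy as np
import zlib

def predict_residuals(coords: np.ndarray, scale: int = 1000) -> np.ndarray:
    """Quantize coordinates (to 0.001 units) and encode each atom as the
    difference from the previous atom; residuals are small and compress well."""
    q = np.round(np.asarray(coords, float) * scale).astype(np.int32)
    return np.diff(q, axis=0, prepend=0)       # first row keeps the absolute start

rng = np.random.default_rng(0)
steps = rng.normal(0.0, 0.2, size=(10_000, 3)) + np.array([1.5, 0.0, 0.0])
coords = np.cumsum(steps, axis=0)              # synthetic chain, ~1.5 units apart along x
raw = np.round(coords * 1000).astype(np.int32).tobytes()
res = predict_residuals(coords).tobytes()
print(len(zlib.compress(raw)), "bytes raw vs", len(zlib.compress(res)), "bytes predicted")
```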


Subject(s)
Algorithms; Databases, Protein; Proteins; Software; Proteins/chemistry; Protein Conformation; Data Compression/methods; Computational Biology/methods
18.
Magn Reson Med; 92(4): 1363-1375, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38860514

ABSTRACT

PURPOSE: Hyperpolarized 129Xe MRI benefits from non-Cartesian acquisitions that sample k-space efficiently and rapidly. However, their reconstructions are complex and burdened by decay processes unique to hyperpolarized gas. Currently used gridded reconstructions are prone to artifacts caused by magnetization decay and are ill-suited to undersampling. We present a compressed sensing (CS) reconstruction approach that incorporates magnetization decay in the forward model, thereby producing images with increased sharpness and contrast, even in undersampled data. METHODS: Radio-frequency, T1, and T2* decay processes were incorporated into the forward model and solved using iterative methods including CS. The decay-modeled reconstruction was validated in simulations and then tested in 2D/3D-spiral ventilation and 3D-radial gas-exchange MRI. Quantitative metrics including apparent SNR and sharpness were compared between gridded, CS, and twofold-undersampled CS reconstructions. Observations were validated in gas-exchange data collected from 15 healthy and 25 post-hematopoietic-stem-cell-transplant participants. RESULTS: CS reconstructions in simulations yielded images with threefold increases in accuracy. CS increased sharpness and contrast for in vivo ventilation imaging and showed greater accuracy for undersampled acquisitions. CS improved gas-exchange imaging, particularly in the dissolved phase, where apparent SNR improved and structure became discernable. Finally, CS showed repeatability in important global gas-exchange metrics, including the median dissolved-gas signal ratio and the median angle between real/imaginary components. CONCLUSION: A non-Cartesian CS reconstruction approach that incorporates hyperpolarized 129Xe decay processes is presented. This approach enables improved image sharpness, contrast, and overall image quality, in addition to up to threefold undersampling. This contribution benefits all hyperpolarized gas MRI through improved accuracy and decreased scan durations.
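The decay that the forward model must account for follows the standard hyperpolarized-gas signal behavior: the non-renewable magnetization is depleted by every RF pulse and relaxes with T1, so later k-space views carry less signal. A sketch of those per-view weights, with illustrative numbers rather than the study's protocol:

```python
import numpy as np

def hyperpolarized_signal_weights(n_views: int, flip_deg: float, tr: float, t1: float) -> np.ndarray:
    """Relative transverse signal available at each excitation: depleted by
    cos(alpha) per pulse and by exp(-TR/T1) between pulses. A model-based
    reconstruction can fold these weights into its forward operator."""
    alpha = np.deg2rad(flip_deg)
    n = np.arange(n_views)
    return np.sin(alpha) * np.cos(alpha) ** n * np.exp(-n * tr / t1)

w = hyperpolarized_signal_weights(n_views=64, flip_deg=5.0, tr=0.015, t1=20.0)
print(f"first view {w[0]:.4f}, last view {w[-1]:.4f}")
```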


Subject(s)
Algorithms; Computer Simulation; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Xenon Isotopes; Magnetic Resonance Imaging/methods; Humans; Image Processing, Computer-Assisted/methods; Male; Signal-To-Noise Ratio; Female; Imaging, Three-Dimensional/methods; Adult; Phantoms, Imaging; Artifacts; Data Compression/methods; Reproducibility of Results; Lung/diagnostic imaging; Contrast Media/chemistry
19.
Neural Netw; 178: 106411, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38906056

ABSTRACT

Advancements in Neural Networks have led to larger models, challenging their implementation on embedded devices with memory, battery, and computational constraints. Consequently, network compression has flourished, offering solutions that reduce operations and parameters. However, many methods rely on heuristics and often require re-training to recover accuracy. Model reduction techniques extend beyond Neural Networks and are also relevant in the Verification and Performance Evaluation fields. This paper bridges widely used reduction strategies with formal concepts such as lumpability, designed for analyzing Markov Chains. We propose a pruning approach based on lumpability that preserves exact behavioral outcomes without data dependence or fine-tuning. Relaxing strict quotienting-method definitions enables a formal understanding of common reduction techniques.
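The formal notion the paper borrows, ordinary lumpability of a Markov chain, is simple to check for a given partition of states: every state in a block must send the same total probability into each block. A small sketch:

```python
import numpy as np

def is_lumpable(P: np.ndarray, partition) -> bool:
    """Check ordinary lumpability of transition matrix P with respect to a
    partition (list of lists of state indices)."""
    for block in partition:
        for target in partition:
            mass = [P[s, target].sum() for s in block]
            if not np.allclose(mass, mass[0]):
                return False
    return True

P = np.array([[0.2, 0.3, 0.5],
              [0.1, 0.4, 0.5],
              [0.3, 0.3, 0.4]])
print(is_lumpable(P, [[0, 1], [2]]))   # True: states 0 and 1 can be merged exactly
```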


Subject(s)
Markov Chains; Neural Networks, Computer; Algorithms; Humans; Data Compression/methods
20.
Commun Biol; 7(1): 553, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38724695

ABSTRACT

For the last two decades, the amount of genomic data produced by scientific and medical applications has been growing at a rapid pace. To enable software solutions that analyze, process, and transmit these data in an efficient and interoperable way, ISO and IEC released the first version of the compression standard MPEG-G in 2019. However, non-proprietary implementations of the standard are not openly available so far, limiting fair scientific assessment of the standard and, therefore, hindering its broad adoption. In this paper, we present Genie, to the best of our knowledge the first open-source encoder that compresses genomic data according to the MPEG-G standard. We demonstrate that Genie reaches state-of-the-art compression ratios while offering interoperability with any other standard-compliant decoder independent from its manufacturer. Finally, the ISO/IEC ecosystem ensures the long-term sustainability and decodability of the compressed data through the ISO/IEC-supported reference decoder.


Subject(s)
Data Compression; Genomics; Software; Genomics/methods; Data Compression/methods; Humans