Results 1 - 20 of 2,525
1.
Commun Biol ; 7(1): 553, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38724695

ABSTRACT

For the last two decades, the amount of genomic data produced by scientific and medical applications has been growing at a rapid pace. To enable software solutions that analyze, process, and transmit these data in an efficient and interoperable way, ISO and IEC released the first version of the compression standard MPEG-G in 2019. However, non-proprietary implementations of the standard are not openly available so far, limiting fair scientific assessment of the standard and, therefore, hindering its broad adoption. In this paper, we present Genie, to the best of our knowledge the first open-source encoder that compresses genomic data according to the MPEG-G standard. We demonstrate that Genie reaches state-of-the-art compression ratios while offering interoperability with any other standard-compliant decoder independent from its manufacturer. Finally, the ISO/IEC ecosystem ensures the long-term sustainability and decodability of the compressed data through the ISO/IEC-supported reference decoder.


Subjects
Data Compression, Genomics, Software, Genomics/methods, Data Compression/methods, Humans
2.
Sci Rep ; 14(1): 10560, 2024 05 08.
Article in English | MEDLINE | ID: mdl-38720020

ABSTRACT

Research on video analytics, especially human behavior recognition, has become increasingly popular in recent years. It is widely applied in virtual reality, video surveillance, and video retrieval. With the advancement of deep learning algorithms and computer hardware, the conventional two-dimensional convolution technique for training video models has been replaced by three-dimensional convolution, which enables the extraction of spatio-temporal features. Specifically, the use of 3D convolution in human behavior recognition has been the subject of growing interest. However, the increased dimensionality has led to challenges such as a dramatic increase in the number of parameters, higher time complexity, and a strong dependence on GPUs for effective spatio-temporal feature extraction; training can be considerably slow without powerful GPU hardware. To address these issues, this study proposes an Adaptive Time Compression (ATC) module. Functioning as an independent component, ATC can be seamlessly integrated into existing architectures and achieves data compression by eliminating redundant frames within video data. The ATC module effectively reduces GPU computing load and time complexity with negligible loss of accuracy, thereby facilitating real-time human behavior recognition.
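[Editorial note] As an illustration only (not the authors' ATC module), redundant-frame elimination of the kind described above can be sketched by keeping a frame only when it differs sufficiently from the last retained one; the function name and threshold below are assumptions.

```python
import numpy as np

def drop_redundant_frames(frames, threshold=8.0):
    """Keep a frame only if its mean absolute difference from the last kept
    frame exceeds `threshold` (an illustrative pixel-intensity cutoff)."""
    kept = [frames[0]]
    for frame in frames[1:]:
        diff = np.mean(np.abs(frame.astype(np.float32) - kept[-1].astype(np.float32)))
        if diff >= threshold:
            kept.append(frame)
    return np.stack(kept)

# Toy clip: 8 distinct frames, each repeated 8 times (64 frames total).
base = np.random.randint(0, 256, size=(8, 112, 112, 3), dtype=np.uint8)
clip = np.repeat(base, 8, axis=0)
print(clip.shape[0], "->", drop_redundant_frames(clip).shape[0], "frames")  # 64 -> 8
```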


Subjects
Algorithms, Data Compression, Video Recording, Humans, Data Compression/methods, Human Activities, Deep Learning, Image Processing, Computer-Assisted/methods, Pattern Recognition, Automated/methods
3.
Sensors (Basel) ; 24(7)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38610365

ABSTRACT

High-quality cardiopulmonary resuscitation (CPR) and training are important for successful revival during out-of-hospital cardiac arrest (OHCA). However, existing training faces challenges in quantifying each aspect of performance. This study explored the possibility of using a three-dimensional motion capture system to accurately and effectively assess CPR operations, particularly the previously non-quantified arm postures, and to analyze the relationships among parameters to guide students in improving their performance. We used a motion capture system (Mars series, Nokov, China) to collect compression data over five cycles, recording the dynamic position of each marker point in three-dimensional space over time and calculating compression depth and arm angles. Most parameters deviated unstably from the standard to some extent, especially for the untrained students. The five data sets for each parameter per individual all revealed statistically significant differences (p < 0.05). The correlation between Angle 1' and Angle 2' differed between trained (rs = 0.203, p < 0.05) and untrained students (rs = -0.581, p < 0.01). Performance in both groups still needed improvement. Assessments should therefore consider not only overall performance but also each individual compression. This study provides a new perspective for quantifying compression parameters; future efforts should incorporate additional parameters and analyze the relationships among them.


Subjects
Cardiopulmonary Resuscitation, Data Compression, Humans, Feasibility Studies, Motion Capture, China
4.
PLoS One ; 19(4): e0301622, 2024.
Article in English | MEDLINE | ID: mdl-38630695

ABSTRACT

This paper proposes a reinforced concrete (RC) boundary beam-wall system that requires less construction material and a smaller floor height than the conventional RC transfer girder system. The structural performance of this system under axial compression was evaluated by testing four half-scale specimens. In addition, three-dimensional nonlinear finite element analysis was performed to verify the effectiveness of the boundary beam-wall system. Three test parameters were considered: the lower wall length-to-upper wall length ratio, the lower wall thickness, and the stirrup details of the lower wall. The load-displacement curve was plotted for each specimen and its failure mode was identified. The test results showed that a decrease in the lower wall length-to-upper wall length ratio significantly reduced the peak strength of the boundary beam-wall system, and that a difference between the upper and lower wall thicknesses resulted in lateral bending caused by eccentricity in the out-of-plane direction. Additionally, incorporating cross-ties and reducing stirrup spacing in the lower wall significantly improved initial stiffness and peak strength, effectively minimizing stress concentration.


Subjects
Construction Materials, Data Compression, Finite Element Analysis, Physical Phenomena
5.
J Acoust Soc Am ; 155(4): 2589-2602, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38607268

ABSTRACT

The processing and perception of amplitude modulation (AM) in the auditory system reflect a frequency-selective process, often described as a modulation filterbank. Previous studies on perceptual AM masking reported similar results for older listeners with hearing impairment (HI listeners) and young listeners with normal hearing (NH listeners), suggesting no effects of age or hearing loss on AM frequency selectivity. However, recent evidence has shown that age, independently of hearing loss, adversely affects AM frequency selectivity. Hence, this study aimed to disentangle the effects of hearing loss and age. A simultaneous AM masking paradigm was employed, using a sinusoidal carrier at 2.8 kHz, narrowband noise modulation maskers, and target modulation frequencies of 4, 16, 64, and 128 Hz. The results obtained from young (n = 3, 24-30 years of age) and older (n = 10, 63-77 years of age) HI listeners were compared to previously obtained data from young and older NH listeners. Notably, the HI listeners generally exhibited lower (unmasked) AM detection thresholds and greater AM frequency selectivity than their NH counterparts in both age groups. Overall, the results suggest that age negatively affects AM frequency selectivity for both NH and HI listeners, whereas hearing loss improves AM detection and AM selectivity, likely due to the loss of peripheral compression.


Subjects
Data Compression, Deafness, Hearing Loss, Humans, Perceptual Masking
6.
PLoS One ; 19(4): e0288296, 2024.
Article in English | MEDLINE | ID: mdl-38557995

ABSTRACT

Network traffic prediction is an important network monitoring method, widely used in network resource optimization and anomaly detection. However, with the increasing scale of networks and the rapid development of 5th-generation mobile networks (5G), traditional traffic forecasting methods are no longer adequate. To solve this problem, this paper applies Long Short-Term Memory (LSTM) networks, data augmentation, clustering algorithms, model compression, and other techniques, and proposes a Cluster-based Lightweight PREdiction Model (CLPREM) for real-time traffic prediction in 5G mobile networks. We designed unique data processing and classification methods that make CLPREM more robust than traditional neural network models. To demonstrate the effectiveness of the method, we designed and conducted experiments in a variety of settings. The experimental results confirm that CLPREM obtains higher accuracy than traditional prediction schemes at a lower time cost. To address the occasional anomaly prediction issue in CLPREM, we propose a preprocessing method with minimal impact on time overhead. This approach not only enhances the accuracy of CLPREM but also effectively resolves the real-time traffic prediction challenge in 5G mobile networks.


Assuntos
Compressão de Dados , Redes Neurais de Computação , Algoritmos , Previsões
7.
Sci Rep ; 14(1): 7650, 2024 04 01.
Article in English | MEDLINE | ID: mdl-38561346

ABSTRACT

This study presents an advanced metaheuristic approach termed the Enhanced Gorilla Troops Optimizer (EGTO), which builds upon the Marine Predators Algorithm (MPA) to enhance the search capabilities of the Gorilla Troops Optimizer (GTO). Like numerous other metaheuristic algorithms, the GTO encounters difficulties in preserving convergence accuracy and stability, notably when tackling intricate and adaptable optimization problems, especially when compared to more advanced optimization techniques. Addressing these challenges and aiming for improved performance, this paper proposes the EGTO, integrating high and low-velocity ratios inspired by the MPA. The EGTO technique effectively balances exploration and exploitation phases, achieving impressive results by utilizing fewer parameters and operations. Evaluation on a diverse array of benchmark functions, comprising 23 established functions and ten complex ones from the CEC2019 benchmark, highlights its performance. Comparative analysis against established optimization techniques reveals EGTO's superiority, consistently outperforming its counterparts such as tuna swarm optimization, grey wolf optimizer, gradient based optimizer, artificial rabbits optimization algorithm, pelican optimization algorithm, Runge Kutta optimization algorithm (RUN), and original GTO algorithms across various test functions. Furthermore, EGTO's efficacy extends to addressing seven challenging engineering design problems, encompassing three-bar truss design, compression spring design, pressure vessel design, cantilever beam design, welded beam design, speed reducer design, and gear train design. The results showcase EGTO's robust convergence rate, its adeptness in locating local/global optima, and its supremacy over alternative methodologies explored.


Subjects
Alaska Natives, Data Compression, Lagomorpha, Animals, Humans, Rabbits, Gorilla gorilla, Algorithms, Benchmarking
8.
J Biomed Opt ; 29(Suppl 1): S11529, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38650979

ABSTRACT

Significance: Compressed sensing (CS) uses special measurement designs combined with powerful mathematical algorithms to reduce the amount of data to be collected while maintaining image quality. This is relevant to almost any imaging modality, and in this paper we focus on CS in photoacoustic projection imaging (PAPI) with integrating line detectors (ILDs). Aim: Our previous research involved rather general CS measurements, where each ILD can contribute to any measurement. In the real world, however, the design of CS measurements is subject to practical constraints. In this research, we aim at a CS-PAPI system where each measurement involves only a subset of ILDs, and which can be implemented in a cost-effective manner. Approach: We extend the existing PAPI with a self-developed CS unit. The system provides structured CS matrices for which the existing recovery theory cannot be applied directly. A random search strategy is applied to select the CS measurement matrix within this class for which we obtain exact sparse recovery. Results: We implement a CS PAPI system for a compression factor of 4:3, where specific measurements are made on separate groups of 16 ILDs. We algorithmically design optimal CS measurements that have proven sparse CS capabilities. Numerical experiments are used to support our results. Conclusions: CS with proven sparse recovery capabilities can be integrated into PAPI, and numerical results support this setup. Future work will focus on applying it to experimental data and utilizing data-driven approaches to enhance the compression factor and generalize the signal class.


Subjects
Algorithms, Equipment Design, Image Processing, Computer-Assisted, Photoacoustic Techniques, Photoacoustic Techniques/methods, Photoacoustic Techniques/instrumentation, Image Processing, Computer-Assisted/methods, Data Compression/methods, Phantoms, Imaging
9.
Genome Biol ; 25(1): 106, 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38664753

ABSTRACT

Centrifuger is an efficient taxonomic classification method that compares sequencing reads against a microbial genome database. In Centrifuger, the Burrows-Wheeler transformed genome sequences are losslessly compressed using a novel scheme called run-block compression. Run-block compression achieves sublinear space complexity and is effective at compressing diverse microbial databases like RefSeq while supporting fast rank queries. Combining this compression method with other strategies for compacting the Ferragina-Manzini (FM) index, Centrifuger reduces the memory footprint by half compared to other FM-index-based approaches. Furthermore, the lossless compression and the unconstrained match length help Centrifuger achieve greater accuracy than competing methods at lower taxonomic levels.
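[Editorial note] As a much-simplified, hypothetical illustration of the general idea behind run-based compression of a Burrows-Wheeler transformed string with rank-query support (this is not Centrifuger's run-block data structure), one can store only the runs and answer rank queries from them:

```python
from itertools import groupby

class RunRank:
    """Store a BWT string as runs and answer rank queries from the runs alone."""

    def __init__(self, bwt):
        self.runs = []                      # (start_position, symbol, run_length)
        pos = 0
        for sym, group in groupby(bwt):
            length = sum(1 for _ in group)
            self.runs.append((pos, sym, length))
            pos += length

    def rank(self, sym, i):
        """Number of occurrences of `sym` in bwt[:i]."""
        total = 0
        for start, run_sym, length in self.runs:
            if start >= i:
                break
            if run_sym == sym:
                total += min(length, i - start)
        return total

bwt = "GGGG$AAAACCCCTTTT"                   # toy BWT with long runs
rr = RunRank(bwt)
print(len(rr.runs), rr.rank("A", 9))        # 5 runs; 4 'A's among the first 9 chars
```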


Subjects
Data Compression, Metagenomics, Data Compression/methods, Metagenomics/methods, Software, Genome, Microbial, Genome, Bacterial, Sequence Analysis, DNA/methods
10.
J Proteome Res ; 23(5): 1702-1712, 2024 May 03.
Article in English | MEDLINE | ID: mdl-38640356

ABSTRACT

Several lossy compressors have achieved superior compression rates for mass spectrometry (MS) data at the cost of storage precision. Currently, the impacts of precision losses on MS data processing have not been thoroughly evaluated, which is critical for the future development of lossy compressors. We first evaluated different storage precisions (32 bit and 64 bit) in lossless mzML files. We then applied 10 truncation transformations to generate precision-lossy files: five relative errors for intensities and five absolute errors for m/z values. MZmine3 and XCMS were used for feature detection and GNPS for compound annotation. Lastly, we compared Precision, Recall, F1-score, and file sizes between lossy files and lossless files under different conditions. Overall, we revealed that the discrepancy between 32 bit and 64 bit precision was under 1%. We proposed an absolute m/z error of 10⁻⁴ and a relative intensity error of 2 × 10⁻², adhering to a 5% error threshold (F1-scores above 95%). For a stricter 1% error threshold (F1-scores above 99%), an absolute m/z error of 2 × 10⁻⁵ and a relative intensity error of 2 × 10⁻³ were advised. This guidance aims to help researchers improve lossy compression algorithms and minimize the negative effects of precision losses on downstream data processing.
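[Editorial note] To make the truncation transformations concrete, a minimal sketch (an illustration, not the tools used in the study) of snapping m/z values to an absolute-error grid and intensities to a relative-error grid could look like this, using the 5%-threshold values quoted above:

```python
import numpy as np

def truncate_absolute(values, abs_err):
    """Snap values to a grid of spacing `abs_err` (worst-case error abs_err / 2)."""
    return np.round(values / abs_err) * abs_err

def truncate_relative(values, rel_err):
    """Quantize positive values in the log domain so the relative error stays
    near or below rel_err / 2."""
    step = np.log1p(rel_err)
    return np.exp(np.round(np.log(values) / step) * step)

mz = np.array([300.123456, 450.987654, 799.000111])            # hypothetical peaks
intensity = np.array([1.23e5, 4.56e4, 7.89e3])

mz_lossy = truncate_absolute(mz, abs_err=1e-4)                  # 5% guidance above
intensity_lossy = truncate_relative(intensity, rel_err=2e-2)
print(np.max(np.abs(mz_lossy - mz)))                            # <= 5e-5
print(np.max(np.abs(intensity_lossy - intensity) / intensity))  # about 1e-2
```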


Subjects
Data Compression, Mass Spectrometry, Metabolomics, Mass Spectrometry/methods, Metabolomics/methods, Metabolomics/statistics & numerical data, Data Compression/methods, Software, Humans, Algorithms
11.
IEEE Trans Image Process ; 33: 2502-2513, 2024.
Article in English | MEDLINE | ID: mdl-38526904

ABSTRACT

Residual coding has gained prevalence in lossless compression, where a lossy layer is initially employed and the reconstruction errors (i.e., residues) are then losslessly compressed. The underlying principle of the residual coding revolves around the exploration of priors based on context modeling. Herein, we propose a residual coding framework for 3D medical images, involving the off-the-shelf video codec as the lossy layer and a Bilateral Context Modeling based Network (BCM-Net) as the residual layer. The BCM-Net is proposed to achieve efficient lossless compression of residues through exploring intra-slice and inter-slice bilateral contexts. In particular, a symmetry-based intra-slice context extraction (SICE) module is proposed to mine bilateral intra-slice correlations rooted in the inherent anatomical symmetry of 3D medical images. Moreover, a bi-directional inter-slice context extraction (BICE) module is designed to explore bilateral inter-slice correlations from bi-directional references, thereby yielding representative inter-slice context. Experiments on popular 3D medical image datasets demonstrate that the proposed method can outperform existing state-of-the-art methods owing to efficient redundancy reduction. Our code will be available on GitHub for future research.
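[Editorial note] The residual-coding principle itself (a lossy base layer plus losslessly compressed residues) is easy to state in code; the sketch below uses coarse quantization and zlib purely as stand-ins, and is not BCM-Net or the video codec used in the paper.

```python
import numpy as np
import zlib

def lossy_layer(volume, step=8):
    """Stand-in lossy layer: coarse quantization of an 8-bit volume."""
    return (volume // step) * step

def encode(volume, step=8):
    recon = lossy_layer(volume, step)
    residual = volume.astype(np.int16) - recon.astype(np.int16)
    return recon, zlib.compress(residual.tobytes(), level=9), residual.shape

def decode(recon, payload, shape):
    residual = np.frombuffer(zlib.decompress(payload), dtype=np.int16).reshape(shape)
    return (recon.astype(np.int16) + residual).astype(np.uint8)

volume = np.random.randint(0, 256, size=(16, 64, 64), dtype=np.uint8)  # toy 3D scan
recon, payload, shape = encode(volume)
assert np.array_equal(decode(recon, payload, shape), volume)           # lossless overall
```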


Subjects
Data Compression, Data Compression/methods, Imaging, Three-Dimensional/methods
12.
Sci Rep ; 14(1): 6209, 2024 03 14.
Article in English | MEDLINE | ID: mdl-38485967

ABSTRACT

Efficient and rapid auxiliary diagnosis of different grades of lung adenocarcinoma helps doctors accelerate individualized diagnosis and treatment, thus improving patient prognosis. Pathological images of lung adenocarcinoma tissue at different grades often exhibit large intra-class differences and small inter-class differences. When attention mechanisms such as Coordinate Attention (CA) are applied directly to lung adenocarcinoma grading tasks, they tend to compress feature information excessively and overlook information dependencies within the same dimension. Therefore, we propose a Dimension Information Embedding Attention Network (DIEANet) for the task of lung adenocarcinoma grading. Specifically, we combine different pooling methods to automatically select local regions of key growth patterns such as lung adenocarcinoma cells, enhancing the model's focus on local information. Additionally, we employ an interactive fusion approach to concentrate feature information within the same dimension and across dimensions, thereby improving model performance. Extensive experiments have shown that, at equal computational expense, DIEANet with a ResNet34 backbone reaches an accuracy of 88.19%, with an AUC of 96.61%, MCC of 81.71%, and Kappa of 81.16%. Compared to seven other attention mechanisms, it achieves state-of-the-art objective metrics, and it aligns more closely with the visual attention of pathology experts under subjective visual assessment.


Subjects
Adenocarcinoma of Lung, Adenocarcinoma, Data Compression, Lung Neoplasms, Humans, Benchmarking, Lung Neoplasms/diagnosis
13.
Nat Commun ; 15(1): 2376, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38491032

ABSTRACT

Despite the growing interest in archiving information in synthetic DNA to confront data explosion, quantitatively querying the data stored in DNA is still a challenge. Herein, we present Search Enabled by Enzymatic Keyword Recognition (SEEKER), which utilizes CRISPR-Cas12a to rapidly generate visible fluorescence when a DNA target corresponding to the keyword of interest is present. SEEKER achieves quantitative text searching, since the growth rate of fluorescence intensity is proportional to keyword frequency. Compatible with SEEKER, we develop non-collision grouping coding, which reduces the size of the dictionary and enables lossless compression without disrupting the original order of texts. Using four queries, we correctly identify keywords in 40 files against a background of ~8000 irrelevant terms. Parallel searching with SEEKER can be performed on a 3D-printed microfluidic chip. Overall, SEEKER provides a quantitative approach to conducting parallel searching over the complete content stored in DNA with simple implementation and rapid result generation.


Subjects
Data Compression, Search Engine
14.
Neural Netw ; 174: 106250, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38531122

ABSTRACT

Snapshot compressive hyperspectral imaging necessitates the reconstruction of a complete hyperspectral image from its compressive snapshot measurement, presenting a challenging inverse problem. This paper proposes an enhanced deep unrolling neural network, called EDUNet, to tackle this problem. EDUNet is constructed by deep unrolling of a proximal gradient descent algorithm and introduces two innovative modules, one for the gradient-driven update and one for the proximal mapping. The gradient-driven update module leverages a memory-assisted descent approach inspired by momentum-based acceleration techniques, enhancing the unrolled reconstruction process and improving convergence. The proximal mapping is modeled by a sub-network with cross-stage spectral self-attention, which effectively exploits the inherent self-similarities present in hyperspectral images along the spectral axis. It also enhances feature flow throughout the network, contributing to a gain in reconstruction performance. Furthermore, we introduce a spectral geometry consistency loss, encouraging EDUNet to prioritize the geometric layouts of spectral curves, leading to a more precise capture of spectral information in hyperspectral images. Experiments are conducted using three benchmark datasets, KAIST, ICVL, and Harvard, along with some real data, comprising a total of 73 samples. The experimental results demonstrate that EDUNet outperforms 15 competing models across four metrics: PSNR, SSIM, SAM, and ERGAS.
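[Editorial note] For orientation, the classical proximal gradient (ISTA) iteration that networks like EDUNet unroll looks as follows; this sketch uses an analytic soft-threshold prox and a random sensing matrix, whereas the paper learns the proximal mapping with a sub-network, so everything here is illustrative.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista(y, A, lam=0.05, iters=200):
    """Proximal gradient descent for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)                  # gradient-driven update
        x = soft_threshold(x - step * grad, step * lam)   # proximal mapping
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256)) / np.sqrt(64)  # toy compressive measurement operator
x_true = np.zeros(256)
x_true[rng.choice(256, 8, replace=False)] = rng.standard_normal(8)
x_hat = ista(A @ x_true, A)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small relative error
```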


Subjects
Data Compression, Hyperspectral Imaging, Physical Phenomena, Algorithms, Motion (Physics)
15.
Neural Netw ; 174: 106220, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38447427

ABSTRACT

Structured pruning is a representative model compression technique for convolutional neural networks (CNNs), aiming to prune some of the less important filters or channels of a CNN. Most recent structured pruning methods establish criteria to measure the importance of filters, mainly based on the magnitude of weights or other parameters in the CNN. However, these judgment criteria lack explainability, and it is insufficient to rely on the numerical values of network parameters alone to assess the relationship between a channel and model performance. Moreover, directly utilizing these pruning criteria for global pruning may lead to suboptimal solutions; therefore, it is necessary to complement them with search algorithms that determine the pruning ratio for each layer. To address these issues, we propose ARPruning (Attention-map-based Ranking Pruning), which reconstructs a new pruning criterion for the importance of intra-layer channels and further develops a new local neighborhood search algorithm for determining the optimal inter-layer pruning ratio. To measure the relationship between the channel to be pruned and model performance, we construct an intra-layer channel importance criterion by considering the attention map for each layer. We then propose an automatic pruning strategy search method that finds the optimal solution effectively and efficiently. By integrating the well-designed pruning criteria and search strategy, ARPruning not only maintains a high compression rate but also achieves outstanding accuracy. Our experiments also show that, compared with state-of-the-art pruning methods, ARPruning achieves better compression results. The code can be obtained at https://github.com/dozingLee/ARPruning.
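[Editorial note] A stripped-down view of structured filter pruning is sketched below; the importance score here is plain mean activation magnitude, a deliberate simplification of ARPruning's attention-map-based criterion, and all names and the 0.5 ratio are assumptions.

```python
import numpy as np

def channel_scores(feature_maps):
    """Score each channel of (N, C, H, W) activations by mean magnitude."""
    return np.mean(np.abs(feature_maps), axis=(0, 2, 3))

def prune_filters(conv_weights, scores, ratio=0.5):
    """Keep the highest-scoring output filters of a (C_out, C_in, k, k) layer."""
    keep = max(1, int(round(len(scores) * (1.0 - ratio))))
    kept = np.sort(np.argsort(scores)[::-1][:keep])
    return conv_weights[kept], kept

acts = np.random.randn(8, 64, 32, 32)          # toy activations from a conv layer
weights = np.random.randn(64, 32, 3, 3)        # the 64 filters that produced them
pruned, kept = prune_filters(weights, channel_scores(acts), ratio=0.5)
print(weights.shape, "->", pruned.shape)        # (64, 32, 3, 3) -> (32, 32, 3, 3)
```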


Subjects
Algorithms, Data Compression, Neural Networks, Computer
16.
Sci Rep ; 14(1): 5087, 2024 03 01.
Article in English | MEDLINE | ID: mdl-38429300

ABSTRACT

When EEG signals are collected at the rates dictated by the Nyquist theorem, long-term recordings produce a large amount of data, while limited bandwidth, end-to-end delay, and memory space put great pressure on effective data transmission. Compressed sensing alleviates this transmission pressure. However, iterative compressed sensing reconstruction algorithms for EEG signals involve complex calculations and slow data processing, limiting the application of compressed sensing in rapid EEG monitoring systems. This paper therefore presents a fast, non-iterative algorithm for reconstructing EEG signals using compressed sensing and deep learning techniques. The algorithm uses an improved residual network model, extracts the feature information of the EEG signal with one-dimensional dilated convolutions, and directly learns the nonlinear mapping between the measurements and the original signal, so it can reconstruct the EEG signal quickly and accurately. The proposed method was verified by simulation on the open BCI contest dataset. Overall, it achieves higher reconstruction accuracy and faster reconstruction than both traditional CS reconstruction algorithms and existing deep learning reconstruction algorithms, enabling rapid reconstruction of EEG signals.


Subjects
Data Compression, Deep Learning, Signal Processing, Computer-Assisted, Data Compression/methods, Algorithms, Electroencephalography/methods
17.
PLoS One ; 19(3): e0297154, 2024.
Article in English | MEDLINE | ID: mdl-38446783

ABSTRACT

This study introduces a novel concrete-filled tube (CFT) column system featuring a steel tube composed of four internal triangular units (ITUs). The incorporation of these internal triangular units reduces the width-thickness ratio of the steel tube and augments the effective confinement area of the infilled concrete. This design enhancement is anticipated to result in improved structural strength and ductility, contributing to enhanced overall performance and sustainability. To assess the effectiveness of the newly proposed column system, a full-scale test was conducted on five square steel tube column specimens subjected to axial compression. Among these specimens, two adhered to the conventional steel tube column design, while the remaining three featured the new CFT columns with internal triangular units. The shape of the CFT column, the presence of infilled concrete, and the presence of openings in the ITUs were considered as test parameters. The test results reveal that the ductility of the newly proposed CFT column system exhibited a minimum 30% improvement over the conventional CFT column. In addition, the initial stiffness and axial compressive strength of the new system were found to be comparable to those of the conventional CFT column.


Subjects
Data Compression, Compressive Strength, Physical Phenomena, Steel, Tensile Strength
18.
Sci Rep ; 14(1): 5168, 2024 03 02.
Article in English | MEDLINE | ID: mdl-38431641

ABSTRACT

Magnetic resonance imaging is a medical imaging technique used to create comprehensive images of the tissues and organs in the body. This study presents an advanced approach for storing and compressing Neuroimaging Informatics Technology Initiative (NIfTI) files, a standard format in magnetic resonance imaging. It is designed to enhance telemedicine services by facilitating efficient and high-quality communication between healthcare practitioners and patients. The proposed downsampling approach begins by opening the NIfTI file as volumetric data and splitting it into several slice images. A quantization hiding technique is then applied to each pair of consecutive slice images to generate a stego slice of the same size. This involves the following major steps: normalization, microblock generation, and discrete cosine transformation. Finally, the resultant stego slice images are assembled to produce the final NIfTI file as volumetric data. The upsampling process, designed to be completely blind, reverses the downsampling steps to reconstruct the subsequent image slice accurately. The efficacy of the proposed method was evaluated on a magnetic resonance imaging dataset, with peak signal-to-noise ratio, signal-to-noise ratio, structural similarity index, and entropy as key performance metrics. The results demonstrate that the proposed approach not only significantly reduces file sizes but also maintains high image quality.
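[Editorial note] As a rough, hypothetical illustration of two of the ingredients mentioned above (blockwise discrete cosine transformation and the PSNR quality metric), rather than the paper's actual quantization-hiding pipeline:

```python
import numpy as np
from scipy.fft import dctn

def block_dct2(image, block=8):
    """Orthonormal 2D DCT applied to each non-overlapping block x block tile;
    the block size is an assumption, not the paper's microblock setting."""
    out = np.zeros(image.shape, dtype=np.float64)
    for i in range(0, image.shape[0] - image.shape[0] % block, block):
        for j in range(0, image.shape[1] - image.shape[1] % block, block):
            tile = image[i:i + block, j:j + block].astype(np.float64)
            out[i:i + block, j:j + block] = dctn(tile, norm="ortho")
    return out

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio between two images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

slice_img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # toy MR slice
coeffs = block_dct2(slice_img)
print(coeffs.shape, psnr(slice_img, slice_img))
```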


Subjects
Data Compression, Telemedicine, Humans, Data Compression/methods, Magnetic Resonance Imaging/methods, Neuroimaging, Signal-To-Noise Ratio
19.
BMC Genomics ; 25(1): 266, 2024 Mar 09.
Article in English | MEDLINE | ID: mdl-38461245

ABSTRACT

BACKGROUND: DNA storage has the advantages of large capacity, long-term stability, and low power consumption relative to other storage media, making it a promising new storage medium for multimedia information such as images. However, DNA storage suffers from a low coding density and weak error-correction ability. RESULTS: To achieve more efficient DNA storage image reconstruction, we propose DNA-QLC (QRes-VAE and Levenshtein code (LC)), which uses the quantized ResNet VAE (QRes-VAE) model and LC for image compression and DNA sequence error correction, thus improving both the coding density and the error-correction ability. Experimental results show that the DNA-QLC encoding method can not only obtain DNA sequences that meet the combinatorial constraints, but also achieve a net information density 2.4 times higher than that of DNA Fountain. Furthermore, at a higher error rate (2%), DNA-QLC achieved image reconstruction with an SSIM value of 0.917. CONCLUSIONS: The results indicate that the DNA-QLC encoding scheme guarantees the efficiency and reliability of the DNA storage system and improves the application potential of DNA storage for multimedia information such as images.
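[Editorial note] For reference only: the Levenshtein (edit) distance that the LC error-correction layer is built around counts substitutions, insertions, and deletions between sequences; a textbook dynamic-programming computation (not the paper's decoder) is shown below.

```python
def levenshtein(a, b):
    """Edit distance between two DNA strings (substitution, insertion, deletion)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                # deletion from a
                           cur[j - 1] + 1,             # insertion into a
                           prev[j - 1] + (ca != cb)))  # match or substitution
        prev = cur
    return prev[-1]

print(levenshtein("ACGTACGT", "ACGTTACGA"))   # 2: one insertion plus one substitution
```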


Subjects
Algorithms, Data Compression, Reproducibility of Results, DNA/genetics, Data Compression/methods, Image Processing, Computer-Assisted/methods
20.
Bioinformatics ; 40(4)2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38530800

ABSTRACT

MOTIVATION: The full automation of digital neuronal reconstruction from light microscopic images has long been impeded by noisy neuronal images. Previous endeavors to improve image quality have struggled to strike a good compromise between robustness and computational efficiency. RESULTS: We present the image enhancement pipeline named Neuronal Image Enhancement through Noise Disentanglement (NIEND). Through extensive benchmarking on 863 mouse neuronal images with manually annotated gold standards, NIEND achieves remarkable improvements in image quality, such as signal-background contrast (40-fold) and background uniformity (10-fold), compared to raw images. Furthermore, automatic reconstructions on NIEND-enhanced images show significant improvements compared to both raw images and images enhanced using other methods. Specifically, the average F1 score of NIEND-enhanced reconstructions is 0.88, surpassing the original 0.78 and the second-ranking method, which achieved 0.84. Up to 52% of reconstructions from NIEND-enhanced images outperform all four other methods in F1 scores. In addition, NIEND requires only 1.6 s on average to process 256 × 256 × 256-sized images, and images after NIEND attain a substantial average compression rate of 1% by LZMA. NIEND improves image quality and neuron reconstruction, providing potential for significant advancements in automated neuron morphology reconstruction at the petascale. AVAILABILITY AND IMPLEMENTATION: The study is conducted based on Vaa3D and Python 3.10. Vaa3D is available on GitHub (https://github.com/Vaa3D). The proposed NIEND method is implemented in Python and hosted on GitHub along with the testing code and data (https://github.com/zzhmark/NIEND). The raw neuronal images of mouse brains can be found at the BICCN's Brain Image Library (BIL) (https://www.brainimagelibrary.org). The detailed list and associated meta information are summarized in Supplementary Table S3.
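[Editorial note] The LZMA compression rate quoted above can be reproduced on any array with the Python standard library; the toy volume here is an assumption and only illustrates why background-suppressed images compress so well.

```python
import lzma
import numpy as np

def lzma_compression_rate(volume):
    """Compressed size divided by raw size (lower means better compression)."""
    raw = volume.tobytes()
    return len(lzma.compress(raw, preset=6)) / len(raw)

vol = np.zeros((256, 256, 256), dtype=np.uint8)     # mostly empty 256^3 volume
vol[100:120, 100:120, 100:120] = 200                # small bright neuron-like region
print(f"compression rate: {lzma_compression_rate(vol):.4%}")
```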


Subjects
Data Compression, Neurons, Animals, Mice, Tomography, X-Ray Computed/methods, Image Enhancement, Brain, Image Processing, Computer-Assisted/methods