Results 1 - 20 of 2,574
1.
Gigascience ; 13, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-39028587

ABSTRACT

BACKGROUND: With the rise of large-scale genome sequencing projects, genotyping of thousands of samples has produced immense variant call format (VCF) files. It is becoming increasingly challenging to store, transfer, and analyze these voluminous files. Compression methods have been used to tackle these issues, aiming for both a high compression ratio and fast random access. However, existing methods have not yet achieved a satisfactory compromise between these 2 objectives. FINDINGS: To address this issue, we introduce GSC (Genotype Sparse Compression), a specialized and refined lossless compression tool for VCF files. In benchmark tests conducted across various open-source datasets, GSC showcased exceptional performance in genotype data compression. Compared with the industry's most advanced tools, GSC achieved compression ratios 26.9% to 82.4% higher than GBC and GTC, respectively, on those datasets. In lossless compression scenarios, GSC also demonstrated robust performance, with compression ratios 1.5× to 6.5× greater than general-purpose tools such as gzip, zstd, and BCFtools (a mode not supported by either GBC or GTC). Achieving such high compression ratios did require some reasonable trade-offs, including longer decompression times: GSC is 1.2× to 2× slower than GBC, yet 1.1× to 1.4× faster than GTC. Moreover, GSC maintained decompression query speeds equivalent to its competitors, and it outperformed both counterparts in RAM usage. Overall, GSC's comprehensive performance surpasses that of the most advanced existing tools. CONCLUSION: GSC balances high compression ratios with rapid data access, enhancing genomic data management. It also supports seamless conversion to the PLINK binary format, simplifying downstream analysis.
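The abstract does not describe GSC's internal format, so the following is only a generic illustration of the intuition behind sparse genotype compression: in population-scale VCFs most genotypes equal the reference, so storing just the coordinates and values of non-reference calls already shrinks the matrix before any entropy coding. The encoding below is a minimal sketch, not the GSC format.

```python
import numpy as np

# Illustrative sketch only: a generic sparse encoding of a genotype matrix
# (samples x variants, 0 = homozygous reference). This is NOT the GSC format,
# just the observation that non-reference calls are rare and worth indexing.
def sparse_encode(genotypes: np.ndarray):
    rows, cols = np.nonzero(genotypes)        # coordinates of non-reference calls
    values = genotypes[rows, cols]            # 1 = het, 2 = hom-alt, etc.
    return genotypes.shape, rows, cols, values

def sparse_decode(shape, rows, cols, values):
    out = np.zeros(shape, dtype=np.uint8)
    out[rows, cols] = values
    return out

gt = np.zeros((1000, 500), dtype=np.uint8)    # mostly reference genotypes
gt[np.random.rand(1000, 500) < 0.01] = 1      # ~1% heterozygous calls
shape, r, c, v = sparse_encode(gt)
assert np.array_equal(sparse_decode(shape, r, c, v), gt)
```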


Subject(s)
Data Compression , Software , Data Compression/methods , Humans , Genotype , Computational Biology/methods , Algorithms , High-Throughput Nucleotide Sequencing/methods
2.
Hum Brain Mapp ; 45(11): e26795, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39045881

ABSTRACT

The architecture of the brain is too complex to be intuitively surveyable without the use of compressed representations that project its variation into a compact, navigable space. The task is especially challenging with high-dimensional data, such as gene expression, where the joint complexity of anatomical and transcriptional patterns demands maximum compression. The established practice is to use standard principal component analysis (PCA), whose computational felicity is offset by limited expressivity, especially at great compression ratios. Employing whole-brain, voxel-wise Allen Brain Atlas transcription data, here we systematically compare compressed representations based on the most widely supported linear and non-linear methods (PCA, kernel PCA, non-negative matrix factorisation (NMF), t-distributed stochastic neighbour embedding (t-SNE), uniform manifold approximation and projection (UMAP), and deep auto-encoding), quantifying reconstruction fidelity, anatomical coherence, and predictive utility across signalling, microstructural, and metabolic targets drawn from large-scale open-source MRI and PET data. We show that deep auto-encoders yield superior representations across all metrics of performance and target domains, supporting their use as the reference standard for representing transcription patterns in the human brain.
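For readers unfamiliar with the baseline being compared against, the sketch below shows the standard PCA compress-and-reconstruct loop using scikit-learn. The data here is synthetic and merely stands in for a voxel-by-gene transcription matrix; the component count, array sizes, and metric are illustrative assumptions, not the study's settings.

```python
import numpy as np
from sklearn.decomposition import PCA

# Minimal sketch of the PCA baseline: compress voxel-wise expression vectors to a
# low-dimensional code and measure reconstruction fidelity. Synthetic data stands
# in for the Allen Brain Atlas transcription matrix used in the study.
X = np.random.rand(2000, 500)          # 2000 "voxels" x 500 "genes"

pca = PCA(n_components=16)             # compression of 500 features to 16 components
codes = pca.fit_transform(X)           # compact representation
X_hat = pca.inverse_transform(codes)   # reconstruction from the code

mse = np.mean((X - X_hat) ** 2)
print(f"reconstruction MSE at 16 components: {mse:.4f}")
```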


Subject(s)
Brain , Magnetic Resonance Imaging , Transcription, Genetic , Humans , Brain/diagnostic imaging , Brain/metabolism , Transcription, Genetic/physiology , Positron-Emission Tomography , Image Processing, Computer-Assisted/methods , Principal Component Analysis , Data Compression/methods , Atlases as Topic
3.
Bioinformatics ; 40(7)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38984796

ABSTRACT

MOTIVATION: The introduction of DeepMind's AlphaFold 2 enabled the prediction of protein structures at an unprecedented scale. The AlphaFold Protein Structure Database and the ESM Metagenomic Atlas contain hundreds of millions of structures stored in CIF and/or PDB formats. When compressed with a general-purpose utility like gzip, this translates to tens of terabytes of data, which hinders the effective use of predicted structures in large-scale analyses. RESULTS: Here, we present ProteStAr, a compressor dedicated to CIF/PDB, as well as supplementary PAE files. Its main contribution is a novel approach to predicting atom coordinates on the basis of the previously analyzed atoms. This allows efficient encoding of the coordinates, the largest component of the protein structure files. The compression is lossless by default, though a lossy mode with a controlled maximum coordinate-reconstruction error is also available. Compared to the competing packages, i.e., BinaryCIF, Foldcomp, and PDC, our approach offers a superior compression ratio at established reconstruction accuracy. By the efficient use of threads at both the compression and decompression stages, the algorithm takes advantage of the multicore architecture of current central processing units and operates at speeds of about 1 GB/s. The presence of Python and C++ APIs further increases the usability of the presented method. AVAILABILITY AND IMPLEMENTATION: The source code of ProteStAr is available at https://github.com/refresh-bio/protestar.
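The core idea stated in the abstract, predicting each atom's coordinates from previously seen atoms and encoding only the residual, can be illustrated with a much simpler predictor than ProteStAr's. The sketch below uses the previous reconstructed atom as the prediction and quantizes the residual; the step size and data are assumptions for illustration only.

```python
import numpy as np

# Simplified sketch of predictive coordinate coding: predict each atom from the
# previously reconstructed one and store only the quantized residual. ProteStAr's
# real predictor is more elaborate; this only shows why residuals compress better
# than raw coordinates.
def encode(coords: np.ndarray, step: float = 0.001):
    residuals = np.empty(coords.shape, dtype=np.int64)
    prev = np.zeros(3)
    for i, c in enumerate(coords):
        r = np.round((c - prev) / step).astype(np.int64)
        residuals[i] = r
        prev = prev + r * step        # predict from the reconstructed atom, so error never accumulates
    return residuals

def decode(residuals: np.ndarray, step: float = 0.001):
    return np.cumsum(residuals * step, axis=0)

xyz = np.cumsum(np.random.randn(100, 3) * 1.5, axis=0)   # synthetic backbone-like trace
rec = decode(encode(xyz))
print("max reconstruction error:", np.abs(rec - xyz).max())   # bounded by step / 2
```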


Subject(s)
Algorithms , Databases, Protein , Proteins , Software , Proteins/chemistry , Protein Conformation , Data Compression/methods , Computational Biology/methods
4.
Sci Rep ; 14(1): 17162, 2024 Jul 26.
Article in English | MEDLINE | ID: mdl-39060441

ABSTRACT

Cardiac monitoring systems in Internet of Things (IoT) healthcare, reliant on limited battery and computational capacity, need efficient local processing and wireless transmission for comprehensive analysis. Due to the power-intensive wireless transmission in IoT devices, ECG signal compression is essential to minimize data transfer. This paper presents a real-time, low-complexity algorithm for compressing electrocardiogram (ECG) signals. The algorithm uses just nine arithmetic operations per ECG sample point, generating a hybrid Pulse Width Modulation (PWM) signal storable in a compact 4-bit resolution format. Despite its simplicity, it performs comparably to existing methods in terms of Percentage Root-Mean-Square Difference (PRD) and space-saving while significantly reducing complexity and maintaining robustness against signal noise. It achieves an average Bit Compression Ratio (BCR) of 4 and space savings of 90.4% for ECG signals in the MIT-BIH database, with a PRD of 0.33% and a Quality Score (QS) of 12. The reconstructed signal shows no adverse effects on QRS complex detection and heart rate variability, preserving both the signal amplitude and periodicity. This efficient method for transferring ECG data from wearable devices enables real-time cardiac activity monitoring with reduced data storage requirements. Its versatility suggests potential broader applications, extending to compression of various signal types beyond ECG.
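The paper's hybrid PWM encoder is not specified in the abstract beyond its operation count and 4-bit output, so the following is a generic low-complexity stand-in (first difference, scaled and clipped to a signed 4-bit code) meant only to convey the flavor of sample-wise, cheap ECG compression. The scale factor and toy signal are assumptions.

```python
import numpy as np

# Generic low-complexity stand-in for sample-wise ECG compression: first difference,
# scaled and clipped to a signed 4-bit code. This is NOT the paper's hybrid-PWM
# algorithm, only an illustration of encoding each sample with a handful of operations.
def encode(ecg: np.ndarray, scale: float = 1000.0):
    codes = np.empty(len(ecg), dtype=np.int8)
    prev = 0.0
    for i, x in enumerate(ecg):
        q = int(np.clip(round((x - prev) * scale), -8, 7))   # fits in 4 bits
        codes[i] = q
        prev += q / scale                                    # track the reconstructed value
    return codes

def decode(codes: np.ndarray, scale: float = 1000.0):
    return np.cumsum(codes / scale)

t = np.linspace(0, 2, 720)
ecg = 0.1 * np.sin(2 * np.pi * 1.2 * t)                      # toy quasi-periodic signal
rec = decode(encode(ecg))
prd = 100 * np.linalg.norm(ecg - rec) / np.linalg.norm(ecg)  # percentage RMS difference
print(f"PRD: {prd:.2f}%")
```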


Subject(s)
Algorithms , Data Compression , Electrocardiography , Signal Processing, Computer-Assisted , Electrocardiography/methods , Electrocardiography/instrumentation , Humans , Data Compression/methods , Heart Rate/physiology , Monitoring, Physiologic/methods , Monitoring, Physiologic/instrumentation
5.
Proc Natl Acad Sci U S A ; 121(28): e2320870121, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38959033

ABSTRACT

Efficient storage and sharing of massive biomedical data would open up their wide accessibility to different institutions and disciplines. However, compressors tailored for natural photos/videos quickly reach their limits on biomedical data, while emerging deep learning-based methods demand huge training data and are difficult to generalize. Here, we propose to conduct Biomedical data compRession with Implicit nEural Function (BRIEF) by representing the target data with compact neural networks, which are data specific and thus have no generalization issues. Benefiting from the strong representation capability of implicit neural functions, BRIEF achieves 2 to 3 orders of magnitude compression on diverse biomedical data at significantly higher fidelity than existing techniques. Besides, BRIEF delivers consistent performance across the whole data volume and supports customized, spatially varying fidelity. BRIEF's multifold advantages also serve reliable downstream tasks at low bandwidth. Our approach will facilitate low-bandwidth data sharing and promote collaboration and progress in the biomedical field.
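BRIEF's architecture is not given in the abstract; the sketch below shows the generic implicit-neural-representation idea with a plain PyTorch MLP: fit a small coordinate network to one image so that the network weights become the compressed representation. The network size, activation choice, and synthetic data are assumptions, not the authors' design.

```python
import torch

# Minimal coordinate-network sketch (not the BRIEF architecture): a small MLP maps
# (x, y) coordinates to intensity; after fitting, the weights ARE the compressed data.
class CoordMLP(torch.nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 1),
        )

    def forward(self, xy):
        return self.net(xy)

img = torch.rand(32, 32)                                   # stand-in for a biomedical slice
ys, xs = torch.meshgrid(torch.linspace(-1, 1, 32),
                        torch.linspace(-1, 1, 32), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
target = img.reshape(-1, 1)

model = CoordMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(500):                                       # overfit the single target on purpose
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(coords), target)
    loss.backward()
    opt.step()
print("fit MSE:", loss.item())
```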


Subject(s)
Information Dissemination , Neural Networks, Computer , Humans , Information Dissemination/methods , Data Compression/methods , Deep Learning , Biomedical Research/methods
6.
Magn Reson Med ; 92(4): 1363-1375, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38860514

ABSTRACT

PURPOSE: Hyperpolarized 129Xe MRI benefits from non-Cartesian acquisitions that sample k-space efficiently and rapidly. However, their reconstructions are complex and burdened by decay processes unique to hyperpolarized gas. Currently used gridded reconstructions are prone to artifacts caused by magnetization decay and are ill-suited for undersampling. We present a compressed sensing (CS) reconstruction approach that incorporates magnetization decay in the forward model, thereby producing images with increased sharpness and contrast, even in undersampled data. METHODS: Radio-frequency, T1, and T2* decay processes were incorporated into the forward model and solved using iterative methods including CS. The decay-modeled reconstruction was validated in simulations and then tested in 2D/3D-spiral ventilation and 3D-radial gas-exchange MRI. Quantitative metrics including apparent SNR and sharpness were compared between gridded, CS, and twofold undersampled CS reconstructions. Observations were validated in gas-exchange data collected from 15 healthy and 25 post-hematopoietic-stem-cell-transplant participants. RESULTS: CS reconstructions in simulations yielded images with threefold increases in accuracy. CS increased sharpness and contrast for in vivo ventilation imaging and showed greater accuracy for undersampled acquisitions. CS improved gas-exchange imaging, particularly in the dissolved phase, where apparent SNR improved and structure became discernible. Finally, CS showed repeatability in important global gas-exchange metrics, including the median dissolved-gas signal ratio and the median angle between real/imaginary components. CONCLUSION: A non-Cartesian CS reconstruction approach that incorporates hyperpolarized 129Xe decay processes is presented. This approach enables improved image sharpness, contrast, and overall image quality, in addition to up to threefold undersampling. This contribution benefits all hyperpolarized gas MRI through improved accuracy and decreased scan durations.


Subject(s)
Algorithms , Computer Simulation , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Xenon Isotopes , Magnetic Resonance Imaging/methods , Humans , Image Processing, Computer-Assisted/methods , Male , Signal-To-Noise Ratio , Female , Imaging, Three-Dimensional/methods , Adult , Phantoms, Imaging , Artifacts , Data Compression/methods , Reproducibility of Results , Lung/diagnostic imaging , Contrast Media/chemistry
7.
Bioinformatics ; 40(5)2024 May 02.
Article in English | MEDLINE | ID: mdl-38759114

ABSTRACT

MOTIVATION: The quality scores data (QSD) account for 70% of compressed FastQ files obtained from short- and long-read sequencing technologies. Designing effective compressors for QSD that balance compression ratio, time cost, and memory consumption is essential in scenarios such as large-scale genomics data sharing and long-term data backup. This study presents a novel parallel lossless QSD-dedicated compression algorithm named PQSDC, which fulfills the above requirements well. PQSDC is based on two core components: a parallel sequences-partition model designed to reduce peak memory consumption and time cost during compression and decompression, and a parallel four-level run-length prediction mapping model to enhance the compression ratio. Besides, the PQSDC algorithm is designed to be highly concurrent on multicore CPU clusters. RESULTS: We evaluate PQSDC and four state-of-the-art compression algorithms on 27 real-world datasets, comprising 61.857 billion QSD characters and 632.908 million QSD sequences. (1) For short reads, compared to the baselines, the maximum improvement of PQSDC reaches 7.06% in average compression ratio and 8.01% in weighted average compression ratio. During compression and decompression, the maximum total time savings of PQSDC are 79.96% and 84.56%, respectively; the maximum average memory savings are 68.34% and 77.63%, respectively. (2) For long reads, the maximum improvement of PQSDC reaches 12.51% and 13.42% in average and weighted average compression ratio, respectively. The maximum total time savings during compression and decompression are 53.51% and 72.53%, respectively; the maximum average memory savings are 19.44% and 17.42%, respectively. (3) Furthermore, PQSDC ranks second in compression robustness among the tested algorithms, indicating that it is less affected by the probability distribution of the QSD collections. Overall, our work provides a promising solution for QSD parallel compression that balances storage cost, time consumption, and memory occupation well. AVAILABILITY AND IMPLEMENTATION: The proposed PQSDC compressor can be downloaded from https://github.com/fahaihi/PQSDC.
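PQSDC's four-level run-length prediction mapping is not detailed in the abstract; the sketch below only illustrates the basic run-length view of a quality string that makes QSD so compressible in the first place. The example string is a made-up Illumina-style quality line.

```python
from itertools import groupby

# Generic run-length mapping for quality score strings (QSD). PQSDC's four-level
# run-length prediction model is more involved; this only shows why long runs of
# identical quality symbols make QSD highly compressible.
def rle_encode(quality: str):
    return [(ch, sum(1 for _ in grp)) for ch, grp in groupby(quality)]

def rle_decode(runs):
    return "".join(ch * n for ch, n in runs)

q = "FFFFFFF:FFFFFF,FFFFFFFFFF"        # made-up Illumina-style quality string
runs = rle_encode(q)
assert rle_decode(runs) == q
print(runs)
```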


Subject(s)
Algorithms , Data Compression , Data Compression/methods , Genomics/methods , High-Throughput Nucleotide Sequencing/methods , Sequence Analysis, DNA/methods , Software , Humans
8.
Magn Reson Med ; 92(3): 1232-1247, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38748852

ABSTRACT

PURPOSE: We present SCAMPI (Sparsity Constrained Application of deep Magnetic resonance Priors for Image reconstruction), an untrained deep neural network for MRI reconstruction that requires no prior training on datasets. It expands the Deep Image Prior approach with a multidomain, sparsity-enforcing loss function to achieve higher image quality at a faster convergence speed than previously reported methods. METHODS: Two-dimensional MRI data from the FastMRI dataset with Cartesian undersampling in the phase-encoding direction were reconstructed at different acceleration rates for single-coil and multicoil data. RESULTS: The performance of our architecture was compared to state-of-the-art Compressed Sensing methods and ConvDecoder, another untrained neural network for two-dimensional MRI reconstruction. SCAMPI outperforms these by better reducing undersampling artifacts and yielding lower error metrics in multicoil imaging. In comparison to ConvDecoder, the U-Net architecture combined with an elaborate loss function allows for much faster convergence at higher image quality. SCAMPI can reconstruct multicoil data without explicit knowledge of coil sensitivity profiles. Moreover, it is a novel tool for reconstructing undersampled single-coil k-space data. CONCLUSION: Our approach avoids overfitting to dataset features that can occur in neural networks trained on databases, because the network parameters are tuned only on the reconstruction data. It allows better results and faster reconstruction than the baseline untrained neural network approach.


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Neural Networks, Computer , Magnetic Resonance Imaging/methods , Humans , Image Processing, Computer-Assisted/methods , Artifacts , Brain/diagnostic imaging , Data Compression/methods
9.
J Comput Biol ; 31(6): 524-538, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38820168

ABSTRACT

An essential task in computational genomics involves transforming input sequences into their constituent k-mers. The quest for an efficient representation of k-mer sets is crucial for enhancing the scalability of bioinformatic analyses. One widely used method involves converting the k-mer set into a de Bruijn graph (dBG), followed by seeking a compact graph representation via the smallest path cover. This study introduces USTAR (Unitig STitch Advanced constRuction), a tool designed to compress both a set of k-mers and their associated counts. USTAR leverages the connectivity and density of dBGs, enabling a more efficient path selection for constructing the path cover. The efficacy of USTAR is demonstrated through its application in compressing real read datasets. USTAR improves the compression achieved by UST (Unitig STitch), the best previous algorithm, by percentages ranging from 2.3% to 26.4%, depending on the k-mer size, and it is up to 7× faster.
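To make the k-mer-to-path-cover idea concrete, the sketch below extracts k-mers from reads and greedily stitches k-mers that overlap by k-1 characters into longer strings, a naive path cover. USTAR's actual path selection uses dBG connectivity and density and handles counts; the function names, reads, and k here are illustrative assumptions.

```python
# Naive sketch of stitching k-mers into unitig-like strings (a simple greedy path
# cover of the de Bruijn graph). USTAR's real path selection is smarter; this only
# illustrates the underlying representation being compressed.
def kmers(reads, k):
    ks = set()
    for r in reads:
        for i in range(len(r) - k + 1):
            ks.add(r[i:i + k])
    return ks

def stitch(kmer_set, k):
    remaining = set(kmer_set)
    paths = []
    while remaining:
        path = remaining.pop()
        while True:
            # extend to the right while exactly one unused successor k-mer exists
            succs = [path[-(k - 1):] + b for b in "ACGT" if path[-(k - 1):] + b in remaining]
            if len(succs) != 1:
                break
            remaining.remove(succs[0])
            path += succs[0][-1]
        paths.append(path)
    return paths

reads = ["ACGTACGTGG", "CGTACGTGGA"]
print(stitch(kmers(reads, 5), 5))
```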


Subject(s)
Algorithms , Data Compression , Genomics , Data Compression/methods , Genomics/methods , Software , Computational Biology/methods , Humans , Sequence Analysis, DNA/methods
10.
Commun Biol ; 7(1): 553, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38724695

ABSTRACT

For the last two decades, the amount of genomic data produced by scientific and medical applications has been growing at a rapid pace. To enable software solutions that analyze, process, and transmit these data in an efficient and interoperable way, ISO and IEC released the first version of the compression standard MPEG-G in 2019. However, non-proprietary implementations of the standard are not openly available so far, limiting fair scientific assessment of the standard and, therefore, hindering its broad adoption. In this paper, we present Genie, to the best of our knowledge the first open-source encoder that compresses genomic data according to the MPEG-G standard. We demonstrate that Genie reaches state-of-the-art compression ratios while offering interoperability with any other standard-compliant decoder independent from its manufacturer. Finally, the ISO/IEC ecosystem ensures the long-term sustainability and decodability of the compressed data through the ISO/IEC-supported reference decoder.


Subject(s)
Data Compression , Genomics , Software , Genomics/methods , Data Compression/methods , Humans
11.
J Neural Eng ; 21(3)2024 May 16.
Article in English | MEDLINE | ID: mdl-38718785

ABSTRACT

Objective. Recently, the demand for wearable devices using electroencephalography (EEG) has increased rapidly in many fields. Due to their volume and computation constraints, wearable devices usually compress and transmit EEG to external devices for analysis. However, current EEG compression algorithms are not tailor-made for wearable devices with limited computing and storage. Firstly, the huge number of parameters makes them difficult to apply in wearable devices; secondly, it is tricky to learn the distribution of EEG signals due to the low signal-to-noise ratio, which leads to excessive reconstruction error and suboptimal compression performance. Approach. Here, a feature-enhanced asymmetric encoding-decoding network is proposed. EEG is encoded with a lightweight model and subsequently decoded with a multi-level feature fusion network by extracting the encoded features deeply and reconstructing the signal through a two-branch structure. Main results. On public EEG datasets, motor imagery and event-related potentials, experimental results show that the proposed method achieves state-of-the-art compression performance. In addition, the neural representation analysis and the classification performance of the reconstructed EEG signals show that our method tends to retain more task-related information as the compression ratio increases and retains reliable discriminative information after EEG compression. Significance. This paper tailors an asymmetric EEG compression method for wearable devices that achieves state-of-the-art compression performance in a lightweight manner, paving the way for the application of EEG-based wearable devices.


Subject(s)
Data Compression , Electroencephalography , Electroencephalography/methods , Data Compression/methods , Humans , Wearable Electronic Devices , Neural Networks, Computer , Algorithms , Signal Processing, Computer-Assisted , Imagination/physiology
12.
Sci Rep ; 14(1): 10560, 2024 05 08.
Article in English | MEDLINE | ID: mdl-38720020

ABSTRACT

The research on video analytics especially in the area of human behavior recognition has become increasingly popular recently. It is widely applied in virtual reality, video surveillance, and video retrieval. With the advancement of deep learning algorithms and computer hardware, the conventional two-dimensional convolution technique for training video models has been replaced by three-dimensional convolution, which enables the extraction of spatio-temporal features. Specifically, the use of 3D convolution in human behavior recognition has been the subject of growing interest. However, the increased dimensionality has led to challenges such as the dramatic increase in the number of parameters, increased time complexity, and a strong dependence on GPUs for effective spatio-temporal feature extraction. The training speed can be considerably slow without the support of powerful GPU hardware. To address these issues, this study proposes an Adaptive Time Compression (ATC) module. Functioning as an independent component, ATC can be seamlessly integrated into existing architectures and achieves data compression by eliminating redundant frames within video data. The ATC module effectively reduces GPU computing load and time complexity with negligible loss of accuracy, thereby facilitating real-time human behavior recognition.
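The abstract describes ATC only as eliminating redundant frames before 3D convolution, so the following is a hedged, non-learned stand-in that drops frames whose mean absolute difference from the last kept frame falls below a threshold. The threshold and synthetic video are assumptions; ATC itself is a learned module integrated into the network.

```python
import numpy as np

# Hedged stand-in for adaptive temporal compression: keep a frame only when it
# differs enough from the last kept frame. The paper's ATC module is learned and
# sits inside a 3D-CNN pipeline; this sketch only mirrors the "drop redundant
# frames" idea at the raw-data level.
def drop_redundant_frames(frames: np.ndarray, threshold: float = 0.02):
    """frames: (T, H, W) array scaled to [0, 1]; returns indices of kept frames."""
    kept = [0]
    for t in range(1, len(frames)):
        if np.mean(np.abs(frames[t] - frames[kept[-1]])) >= threshold:
            kept.append(t)
    return kept

video = np.random.rand(16, 32, 32)
video[4:8] = video[3]                      # simulate a static (redundant) segment
print("kept frames:", drop_redundant_frames(video))
```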


Subject(s)
Algorithms , Data Compression , Video Recording , Humans , Data Compression/methods , Human Activities , Deep Learning , Image Processing, Computer-Assisted/methods , Pattern Recognition, Automated/methods
13.
Genome Biol ; 25(1): 106, 2024 04 25.
Article in English | MEDLINE | ID: mdl-38664753

ABSTRACT

Centrifuger is an efficient taxonomic classification method that compares sequencing reads against a microbial genome database. In Centrifuger, the Burrows-Wheeler transformed genome sequences are losslessly compressed using a novel scheme called run-block compression. Run-block compression achieves sublinear space complexity and is effective at compressing diverse microbial databases like RefSeq while supporting fast rank queries. Combining this compression method with other strategies for compacting the Ferragina-Manzini (FM) index, Centrifuger reduces the memory footprint by half compared to other FM-index-based approaches. Furthermore, the lossless compression and the unconstrained match length help Centrifuger achieve greater accuracy than competing methods at lower taxonomic levels.
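Run-block compression itself is Centrifuger's own scheme and is not reproduced here; the sketch below only shows the generic idea it builds on, answering rank queries directly over a run-length view of the BWT. The linear scan per query is for clarity; the real index adds sampled counts to make queries fast and the space sublinear.

```python
from itertools import groupby

def rle(bwt: str):
    """Run-length encode a BWT string into (char, start, length) runs."""
    runs, pos = [], 0
    for ch, grp in groupby(bwt):
        n = sum(1 for _ in grp)
        runs.append((ch, pos, n))
        pos += n
    return runs

def rank(runs, c: str, i: int) -> int:
    """Count occurrences of character c in bwt[:i] using only the run list.
    Linear in the number of runs; illustration only, not Centrifuger's index."""
    total = 0
    for ch, start, length in runs:
        if start >= i:
            break
        if ch == c:
            total += min(length, i - start)
    return total

bwt = "AAAACCGGGGGTTAAA"          # toy BWT of a repetitive text
runs = rle(bwt)
print(runs)
print(rank(runs, "G", 10))        # G's among the first 10 BWT characters -> 4
```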


Subject(s)
Data Compression , Metagenomics , Data Compression/methods , Metagenomics/methods , Software , Genome, Microbial , Genome, Bacterial , Sequence Analysis, DNA/methods
14.
PLoS One ; 19(4): e0301622, 2024.
Article in English | MEDLINE | ID: mdl-38630695

ABSTRACT

This paper proposes a reinforced concrete (RC) boundary beam-wall system that requires less construction material and a smaller floor height compared to the conventional RC transfer girder system. The structural performance of this system subjected to axial compression was evaluated by performing a structural test on four specimens of 1/2 scale. In addition, three-dimensional nonlinear finite element analysis was also performed to verify the effectiveness of the boundary beam-wall system. Three test parameters such as the lower wall length-to-upper wall length ratio, lower wall thickness, and stirrup details of the lower wall were considered. The load-displacement curve was plotted for each specimen and its failure mode was identified. The test results showed that decrease in the lower wall length-to-upper wall length ratio significantly reduced the peak strength of the boundary beam-wall system and difference in upper and lower wall thicknesses resulted in lateral bending caused by eccentricity in the out-of-plane direction. Additionally, incorporating cross-ties and reducing stirrup spacing in the lower wall significantly improved initial stiffness and peak strength, effectively minimizing stress concentration.


Subject(s)
Construction Materials , Data Compression , Finite Element Analysis , Physical Phenomena
15.
J Proteome Res ; 23(5): 1702-1712, 2024 May 03.
Article in English | MEDLINE | ID: mdl-38640356

ABSTRACT

Several lossy compressors have achieved superior compression rates for mass spectrometry (MS) data at the cost of storage precision. Currently, the impacts of precision losses on MS data processing have not been thoroughly evaluated, which is critical for the future development of lossy compressors. We first evaluated different storage precisions (32 bit and 64 bit) in lossless mzML files. We then applied 10 truncation transformations to generate precision-lossy files: five relative errors for intensities and five absolute errors for m/z values. MZmine3 and XCMS were used for feature detection and GNPS for compound annotation. Lastly, we compared Precision, Recall, F1-score, and file sizes between lossy files and lossless files under different conditions. Overall, we revealed that the discrepancy between 32 bit and 64 bit precision was under 1%. We proposed an absolute m/z error of 10^-4 and a relative intensity error of 2 × 10^-2, adhering to a 5% error threshold (F1-scores above 95%). For a stricter 1% error threshold (F1-scores above 99%), an absolute m/z error of 2 × 10^-5 and a relative intensity error of 2 × 10^-3 were advised. This guidance aims to help researchers improve lossy compression algorithms and minimize the negative effects of precision losses on downstream data processing.
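The error thresholds above are quoted from the abstract; the quantizers below are generic illustrations of how an absolute bound on m/z and a relative bound on intensity can be enforced (uniform rounding, and uniform rounding in log space, respectively). They are not the authors' exact truncation transformations, and the intensity quantizer assumes strictly positive values.

```python
import numpy as np

# Generic precision-reduction sketch: enforce an absolute error bound on m/z values
# and a relative error bound on (positive) intensities via rounding to a grid.
def quantize_mz(mz, abs_err=1e-4):
    step = 2 * abs_err                      # rounding to this grid keeps |error| <= abs_err
    return np.round(mz / step) * step

def quantize_intensity(inten, rel_err=2e-2):
    step = 2 * np.log1p(rel_err)            # rounding log-intensity to this grid keeps
    return np.exp(np.round(np.log(inten) / step) * step)   # |reconstructed/true - 1| <= rel_err

mz = np.array([300.12345, 512.98765])
inten = np.array([1.0e4, 3.3e6])
print(quantize_mz(mz), np.abs(quantize_mz(mz) - mz).max())            # error <= 1e-4
print(np.abs(quantize_intensity(inten) / inten - 1).max())            # error <= 2e-2
```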


Subject(s)
Data Compression , Mass Spectrometry , Metabolomics , Mass Spectrometry/methods , Metabolomics/methods , Metabolomics/statistics & numerical data , Data Compression/methods , Software , Humans , Algorithms
16.
J Biomed Opt ; 29(Suppl 1): S11529, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38650979

ABSTRACT

Significance: Compressed sensing (CS) uses special measurement designs combined with powerful mathematical algorithms to reduce the amount of data to be collected while maintaining image quality. This is relevant to almost any imaging modality, and in this paper we focus on CS in photoacoustic projection imaging (PAPI) with integrating line detectors (ILDs). Aim: Our previous research involved rather general CS measurements, where each ILD can contribute to any measurement. In the real world, however, the design of CS measurements is subject to practical constraints. In this research, we aim at a CS-PAPI system where each measurement involves only a subset of ILDs, and which can be implemented in a cost-effective manner. Approach: We extend the existing PAPI with a self-developed CS unit. The system provides structured CS matrices for which the existing recovery theory cannot be applied directly. A random search strategy is applied to select the CS measurement matrix within this class for which we obtain exact sparse recovery. Results: We implement a CS PAPI system for a compression factor of 4:3, where specific measurements are made on separate groups of 16 ILDs. We algorithmically design optimal CS measurements that have proven sparse CS capabilities. Numerical experiments are used to support our results. Conclusions: CS with proven sparse recovery capabilities can be integrated into PAPI, and numerical results support this setup. Future work will focus on applying it to experimental data and utilizing data-driven approaches to enhance the compression factor and generalize the signal class.


Subject(s)
Algorithms , Equipment Design , Image Processing, Computer-Assisted , Photoacoustic Techniques , Photoacoustic Techniques/methods , Photoacoustic Techniques/instrumentation , Image Processing, Computer-Assisted/methods , Data Compression/methods , Phantoms, Imaging
17.
J Acoust Soc Am ; 155(4): 2589-2602, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38607268

ABSTRACT

The processing and perception of amplitude modulation (AM) in the auditory system reflect a frequency-selective process, often described as a modulation filterbank. Previous studies on perceptual AM masking reported similar results for older listeners with hearing impairment (HI listeners) and young listeners with normal hearing (NH listeners), suggesting no effects of age or hearing loss on AM frequency selectivity. However, recent evidence has shown that age, independently of hearing loss, adversely affects AM frequency selectivity. Hence, this study aimed to disentangle the effects of hearing loss and age. A simultaneous AM masking paradigm was employed, using a sinusoidal carrier at 2.8 kHz, narrowband noise modulation maskers, and target modulation frequencies of 4, 16, 64, and 128 Hz. The results obtained from young (n = 3, 24-30 years of age) and older (n = 10, 63-77 years of age) HI listeners were compared to previously obtained data from young and older NH listeners. Notably, the HI listeners generally exhibited lower (unmasked) AM detection thresholds and greater AM frequency selectivity than their NH counterparts in both age groups. Overall, the results suggest that age negatively affects AM frequency selectivity for both NH and HI listeners, whereas hearing loss improves AM detection and AM selectivity, likely due to the loss of peripheral compression.


Subject(s)
Data Compression , Deafness , Hearing Loss , Humans , Perceptual Masking
18.
Sensors (Basel) ; 24(7)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38610365

ABSTRACT

High-quality cardiopulmonary resuscitation (CPR) and training are important for successful revival during out-of-hospital cardiac arrest (OHCA). However, existing training faces challenges in quantifying each aspect. This study aimed to explore the possibility of using a three-dimensional motion capture system to accurately and effectively assess CPR operations, particularly the arm postures that are not usually quantified, and to analyze the relationships among these parameters to guide students in improving their performance. We used a motion capture system (Mars series, Nokov, China) to collect compression data over five cycles, recording the dynamic position of each marker point in three-dimensional space over time and calculating compression depth and arm angles. Most measurements deviated from the standard to some extent and were unstable, especially for the untrained students. The five data sets for each parameter per individual all revealed statistically significant differences (p < 0.05). The correlation between Angle 1' and Angle 2' differed between trained (rs = 0.203, p < 0.05) and untrained students (rs = -0.581, p < 0.01). Their performance still needed improvement. When conducting assessments, we should focus not only on the overall performance but also on each compression. This study provides a new perspective for quantifying compression parameters, and future efforts should continue to incorporate new parameters and analyze the relationships among them.
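The arm angles mentioned above are not defined in the abstract, so the sketch below only shows the standard vector-angle calculation from three 3D marker positions. The marker names (shoulder, elbow, wrist) and coordinates are hypothetical placeholders, not the study's marker set or angle definitions.

```python
import numpy as np

# Standard joint-angle calculation from 3D marker positions. Marker names and
# coordinates are placeholders; the study's exact angle definitions are not given.
def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by segments b->a and b->c."""
    u, v = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

shoulder = [0.00, 0.10, 1.40]
elbow    = [0.00, 0.12, 1.10]
wrist    = [0.00, 0.14, 0.80]
print(f"elbow angle: {joint_angle(shoulder, elbow, wrist):.1f} deg")   # ~180 = straight arm
```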


Subject(s)
Cardiopulmonary Resuscitation , Data Compression , Humans , Feasibility Studies , Motion Capture , China
19.
PLoS One ; 19(4): e0288296, 2024.
Article in English | MEDLINE | ID: mdl-38557995

ABSTRACT

Network traffic prediction is an important network monitoring method, widely used in network resource optimization and anomaly detection. However, with the increasing scale of networks and the rapid development of 5th-generation mobile networks (5G), traditional traffic forecasting methods are no longer applicable. To solve this problem, this paper applies Long Short-Term Memory (LSTM) networks, data augmentation, a clustering algorithm, model compression, and other techniques, and proposes a Cluster-based Lightweight PREdiction Model (CLPREM), a method for real-time traffic prediction in 5G mobile networks. We have designed unique data processing and classification methods to make CLPREM more robust than traditional neural network models. To demonstrate the effectiveness of the method, we designed and conducted experiments in a variety of settings. Experimental results confirm that CLPREM obtains higher accuracy than traditional prediction schemes at a lower time cost. To address the occasional anomaly prediction issue in CLPREM, we propose a preprocessing method that minimally impacts time overhead. This approach not only enhances the accuracy of CLPREM but also effectively resolves the real-time traffic prediction challenge in 5G mobile networks.


Subject(s)
Data Compression , Neural Networks, Computer , Algorithms , Forecasting
20.
Eur J Radiol ; 175: 111418, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38490130

ABSTRACT

PURPOSE: To investigate the potential of combining Compressed Sensing (CS) and a newly developed AI-based super-resolution reconstruction prototype consisting of a series of convolutional neural networks (CNN) for a complete five-minute 2D knee MRI protocol. METHODS: In this prospective study, 20 volunteers were examined using a 3T MRI scanner (Ingenia Elition X, Philips). Similar to clinical practice, the protocol consists of a fat-saturated 2D proton-density sequence in coronal, sagittal, and transversal orientation as well as a sagittal T1-weighted sequence. The sequences were acquired with two different resolutions (standard and low resolution) and the raw data reconstructed with two different reconstruction algorithms: a conventional Compressed SENSE (CS) and a new CNN-based algorithm for denoising and subsequent interpolation to increase the sharpness of the image (CS-SuperRes). Subjective image quality was evaluated by two blinded radiologists reviewing 8 criteria on a 5-point Likert scale, and signal-to-noise ratio was calculated as an objective parameter. RESULTS: The protocol reconstructed with CS-SuperRes received higher ratings than the time-equivalent CS reconstructions, statistically significant especially for low-resolution acquisitions (e.g., overall image impression: 4.3 ± 0.4 vs. 3.4 ± 0.4, p < 0.05). CS-SuperRes reconstructions of the low-resolution acquisition were comparable to traditional CS reconstructions at standard resolution for all parameters, achieving a scan time reduction from 11:01 min to 4:46 min (57%) for the complete protocol (e.g., overall image impression: 4.3 ± 0.4 vs. 4.0 ± 0.5, p < 0.05). CONCLUSION: The newly developed AI-based reconstruction algorithm CS-SuperRes reduces scan time by 57% while maintaining image quality compared to the conventional CS reconstruction.


Subject(s)
Algorithms , Healthy Volunteers , Knee Joint , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Male , Female , Prospective Studies , Adult , Knee Joint/diagnostic imaging , Data Compression/methods , Neural Networks, Computer , Middle Aged , Signal-To-Noise Ratio , Image Interpretation, Computer-Assisted/methods , Young Adult