Results 1 - 20 of 1,995
1.
BMC Bioinformatics ; 22(1): 68, 2021 Feb 12.
Article in English | MEDLINE | ID: mdl-33579189

ABSTRACT

BACKGROUND: The clustering of data produced by liquid chromatography coupled to mass spectrometry (LC-MS data) has recently gained interest as a way to extract meaningful chemical or biological patterns. However, recent instrumental pipelines deliver data whose size, dimensionality and expected number of clusters are too large to be processed by classical machine learning algorithms, so that most of the state of the art relies on single-pass linkage-based algorithms. RESULTS: We propose a clustering algorithm that solves the powerful but computationally demanding kernel k-means objective function in a scalable way. As a result, it can process LC-MS data in an acceptable time on a multicore machine. To do so, we combine three essential features: a compressive data representation, Nyström approximation and a hierarchical strategy. In addition, we propose new kernels based on optimal transport, which can be interpreted as intuitive similarity measures between chromatographic elution profiles. CONCLUSIONS: Our method, referred to as CHICKN, is evaluated on proteomics data produced in our lab, as well as on benchmark data from the literature. From a computational viewpoint, it is particularly efficient on raw LC-MS data. From a data analysis viewpoint, it provides clusters that differ from those produced by state-of-the-art methods, while achieving similar performance. This highlights the complementarity of differently principled algorithms for extracting the best from complex LC-MS data.
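
The core scalability idea, approximating the kernel so that the kernel k-means objective can be optimized with ordinary k-means in an explicit feature space, can be illustrated with a minimal Python sketch. This is not the CHICKN implementation: the data, the RBF kernel, and all parameter values below are placeholders.

```python
# Minimal sketch: kernel k-means via Nystroem feature approximation (illustrative,
# not the CHICKN implementation; data, kernel, and parameter values are assumptions).
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 64))           # stand-in for compressed LC-MS feature vectors

# Nystroem builds a low-rank approximation of the kernel matrix from 200 landmarks,
# so the kernel k-means objective can be optimised with ordinary (linear) k-means.
feature_map = Nystroem(kernel="rbf", gamma=0.05, n_components=200, random_state=0)
Z = feature_map.fit_transform(X)            # (n_samples, n_components) approximate kernel features

labels = KMeans(n_clusters=50, n_init=4, random_state=0).fit_predict(Z)
print(np.bincount(labels)[:10])             # sizes of the first few clusters
```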


Subjects
Algorithms , Cluster Analysis , Peptides , Proteomics , Chromatography, Liquid , Data Compression , Mass Spectrometry , Peptides/chemistry , Proteomics/methods
2.
Nan Fang Yi Ke Da Xue Xue Bao ; 41(2): 279-284, 2021 Feb 25.
Article in Chinese | MEDLINE | ID: mdl-33624603

ABSTRACT

To reduce energy loss during data transmission and storage in Internet of Things systems and to improve the transmission efficiency of fetal heart rate data for real-time monitoring of the fetus, we used a convolutional codec network (CC-Net) to compress the data. The network consists of two modules: an encoding module and a decoding module. The original data are compressed in the encoding module and reconstructed in the decoding module. The internal parameters are continuously updated to minimize the mean square error between the original and reconstructed signals, so that the encoding module produces an effective compressed representation. In this study, the compression ratio of fetal heart rate signals using this method reached 12.07%, and the error between the reconstructed and original signals was 0.03. The proposed CC-Net can achieve a very low compression ratio for fetal heart rate compression while ensuring a high similarity between the reconstructed and the original signals, thereby retaining the important information in fetal heart rate signals.
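
A generic convolutional encoder-decoder of the kind described can be sketched in a few lines of PyTorch; the layer sizes and training setup below are assumptions, not the published CC-Net architecture. The encoder output serves as the compressed representation and the network is trained on the mean squared reconstruction error.

```python
# Generic 1-D convolutional encoder/decoder sketch (assumed architecture, not CC-Net itself).
import torch
import torch.nn as nn

class ConvCodec(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: downsample the signal 8x; its output is the "compressed" code.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(8, 4, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(4, 1, kernel_size=5, stride=2, padding=2),
        )
        # Decoder: mirror of the encoder, reconstructs the original length.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(1, 4, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(4, 8, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(8, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

model = ConvCodec()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 1, 256)                  # stand-in for fetal heart rate segments
for _ in range(10):                          # training loop: MSE between input and reconstruction
    recon, code = model(x)
    loss = nn.functional.mse_loss(recon, x)
    opt.zero_grad(); loss.backward(); opt.step()
print(code.shape)                            # compressed representation, 8x shorter than the input
```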


Subjects
Data Compression , Heart Rate, Fetal , Algorithms , Female , Humans , Pregnancy , Signal Processing, Computer-Assisted
3.
Ultrasonics ; 112: 106354, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33450526

ABSTRACT

Compressed sensing (CS) has been adapted to synthetic aperture (SA) ultrasound imaging to improve the frame rate of the system. Recently, we proposed a novel CS framework using Gaussian under-sampling to reduce the number of receive elements in multi-element synthetic transmit aperture (MSTA) imaging. However, that framework requires different receive elements to be chosen randomly for each transmission, which may add to practical implementation challenges. Modifying the scheme to employ the same set of receive elements for all transmissions of MSTA leads to degradation of the recovered image quality. Therefore, this work proposes a novel sampling scheme based on a genetic algorithm (GA), which optimally chooses the receive element positions once and uses them for all transmissions of MSTA. The CS performance using the GA sampling scheme is evaluated against the previously proposed CS framework on in-vitro and in-vivo datasets. The obtained results suggest that the GA-based approach not only allows the use of the same set of sparse receive elements for each transmission, but also leads to the lowest CS recovery error (NRMSE) and a 14% overall improvement in image contrast compared to the previously proposed Gaussian sampling scheme. Thus, using the CS framework along with the GA can potentially reduce the implementation complexity of CS in MSTA-based systems.
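
A bare-bones genetic algorithm for choosing a fixed subset of receive elements could look like the sketch below. The fitness function is a placeholder (the paper optimizes CS recovery error), and the population size, mutation scheme, and element counts are assumptions.

```python
# Toy genetic algorithm that selects k receive-element positions out of n (illustrative only;
# the fitness function is a placeholder for the CS recovery error used in the paper).
import numpy as np

rng = np.random.default_rng(0)
n_elements, k, pop_size, n_gen = 128, 32, 40, 60

def fitness(mask):
    # Placeholder objective: reward evenly spread elements (stand-in for low NRMSE).
    return -np.var(np.diff(np.flatnonzero(mask)))

def random_individual():
    mask = np.zeros(n_elements, dtype=bool)
    mask[rng.choice(n_elements, k, replace=False)] = True
    return mask

def crossover(a, b):
    child = a & b                                  # keep elements common to both parents
    missing = k - int(child.sum())
    if missing:                                    # fill up from the symmetric difference
        child[rng.choice(np.flatnonzero(a ^ b), missing, replace=False)] = True
    return child

def mutate(mask):
    on, off = np.flatnonzero(mask), np.flatnonzero(~mask)
    mask[rng.choice(on)] = False                   # swap one active element for an inactive one
    mask[rng.choice(off)] = True
    return mask

population = [random_individual() for _ in range(pop_size)]
for _ in range(n_gen):
    population.sort(key=fitness, reverse=True)
    parents = population[: pop_size // 2]          # truncation selection
    children = []
    while len(children) < pop_size - len(parents):
        i, j = rng.choice(len(parents), size=2, replace=False)
        children.append(mutate(crossover(parents[i], parents[j])))
    population = parents + children

best = max(population, key=fitness)
print(np.flatnonzero(best))                        # chosen receive-element indices, reused for every transmission
```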


Subjects
Algorithms , Data Compression/methods , Ultrasonography/methods , Image Processing, Computer-Assisted/methods , Phantoms, Imaging , Transducers
4.
Ultrasonics ; 110: 106229, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33091651

ABSTRACT

Medical ultrasound images are inherently corrupted by speckle noise, which may interfere with Computer Aided Diagnostics (CAD) tasks such as automatic segmentation. A combined compression and speckle de-noising method is proposed and tested on real clinical breast and fetal ultrasound images. The proposed algorithm is based on the optimization of the quantization coefficients applied to the wavelet representation of the image, where the optimization is carried out so that a pre-defined mathematical fidelity criterion with respect to a desired de-speckled image is met. The proposed algorithm yields effective speckle reduction whilst preserving the edges in the images, with a reduced computational burden compared to existing state-of-the-art methods such as Optimal Bayesian Non-Local Means (OBNLM). In addition, the images are simultaneously compressed to a target bit-rate. The proposed algorithm is evaluated using both objective mathematical fidelity criteria (such as Structural Similarity and edge preservation) and subjective tests with radiologists. The experimental results demonstrate the ability of the proposed method to achieve de-speckled images with compression ratios of approximately 30:1, whilst obtaining competitive subjective as well as objective fidelity measures with respect to the desired de-speckled images.
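
The central operation, quantizing wavelet coefficients of the image so that fine-scale speckle is suppressed while strong edges survive, can be sketched with PyWavelets. The uniform quantization step below is chosen by hand; the paper instead optimizes the quantizers against a fidelity criterion with respect to a de-speckled reference image.

```python
# Sketch of wavelet-domain quantization for joint compression/de-speckling (the quantization
# step and wavelet choice are assumptions; the paper optimizes them against a fidelity criterion).
import numpy as np
import pywt

rng = np.random.default_rng(0)
image = rng.gamma(shape=4.0, scale=32.0, size=(256, 256))   # stand-in for a speckled ultrasound image

coeffs = pywt.wavedec2(image, wavelet="db4", level=4)
step = 24.0                                                  # uniform quantization step (assumption)

def quantize(c):
    return np.round(c / step) * step                         # uniform mid-tread quantizer

q_coeffs = [quantize(coeffs[0])] + [tuple(quantize(d) for d in detail) for detail in coeffs[1:]]
reconstructed = pywt.waverec2(q_coeffs, wavelet="db4")

nonzero = sum(np.count_nonzero(d) for detail in q_coeffs[1:] for d in detail)
print("nonzero detail coefficients kept:", nonzero)          # fewer coefficients -> smaller encoded size
```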


Subjects
Algorithms , Data Compression/methods , Image Enhancement/methods , Ultrasonography, Mammary , Ultrasonography, Prenatal , Female , Humans
5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 2031-2034, 2020 07.
Article in English | MEDLINE | ID: mdl-33018403

ABSTRACT

The normalized cross-correlation (NCC) function used in ultrasound strain imaging can be corrupted by signal decorrelation, which induces large displacement errors. Bayesian regularization has been applied in an iterative manner to regularize the NCC function and to reduce estimation variance and peak-hopping errors. However, an incorrect choice of the number of iterations can lead to over-regularization errors. In this paper, we propose the use of log compression of the regularized NCC function to improve subsample estimation. The performance of parabolic interpolation before and after log compression of the regularized NCC function was compared in numerical simulations of uniform and inclusion phantoms. Significant improvement was achieved with the proposed scheme for lateral estimation. For example, the lateral signal-to-noise ratio (SNR) was 10 dB higher after log compression at 3% strain in a uniform phantom, and the lateral contrast-to-noise ratio (CNR) was 1.81 dB higher with the proposed method at 3% strain in an inclusion phantom. No significant difference was observed in axial estimation, owing to the presence of phase information and the high sampling frequency. Our results suggest that this simple approach makes Bayesian regularization robust to over-regularization artifacts.
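
The subsample step amounts to fitting a parabola through the correlation peak and its two neighbours, either on the raw NCC values or on their log-compressed values. The sketch below uses a synthetic Gaussian-shaped correlation function to illustrate the idea; it is not the simulation setup of the paper.

```python
# Parabolic subsample peak estimation on a regularized NCC function,
# before and after log compression (synthetic correlation values; illustrative only).
import numpy as np

def parabolic_peak(c):
    """Return the subsample peak location (in samples) of a 1-D correlation function."""
    k = int(np.argmax(c))
    y0, y1, y2 = c[k - 1], c[k], c[k + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)   # vertex of the fitted parabola
    return k + delta

lags = np.arange(-10, 11)
true_shift = 0.3
ncc = np.exp(-0.5 * ((lags - true_shift) / 2.5) ** 2)   # synthetic peaked, non-negative NCC

print("raw NCC estimate:       ", parabolic_peak(ncc) - 10)
print("log-compressed estimate:", parabolic_peak(np.log(ncc + 1e-12)) - 10)
```

For a Gaussian-shaped peak the log-compressed curve is exactly parabolic, so the second estimate recovers the true 0.3-sample shift while the raw estimate carries a small interpolation bias, which mirrors the benefit reported for lateral estimation.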


Subjects
Data Compression , Elasticity Imaging Techniques , Algorithms , Bayes Theorem , Ultrasonography
6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 3489-3492, 2020 07.
Article in English | MEDLINE | ID: mdl-33018755

ABSTRACT

In this paper a new compression technique based on the discrete Tchebichef transform is presented. To comply with strict on-implant hardware implementation requirements, such as low power dissipation and small silicon area consumption, the discrete Tchebichef transform is modified and truncated. An algorithm is proposed to generate approximate transform matrices capable of truncation without suffering from destructive energy leakage among the coefficients. This is achieved by preserving the orthogonality of the basis functions that convey the majority of the signal energy. Based on the presented algorithm, a new truncated transformation matrix is proposed, which reduces the hardware complexity by up to 74% compared to that of the original transform. Hardware implementation of the proposed neural signal compression technique is prototyped using standard digital hardware. With pre-recorded neural signals as the input, a compression rate of 26.15 is achieved while the root-mean-square error is kept as low as 1.1%. Clinical Relevance: This paper proposes a technique for data compression in high-density neural recording brain implants, along with a power- and area-efficient hardware implementation. Clinical applications of such implants include neuroprostheses and brain-machine interfaces for therapeutic purposes.


Assuntos
Interfaces Cérebro-Computador , Compressão de Dados , Algoritmos , Computadores , Registros
7.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 4318-4321, 2020 07.
Article in English | MEDLINE | ID: mdl-33018951

ABSTRACT

This paper investigates the effectiveness of four Huffman-based compression schemes for different intracortical neural signals and sample resolutions. The motivation is to find effective lossless, low-complexity data compression schemes for Wireless Intracortical Brain-Machine Interfaces (WI-BMI). The considered schemes include pre-trained Lone 1st and 2nd order encoding [1], pre-trained Delta encoding, and pre-trained Linear Neural Network Time (LNNT) encoding [2]. Maximum codeword-length-limited versions are also considered to protect against overfitting to the training data. The considered signals are the Extracellular Action Potential signal, the Entire Spiking Activity signal, and the Local Field Potential signal. Sample resolutions of 5 to 13 bits are considered. The results show that overfit protection dramatically improves compression, especially at higher sample resolutions. Across signals, 2nd order encoding generally performed best at lower sample resolutions, while 1st order, Delta and LNNT encoding performed best at higher sample resolutions. The proposed methods should generalise to other remote sensing applications where the distribution of the sensed data can be estimated a priori.
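
A minimal pre-trained, sample-wise Huffman coder (the simplest of the variants listed above) can be built as follows; the higher-order, Delta, LNNT, and codeword-length-limited schemes are not reproduced, and the Laplacian training data is a stand-in for real neural recordings.

```python
# Minimal pre-trained Huffman coder for quantized neural samples (illustrative; the
# codeword-length-limited and higher-order variants from the paper are not included).
import heapq
from collections import Counter
import numpy as np

def build_huffman(symbol_counts):
    """Return {symbol: bitstring} built from a training histogram."""
    heap = [[count, i, [sym, ""]] for i, (sym, count) in enumerate(symbol_counts.items())]
    heapq.heapify(heap)
    tie = len(heap)                                   # unique tiebreaker so lists never get compared
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for leaf in lo[2:]:
            leaf[1] = "0" + leaf[1]
        for leaf in hi[2:]:
            leaf[1] = "1" + leaf[1]
        heapq.heappush(heap, [lo[0] + hi[0], tie] + lo[2:] + hi[2:])
        tie += 1
    return {sym: code for sym, code in heap[0][2:]}

# "Pre-train" the code on one recording, then encode another from a similar distribution.
rng = np.random.default_rng(0)
train = np.clip(np.round(rng.laplace(scale=3, size=50_000)), -15, 15).astype(int)
test = np.clip(np.round(rng.laplace(scale=3, size=10_000)), -15, 15).astype(int)

table = build_huffman(Counter(train.tolist()))
bits = sum(len(table[s]) for s in test.tolist())
print(f"{bits / len(test):.2f} bits/sample vs 5 bits/sample uncompressed")
```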


Subjects
Data Compression , Neural Networks, Computer , Physical Phenomena
8.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 5398-5401, 2020 07.
Article in English | MEDLINE | ID: mdl-33019201

ABSTRACT

Atrial Fibrillation (AF) is a cardiac condition resulting from uncoordinated contraction of the atria, which may increase the risk of heart attacks, strokes, and death. AF symptoms may go undetected and may require long-term monitoring of the electrocardiogram (ECG) to be detected. Long-term ECG monitoring can generate a large amount of data, which increases the power, storage, and wireless transmission bandwidth requirements of monitoring devices. Compressive Sensing (CS) is a compression technique applied at the sampling stage which may save power, storage, and wireless bandwidth of monitoring devices. The reconstruction of compressively sensed ECG is a computationally expensive operation; therefore, detecting AF directly in compressively sensed ECG is warranted. This paper presents preliminary results of using deep learning to detect AF in deterministically compressively sensed ECG. The MobileNetV2 convolutional neural network (CNN) was used in this paper. Transfer learning was utilized to leverage a pre-trained CNN, with the final two layers retrained using 24 records from the Long-Term Atrial Fibrillation Database. The Short-Term Fourier Transform was used to generate spectrograms that were fed to the CNN. The CNN was tested on the MIT-BIH Atrial Fibrillation Database at the uncompressed, 50%, 75%, and 95% compression levels. The performance of the CNN was evaluated using weighted average precision (AP) and the area under the receiver operating characteristic (ROC) curve (AUC). The CNN had an AP of 0.80, 0.70, 0.70, and 0.57 at the uncompressed, 50%, 75%, and 95% compression levels, respectively. The AUC was 0.87, 0.78, 0.79, and 0.75 at each compression level. The preliminary results show promise for using deep learning to detect AF in compressively sensed ECG. Clinical Relevance: This paper confirms that AF can be detected in compressively sensed ECG using deep learning. This will facilitate long-term ECG monitoring using wearable devices and will reduce adverse complications resulting from undiagnosed AF.
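
The spectrogram-plus-transfer-learning pipeline can be wired up roughly as shown below; the segment length, STFT window, resizing, and the choice of retraining only the classifier head are assumptions rather than the authors' exact configuration, and loading pretrained weights requires a torchvision version that provides the weights enum.

```python
# Rough sketch of the spectrogram + transfer-learning pipeline (assumed window sizes and
# preprocessing; not the authors' exact configuration).
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import stft
from torchvision.models import mobilenet_v2, MobileNet_V2_Weights

fs = 250
ecg_segment = np.random.randn(fs * 30)                  # stand-in for a 30 s (possibly CS-derived) ECG segment

# Short-Term Fourier Transform -> log-magnitude spectrogram.
f, t, Z = stft(ecg_segment, fs=fs, nperseg=128, noverlap=96)
spec = np.log1p(np.abs(Z)).astype(np.float32)

# Resize/stack to the 3x224x224 input MobileNetV2 expects.
img = torch.tensor(spec)[None, None]                     # (1, 1, freq, time)
img = nn.functional.interpolate(img, size=(224, 224), mode="bilinear", align_corners=False)
img = img.repeat(1, 3, 1, 1)

model = mobilenet_v2(weights=MobileNet_V2_Weights.DEFAULT)
for p in model.parameters():                             # freeze the pre-trained backbone
    p.requires_grad = False
model.classifier[1] = nn.Linear(model.last_channel, 2)   # retrain only the head: AF vs non-AF
model.eval()

logits = model(img)
print(logits.shape)                                      # torch.Size([1, 2])
```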


Subjects
Atrial Fibrillation , Data Compression , Atrial Fibrillation/diagnosis , Electrocardiography , Humans , Machine Learning , Neural Networks, Computer
9.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 5917-5920, 2020 07.
Article in English | MEDLINE | ID: mdl-33019321

ABSTRACT

A challenge when analyzing multimorbidity patterns in elderly people is the management of the large number of characteristics associated with each patient. The main variables for studying multimorbidity are diseases; however, other variables should be considered to better classify the people included in each pattern. Age, sex, social class and medication are frequently used in the typing of each multimorbidity pattern. Consequently, the cardinality of the set of features that characterize a patient is very high, and the set is normally compressed to obtain a patient vector of new variables whose dimension is noticeably smaller than that of the initial set. To minimize the information lost by compression, Principal Component Analysis (PCA)-based projection techniques have traditionally been used; although they are generally a good option, their projection is linear, which reduces flexibility and limits performance. As an alternative to PCA-based techniques, this paper proposes the use of autoencoders and shows the improvement in the multimorbidity patterns obtained from the compressed database when registry data on about a million patients (5 years' follow-up) are processed. This work demonstrates that autoencoders retain a larger amount of information in each pattern and that the results are more consistent with clinical experience than other approaches frequently found in the literature. Clinical Relevance: From an epidemiological perspective, the contribution is relevant, since it allows a more precise analysis of multimorbidity patterns, leading to better approaches to patient health strategies.
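
The comparison described, a linear PCA projection versus a nonlinear autoencoder bottleneck over the same patient feature vectors, can be set up as in the sketch below; the synthetic data, feature counts, and layer widths are placeholders.

```python
# Sketch of the PCA-vs-autoencoder comparison on patient feature vectors
# (synthetic data, feature counts, and layer widths are placeholders).
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.random((5_000, 200)).astype(np.float32)      # stand-in for per-patient features (diseases, drugs, age, ...)

# Linear baseline: project to 20 components with PCA.
X_pca = PCA(n_components=20).fit_transform(X)

# Nonlinear alternative: autoencoder with a 20-unit bottleneck trained on reconstruction error.
model = nn.Sequential(
    nn.Linear(200, 64), nn.ReLU(),
    nn.Linear(64, 20),                               # bottleneck = compressed patient vector
    nn.ReLU(),
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 200),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
Xt = torch.from_numpy(X)
for _ in range(200):
    recon = model(Xt)
    loss = nn.functional.mse_loss(recon, Xt)
    opt.zero_grad(); loss.backward(); opt.step()

encoder = model[:3]                                   # layers up to the bottleneck
X_ae = encoder(Xt).detach().numpy()
print(X_pca.shape, X_ae.shape)                        # both (5000, 20); cluster either to obtain patterns
```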


Subjects
Data Compression , Projective Techniques , Aged , Databases, Factual , Humans , Multimorbidity , Principal Component Analysis
10.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 5925-5928, 2020 07.
Article in English | MEDLINE | ID: mdl-33019323

ABSTRACT

Photoplethysmography (PPG) has been widely used for health monitoring in clinical medicine and wearable devices. To make full use of PPG signals for diagnosis and health care, raw PPG waveforms have to be stored and transmitted in a storage- and power-efficient way, which calls for data compression. In this study, we propose a new approach for PPG compression using stochastic modeling. The method models a single cardiac period of the PPG waveform using two sets of Gaussian functions to fit the forward and backward waves of the PPG pulse, representing the signal with a small number of parameters that are highly similar across cardiac periods. An adaptive quantization based on higher-order statistics of the inter-cardiac-period parameters is then adopted to quantize the continuous parameters into transmission-friendly integers of different bit depths. Although further ASCII encoding was not applied in this research, comparison results on a wearable PPG dataset with 30 subjects show that the proposed approach can achieve a much higher compression ratio (up to 41 at 200 samples/s for 18-bit data) than conventional delta-modulation-based methods under clinically acceptable recovery quality, with a percentage root-mean-square difference (PRD) lower than 9%. The algorithm may also compare favorably with state-of-the-art methods once lossless encoding, which the latter almost always include, is introduced. This study indicates the high potential of stochastic modeling for PPG compression, especially for reflective PPG collected by wearable devices, where signal amplitudes can be significantly affected by respiration. Clinical Relevance: This research establishes a new approach to photoplethysmography compression, which contributes to remote and telehealth monitoring with wearable devices.
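
Modeling one cardiac period as a sum of Gaussians and keeping only the quantized fit parameters can be sketched as follows; the two-Gaussian pulse shape, parameter ranges, and 6-bit uniform quantization are assumptions standing in for the paper's two sets of Gaussians and adaptive higher-order-statistics quantization.

```python
# Sketch: model one PPG pulse as a sum of two Gaussians, keep only the fitted
# parameters, and quantize them (bit depth, ranges, and pulse shape are assumptions).
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 1, 200)                                   # one cardiac period (normalized time)

def two_gaussians(t, a1, m1, s1, a2, m2, s2):
    return (a1 * np.exp(-0.5 * ((t - m1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((t - m2) / s2) ** 2))

pulse = two_gaussians(t, 1.0, 0.25, 0.08, 0.45, 0.55, 0.12) \
        + 0.01 * np.random.default_rng(0).normal(size=t.size)

p0 = [1.0, 0.2, 0.1, 0.5, 0.6, 0.1]                          # initial guess: forward and backward wave
params, _ = curve_fit(two_gaussians, t, pulse, p0=p0)

# Uniform 6-bit quantization of each parameter over an assumed range.
lo, hi, bits = np.array([0, 0, 0.01, 0, 0, 0.01]), np.array([2, 1, 0.5, 2, 1, 0.5]), 6
q = np.round((params - lo) / (hi - lo) * (2**bits - 1)).astype(int)
dequant = lo + q / (2**bits - 1) * (hi - lo)

prd = 100 * np.linalg.norm(pulse - two_gaussians(t, *dequant)) / np.linalg.norm(pulse)
print(f"6 parameters x {bits} bits instead of 200 samples; PRD = {prd:.1f}%")
```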


Subjects
Data Compression , Photoplethysmography , Algorithms , Heart Rate , Signal Processing, Computer-Assisted
11.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 6082-6085, 2020 07.
Article in English | MEDLINE | ID: mdl-33019358

ABSTRACT

A smartphone-based compression-induced sensing system uses the light diffusion pattern to characterize early-stage breast tumors noninvasively. The system is built on a smartphone and a cloud platform to capture and transfer data and to interface with the user. The deformation pattern of the compressed tissue creates distinctive tactile images that depend on the size and hardness of the tumor. From the compression-induced images, we estimate the size of the tumor using projection analysis and the tumor's malignancy using the tissue deformation index ratio. The deformation index ratio is based on the changes of a healthy region relative to the tumorous region. Using projection analysis, tumor size estimation in human patients resulted in an average error of 52.3%. In a small feasibility test (seven cases), tumor malignancy was classified based on the deformation index ratio with 67.0% sensitivity and 100% specificity.


Subjects
Breast Neoplasms , Data Compression , Smartphone , Breast , Breast Neoplasms/diagnosis , Humans , Pressure
12.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 328-331, 2020 07.
Article in English | MEDLINE | ID: mdl-33017995

ABSTRACT

One of the main challenges in sparse signal recovery within the compressed sensing framework is determining the sparsity order. Most model order selection methods introduce a penalty term for the number of parameters; however, they do not consider the variance of the observation and measurement noise. Minimum Noiseless Description Length (MNDL), on the other hand, considers these factors and provides more robust results in order selection. Nevertheless, it requires a noise variance (equivalently, SNR) estimate for the order selection procedure. In this paper, a new method is introduced to estimate the variance of the observation noise within the MNDL order selection method. The fully automated method simultaneously provides the SNR estimate and the sparsity order, and does not require any prior partial knowledge of, or assumption on, the noise variance. Simulation results for ECG compression show the advantages of the proposed automated MNDL over existing approaches in terms of parameter estimation error and SNR improvement.


Subjects
Data Compression , Signal Processing, Computer-Assisted , Algorithms , Electrocardiography , Signal-to-Noise Ratio
13.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 357-360, 2020 07.
Article in English | MEDLINE | ID: mdl-33018002

ABSTRACT

Automatic electrocardiogram (ECG) analysis for pacemaker patients is crucial for monitoring cardiac conditions and the effectiveness of cardiac resynchronization treatment. However, under energy-saving remote monitoring conditions, the low sampling rate of an ECG device can lead to missed detection of pacemaker spikes as well as incorrect analysis of paced rhythm and non-paced arrhythmias. To solve this issue, this paper proposes a novel system that applies the compressive sampling (CS) framework to acquire ECG at sub-Nyquist rates and reconstruct it, and then uses multi-dimensional feature-based deep learning to identify paced rhythm and non-paced arrhythmias. Simulation results on ECG databases and comparisons with existing approaches demonstrate its effectiveness and outstanding performance for pacemaker ECG analysis.
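
The sub-Nyquist acquisition and reconstruction stage reduces, in its simplest form, to multiplying the signal by a random sensing matrix and recovering it with a sparsity-promoting solver. The toy sketch below uses a Bernoulli sensing matrix and a DCT sparsifying basis; the paper's actual sensing scheme, basis, and downstream deep learning classifier are not reproduced.

```python
# Toy compressive-sampling acquisition and l1 reconstruction of an ECG-like signal
# (random Bernoulli sensing matrix and a DCT sparsifying basis are assumptions).
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, m = 256, 77                                             # ~70% compression: 77 measurements for 256 samples

t = np.arange(n) / 250.0
x = np.sin(2 * np.pi * 1.2 * t) + 0.4 * np.sin(2 * np.pi * 8 * t)   # stand-in for one ECG segment

Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)    # sensing matrix applied at acquisition time
y = Phi @ x                                                # the only data the device transmits

Psi = idct(np.eye(n), norm="ortho", axis=0)                # DCT synthesis basis: x = Psi @ alpha
A = Phi @ Psi
alpha = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50_000).fit(A, y).coef_
x_hat = Psi @ alpha

prd = 100 * np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"reconstructed from {m}/{n} measurements, PRD = {prd:.1f}%")
```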


Subjects
Data Compression , Pacemaker, Artificial , Arrhythmias, Cardiac/diagnosis , Deep Learning , Electrocardiography , Humans
14.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 580-583, 2020 07.
Article in English | MEDLINE | ID: mdl-33018055

ABSTRACT

Recently, classification from compressed physiological signals in compressed sensing has been successfully applied to cardiovascular disease monitoring. However, in real-time wearable electrocardiogram (ECG) monitoring, it is very difficult to obtain heartbeat information directly from compressed ECG signals. Thus, arrhythmia classification from compressed ECG signals has to be handled in fixed-length segments instead of individual heartbeats. An inevitable issue is that a fixed-length ECG segment may contain multiple different types of arrhythmia, so it is not appropriate to represent such multi-type arrhythmia with a single label. In this paper, we first introduce multiple labels for fixed-length compressed ECG segments to address the arrhythmia classification issue. We then propose a deep learning model that can directly classify multiple different types of arrhythmia from fixed-length compressed ECG segments, with the advantages of low data processing time and relatively high classification accuracy at high compression ratios. Experimental results on the MIT-BIH arrhythmia database show that the exact match rate of the proposed method reaches 96.03% at a compression ratio (CR) of 70%, 94.99% at CR = 80%, and 93.19% at CR = 90%.


Subjects
Data Compression , Wearable Electronic Devices , Arrhythmias, Cardiac/diagnosis , Electrocardiography , Heart Rate , Humans
15.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 968-971, 2020 07.
Article in English | MEDLINE | ID: mdl-33018146

ABSTRACT

A compressor in hearing aid devices (HADs) is responsible for mapping the dynamic range of input signals to the residual dynamic range of hearing-impaired (HI) patients. The gains and parameters of the compressor are set according to the HI patient's preferences. In different surroundings, depending on the noise level, the patient may wish to tune the parameters to improve performance. Traditionally, fitting of hearing aids is done at a clinic by an audiologist using hearing aid software and the HI patient's feedback. In this paper, we propose a frequency-based multi-band compressor implemented as a smartphone application, which can be used as an alternative to traditional HADs. The proposed solution allows the user to tune the compression parameters for each band, along with a choice of compression speed and fitting strategy. Exploiting smartphone processing and hardware capabilities, the application can be used for bilateral hearing loss. The performance of this easy-to-use smartphone-based application is compared with traditional HADs using a hearing aid test system. Objective and subjective evaluations are also carried out to quantify the performance.
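
At its core, each band of such a compressor applies a static input-output gain curve defined by a threshold and a compression ratio. The simplified sketch below illustrates that per-band computation; band edges, thresholds, ratios, gains, and the level estimate are placeholders, and attack/release smoothing and fitting strategies are omitted.

```python
# Simplified static multi-band compressor (band edges, thresholds, ratios and gains are
# placeholders; attack/release time constants and fitting strategies are omitted).
import numpy as np
from scipy.signal import butter, sosfilt

fs = 16_000
bands = [(125, 500), (500, 2000), (2000, 6000)]          # Hz, per-band processing
thresholds_db = [50.0, 45.0, 40.0]                       # compression starts above these input levels
ratios = [2.0, 3.0, 4.0]                                 # compression ratio per band
gains_db = [10.0, 15.0, 20.0]                            # prescribed linear gain per band

def compress_band(x, threshold_db, ratio, gain_db, eps=1e-12):
    level_db = 20 * np.log10(np.sqrt(np.mean(x**2)) + eps) + 94   # crude level estimate (dB SPL offset assumed)
    excess = max(level_db - threshold_db, 0.0)
    gain = gain_db - excess * (1.0 - 1.0 / ratio)                 # reduce gain above the threshold
    return x * 10 ** (gain / 20)

rng = np.random.default_rng(0)
audio = rng.normal(scale=0.05, size=fs)                  # 1 s of stand-in microphone input

out = np.zeros_like(audio)
for (lo, hi), th, r, g in zip(bands, thresholds_db, ratios, gains_db):
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    band = sosfilt(sos, audio)
    out += compress_band(band, th, r, g)                 # sum the processed bands back together
print(out.shape)
```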


Subjects
Data Compression , Hearing Aids , Hearing Loss , Speech Perception , Hearing Loss/therapy , Hearing Loss, Bilateral , Humans
16.
PLoS One ; 15(8): e0228520, 2020.
Article in English | MEDLINE | ID: mdl-32857775

ABSTRACT

Health advances are contingent on continuous development of new methods and approaches to foster data-driven discovery in the biomedical and clinical sciences. Open-science and team-based scientific discovery offer hope for tackling some of the difficult challenges associated with managing, modeling, and interpreting large, complex, and multisource data. Translating raw observations into useful information and actionable knowledge depends on effective domain-independent reproducibility, area-specific replicability, data curation, analysis protocols, organization, management and sharing of health-related digital objects. This study expands the functionality and utility of an ensemble semi-supervised machine learning technique called Compressive Big Data Analytics (CBDA). Applied to high-dimensional data, CBDA (1) identifies salient features and key biomarkers enabling reliable and reproducible forecasting of binary, multinomial and continuous outcomes (i.e., feature mining); and (2) suggests the most accurate algorithms/models for predictive analytics of the observed data (i.e., model mining). The method relies on iterative subsampling, combines function optimization and statistical inference, and generates ensemble predictions for observed univariate outcomes. The novelty of this study is highlighted by a new and expanded set of CBDA features including (1) efficiently handling extremely large datasets (>100,000 cases and >1,000 features); (2) generalizing the internal and external validation steps; (3) expanding the set of base-learners for joint ensemble prediction; (4) introducing an automated selection of CBDA specifications; and (5) providing mechanisms to assess CBDA convergence, evaluate the prediction accuracy, and measure result consistency. To ground the mathematical model and the corresponding computational algorithm, CBDA 2.0 validation utilizes synthetic datasets as well as a population-wide census-like study. Specifically, an empirical validation of the CBDA technique is based on translational health research using a large-scale clinical study (UK Biobank), which includes imaging, cognitive, and clinical assessment data. The UK Biobank archive presents several difficult challenges related to the aggregation, harmonization, modeling, and interrogation of the information. These problems are related to the complex longitudinal structure, variable heterogeneity, feature multicollinearity, incongruency, and missingness, as well as violations of classical parametric assumptions. Our results show the scalability, efficiency, and usability of CBDA in distilling complex data into structural information, leading to derived knowledge and translational action. Applying CBDA 2.0 to the UK Biobank case-study allows predicting various outcomes of interest, e.g., mood disorders and irritability, and suggests new and exciting avenues of evidence-based research in the context of identifying, tracking, and treating mental health and aging-related diseases. Following open-science principles, we share the entire end-to-end protocol, source-code, and results. This facilitates independent validation, result reproducibility, and team-based collaborative discovery.


Subjects
Data Mining/methods , Data Science/methods , Algorithms , Big Data , Data Compression , Humans , Machine Learning , Meta-Analysis as Topic , Models, Theoretical , Physical Phenomena , Prognosis , Reproducibility of Results , Software
17.
PLoS One ; 15(8): e0236089, 2020.
Article in English | MEDLINE | ID: mdl-32790775

ABSTRACT

Multiscale geometric analysis (MGA) is not only characterized by multi-resolution, time-frequency localization, multidirectionality and anisotropy, but also overcomes the limitations of the wavelet transform in representing high-dimensional singular data such as edges and contours. Therefore, researchers have been exploring new MGA-based image compression standards beyond the JPEG2000 standard. However, due to differences in data structure, redundancy and decorrelation between wavelets and MGA, as well as the complexity of the coding scheme, no definitive research on MGA-based image coding schemes has been reported so far. To address this problem, this paper proposes an image data compression approach using a hidden Markov model (HMM)/pulse-coupled neural network (PCNN) model in the contourlet domain. First, a sparse decomposition of an image was performed using a contourlet transform to obtain coefficients that exhibit multiscale and multidirectional characteristics. An HMM was then adopted to establish links between coefficients in neighboring subbands of different levels and directions. An Expectation-Maximization (EM) algorithm was adopted to train the HMM and estimate the state probability matrix, which maintains the same structure as the contourlet decomposition coefficients. In addition, each state probability can be classified by the PCNN based on the state probability distribution. Experimental results show that the HMM/PCNN-contourlet model proposed in this paper leads to better compression performance and offers a more flexible encoding scheme.


Subjects
Data Compression/methods , Neural Networks, Computer , Wavelet Analysis , Markov Chains
18.
Ultrasonics ; 108: 106214, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32736163

ABSTRACT

In this work, a compressed sensing method to reduce the hardware complexity of ultrasound imaging systems is proposed and experimentally verified. We provide a clinical evaluation of the method at high compression rates (up to 64 RF signals compressed into a single channel on receive), using elastic net estimation for the decoding stage. This allows a reduction in the size and power consumption of the front-end electronics with only a minor loss in image quality. We demonstrate an 8-fold receive channel count reduction with a 3.16 dB and 3.64 dB mean absolute error for gallbladder and kidney images, respectively, as well as a 7.4% increase in the contrast-to-noise ratio for kidney images and a 0.1% loss in the contrast-to-noise ratio for gallbladder images, on average. The proposed method may enable the construction of a fully portable ultrasonic device with virtually no loss in image quality compared to a full-size clinical scanner.
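
The decoding stage, recovering many receive channels from a small number of coded sums with an elastic net estimator, can be mimicked on synthetic data as in the sketch below; the +/-1 mixing codes, the sparse channel model, and the regularization weights are assumptions, not the paper's acquisition scheme.

```python
# Toy version of compressed-receive decoding: 8 RF channels are summed into coded
# measurements and recovered with an elastic net (all parameters are assumptions).
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
n_channels, n_samples, n_codes = 8, 256, 4           # 8 RF lines compressed into 4 coded sums (2x reduction)

rf = np.zeros((n_channels, n_samples))
for ch in range(n_channels):                         # sparse scatterer echoes per channel
    idx = rng.choice(n_samples, 12, replace=False)
    rf[ch, idx] = rng.normal(size=12)

codes = rng.choice([-1.0, 1.0], size=(n_codes, n_channels))   # per-channel +/-1 mixing codes
measurements = codes @ rf                                      # what a single ADC would record, per code

# Stack the recovery as one linear system: vec(measurements) = A @ vec(rf)
A = np.kron(codes, np.eye(n_samples))                # (n_codes*n_samples, n_channels*n_samples)
y = measurements.reshape(-1)
model = ElasticNet(alpha=1e-3, l1_ratio=0.9, fit_intercept=False, max_iter=20_000).fit(A, y)
rf_hat = model.coef_.reshape(n_channels, n_samples)

err = np.linalg.norm(rf - rf_hat) / np.linalg.norm(rf)
print(f"relative recovery error: {err:.2f}")
```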


Subjects
Data Compression/methods , Ultrasonography/methods , Algorithms , Gallbladder/diagnostic imaging , Healthy Volunteers , Humans , Image Processing, Computer-Assisted/methods , Kidney/diagnostic imaging , Liver/diagnostic imaging , Signal Processing, Computer-Assisted , Signal-to-Noise Ratio , Ultrasonography/instrumentation
19.
BMC Bioinformatics ; 21(1): 321, 2020 Jul 20.
Article in English | MEDLINE | ID: mdl-32689929

ABSTRACT

BACKGROUND: Recent advancements in high-throughput sequencing technologies have generated an unprecedented amount of genomic data that must be stored, processed, and transmitted over the network for sharing. Lossy genomic data compression, especially of the base quality values of sequencing data, is emerging as an efficient way to handle this challenge due to its superior compression performance compared to lossless compression methods. Many lossy compression algorithms have been developed for and evaluated using DNA sequencing data. However, whether these algorithms can be used on RNA sequencing (RNA-seq) data remains unclear. RESULTS: In this study, we evaluated the impacts of lossy quality value compression on common RNA-seq data analysis pipelines including expression quantification, transcriptome assembly, and short variant detection, using RNA-seq data from different species and sequencing platforms. Our study shows that lossy quality value compression can effectively improve RNA-seq data compression. In some cases, lossy algorithms achieved up to 1.2-3 times further reduction in the overall RNA-seq data size compared to existing lossless algorithms. However, lossy quality value compression can affect the results of some RNA-seq data processing pipelines, and hence its impact on RNA-seq studies cannot be ignored in some cases. Pipelines using HISAT2 for alignment were most significantly affected by lossy quality value compression, while no effects were observed for pipelines that do not depend on quality values, e.g., STAR-based expression quantification and transcriptome assembly pipelines. Moreover, regardless of whether STAR or HISAT2 was used as the aligner, variant detection results were affected by lossy quality value compression, albeit to a lesser extent when the STAR-based pipeline was used. Our results also show that the impact of lossy quality value compression depends on the compression algorithm being used and on the compression level if the algorithm supports multiple compression levels. CONCLUSIONS: Lossy quality value compression can be incorporated into existing RNA-seq analysis pipelines to alleviate the data storage and transmission burdens. However, care should be taken in the selection of compression tools and levels, based on the requirements of the downstream analysis pipelines, to avoid introducing undesirable adverse effects on the analysis results.
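
Lossy quality value compression typically starts by reducing the quality alphabet before entropy coding. The sketch below applies an illustrative 8-bin, Illumina-style mapping to a FASTQ quality string; actual tools use their own bin edges, representative values, and downstream encoders.

```python
# Illustrative lossy quality-value binning for FASTQ records (the 8-bin mapping below is a
# commonly cited Illumina-style scheme; specific tools use their own bins and levels).
BINS = [(0, 1, 0), (2, 9, 6), (10, 19, 15), (20, 24, 22),
        (25, 29, 27), (30, 34, 33), (35, 39, 37), (40, 93, 40)]

def bin_quality(qual_string, offset=33):
    out = []
    for ch in qual_string:
        q = ord(ch) - offset                        # Phred score from the ASCII character
        rep = next(r for lo, hi, r in BINS if lo <= q <= hi)
        out.append(chr(rep + offset))
    return "".join(out)

original = "IIIIHHGF@@==;;:98750,,+**))('&%"        # stand-in quality string (Phred+33)
binned = bin_quality(original)
print(original)
print(binned)                                       # fewer distinct symbols -> smaller entropy-coded size
```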


Subjects
Algorithms , Data Compression/methods , Data Compression/standards , Genomics/methods , High-Throughput Nucleotide Sequencing/methods , Sequence Analysis, RNA/methods , Base Sequence , Gene Expression Profiling , Genome, Human , Humans
20.
PLoS One ; 15(5): e0232942, 2020.
Article in English | MEDLINE | ID: mdl-32453750

ABSTRACT

The recent decrease in the cost and time needed to sequence and assemble complete genomes has created an increased demand for data storage. As a consequence, several strategies for compressing assembled biological data have been created. Vertical compression tools implement strategies that take advantage of the high level of similarity between multiple assembled genomic sequences to obtain better compression results. However, current reviews on vertical compression do not compare the execution flow of each tool, which consists of preprocessing, transformation, and data encoding phases. We performed a systematic literature review to identify and compare existing tools for vertical compression of assembled genomic sequences. The review was centered on PubMed and Scopus, from which 45,726 distinct papers were considered. Next, 32 papers were selected according to the following criteria: to present a lossless vertical compression tool; to use the information contained in other sequences for the compression; to be able to manipulate genomic sequences in FASTA format; and to require no prior knowledge. Although we extracted compression performance results, they were not compared because the tools did not use a standardized evaluation protocol. Thus, we conclude that the field lacks a defined evaluation protocol to be applied by each tool.
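
Vertical compression exploits inter-sequence similarity by encoding each genome as edits against a reference. The sketch below shows that idea in its most minimal form for pre-aligned, equal-length sequences; real tools add far more sophisticated matching, transforms, and entropy coding on top.

```python
# Minimal reference-based ("vertical") encoding sketch: store a target sequence as
# (position, substitution) edits against a reference (real tools handle indels,
# arbitrary matching, and entropy coding; this is only the core idea).
def encode(reference: str, target: str):
    assert len(reference) == len(target), "sketch assumes pre-aligned, equal-length sequences"
    return [(i, t) for i, (r, t) in enumerate(zip(reference, target)) if r != t]

def decode(reference: str, edits):
    seq = list(reference)
    for i, base in edits:
        seq[i] = base
    return "".join(seq)

reference = "ACGTACGTACGTACGTACGT"
target    = "ACGTACGAACGTACGTACCT"
edits = encode(reference, target)
print(edits)                                   # [(7, 'A'), (18, 'C')]
assert decode(reference, edits) == target      # lossless round trip
```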


Subjects
Data Compression/methods , Information Storage and Retrieval/methods , Sequence Analysis, DNA/methods , Algorithms , Genome , Genomics/methods , High-Throughput Nucleotide Sequencing/methods , Humans , Publications , Software