Results 1 - 7 of 7
1.
IEEE Trans Pattern Anal Mach Intell ; 45(9): 11152-11168, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37074898

ABSTRACT

Inference at the edge using embedded machine learning models involves challenging trade-offs between resource metrics, such as energy and memory footprint, and performance metrics, such as computation time and accuracy. In this work, we go beyond conventional neural-network-based approaches to explore the Tsetlin Machine (TM), an emerging machine learning algorithm that uses learning automata to create propositional logic for classification. We use algorithm-hardware co-design to propose a novel methodology for TM training and inference. The methodology, called REDRESS, comprises independent TM training and inference techniques that reduce the memory footprint of the resulting automata to target low- and ultra-low-power applications. The array of Tsetlin Automata (TA) holds learned information in binary form as bits {0,1}, called excludes and includes, respectively. REDRESS proposes a lossless TA compression method, called include-encoding, which stores only the information associated with includes to achieve over 99% compression. This is enabled by a novel, computationally minimal training procedure, called Tsetlin Automata re-profiling, which improves accuracy and increases the sparsity of the TA, reducing the number of includes and hence the memory footprint. Finally, REDRESS includes an inherently bit-parallel inference algorithm that operates on the optimally trained TA in the compressed domain, without requiring decompression at runtime, to obtain high speedups compared with state-of-the-art Binary Neural Network (BNN) models. In this work, we demonstrate that, using the REDRESS approach, TM outperforms BNN models on all design metrics for five benchmark datasets, viz. MNIST, CIFAR2, KWS6, Fashion-MNIST, and Kuzushiji-MNIST. When implemented on an STM32F746G-DISCO microcontroller, REDRESS obtained speedups and energy savings ranging from 5× to 5700× compared with different BNN models.
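The include-encoding idea can be sketched in a few lines. This is a hypothetical illustration of losslessly storing only include positions, not the paper's actual encoding format: because trained TA arrays are highly sparse (few includes), keeping only the include indices is what yields the large compression ratios.

```python
# Hypothetical sketch of lossless include-encoding for a Tsetlin Automata
# (TA) bit array, assuming the layout described in the abstract:
# 1 = include, 0 = exclude. Excludes are implied by absence, so storing
# only include positions loses no information.

def encode_includes(ta_bits):
    """Store only the indices of includes (1-bits)."""
    return [i for i, b in enumerate(ta_bits) if b == 1]

def decode_includes(include_idx, length):
    """Reconstruct the full binary TA array from include indices."""
    bits = [0] * length
    for i in include_idx:
        bits[i] = 1
    return bits

# A sparse clause: 2 includes out of 16 positions.
clause = [0] * 16
clause[3] = clause[11] = 1
enc = encode_includes(clause)          # [3, 11]
assert decode_includes(enc, 16) == clause
```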

2.
Article in English | MEDLINE | ID: mdl-36086088

ABSTRACT

The Myers bit-vector algorithm for approximate string matching (ASM) is a dynamic-programming approach that takes advantage of bit-parallel operations, and it is one of the fastest algorithms for computing the edit distance between two strings. In computational biology, ASM is used at various stages of the computational pipeline, including proteomics and genomics. The computationally intensive nature of the underlying ASM algorithms, operating on large volumes of data, necessitates their acceleration. In this paper, we propose a novel ASM architecture based on the Myers bit-vector algorithm for parallel searching of multiple query patterns in biological databases. The proposed parallel architecture uses multiple processing engines and hardware/software co-design for an accelerated and energy-efficient implementation of the ASM algorithm in hardware. In comparison with the related literature, the proposed design achieves 22× better performance with a demonstrated energy efficiency of ~500×10⁹ cell updates per joule.
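The underlying bit-parallel recurrence can be illustrated with a minimal pure-Python sketch of the Myers algorithm; the paper's contribution is the parallel multi-engine hardware architecture, which this sketch does not attempt to show.

```python
# Minimal software sketch of the Myers bit-vector edit-distance recurrence.
# Each text character is processed with a constant number of bitwise
# operations on m-bit vectors (m = pattern length), which is what makes
# the algorithm attractive for hardware acceleration.

def myers_edit_distance(pattern, text):
    m = len(pattern)
    if m == 0:
        return len(text)
    # Per-character match bitmasks: bit i is set iff pattern[i] == c.
    peq = {}
    for i, c in enumerate(pattern):
        peq[c] = peq.get(c, 0) | (1 << i)
    mask = (1 << m) - 1
    high = 1 << (m - 1)
    pv, mv = mask, 0        # +1 / -1 vertical delta bit-vectors
    score = m               # edit distance to an empty text prefix
    for c in text:
        eq = peq.get(c, 0)
        xv = eq | mv
        xh = (((eq & pv) + pv) ^ pv) | eq
        ph = mv | ~(xh | pv)
        mh = pv & xh
        if ph & high:
            score += 1
        if mh & high:
            score -= 1
        ph = ((ph << 1) | 1) & mask
        mh = (mh << 1) & mask
        pv = (mh | ~(xv | ph)) & mask
        mv = ph & xv
    return score
```

For example, `myers_edit_distance("kitten", "sitting")` returns 3, the well-known edit distance between those two strings.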


Subject(s)
Computational Biology , Conservation of Energy Resources , Algorithms , Computers , Software
3.
IEEE/ACM Trans Comput Biol Bioinform ; 19(5): 2697-2711, 2022.
Article in English | MEDLINE | ID: mdl-34415836

ABSTRACT

In the assembly pipeline of whole genome sequencing (WGS), read mapping is a widely used method for re-assembling the genome. It employs approximate string matching and dynamic-programming-based algorithms on large volumes of data and associated structures, making it a computationally intensive process. Currently, state-of-the-art data centers for genome sequencing incur substantial setup and energy costs for maintaining hardware, data storage, and cooling systems. To enable low-cost genomics, we propose an energy-efficient architectural methodology for read mapping on a single system-on-chip (SoC) platform. The proposed methodology is based on the q-gram lemma and is designed using a novel architecture for filtering and verification. The filtering algorithm is designed using a parallel sorted q-gram-lemma-based method for the first time, and it is complemented by an in-situ verification routine using the parallel Myers bit-vector algorithm. We have implemented our design on the Zynq Ultrascale+ XCZU9EG MPSoC platform and extensively validated it using real genomic data, demonstrating up to 7.8× energy reduction and up to 13.3× lower resource utilization compared with state-of-the-art software and hardware approaches.
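As a rough software illustration of the q-gram lemma that underpins the filter (not the paper's parallel sorted-q-gram hardware): a candidate reference window can be discarded if it shares fewer than m + 1 - q(e + 1) q-grams with a read of length m, since an occurrence with at most e edits must preserve at least that many q-grams. Counting distinct shared q-grams, as below, is a simplification of the occurrence-counting form of the lemma.

```python
# Simplified q-gram-lemma filter: discard windows that cannot contain an
# approximate match, so the expensive verification step (e.g. Myers
# bit-vector) runs only on surviving candidates.

def qgrams(s, q):
    """The set of distinct q-grams (length-q substrings) of s."""
    return {s[i:i + q] for i in range(len(s) - q + 1)}

def passes_filter(read, window, q, e):
    """True if `window` may contain `read` with at most `e` edits."""
    threshold = len(read) + 1 - q * (e + 1)
    if threshold <= 0:
        return True  # the lemma gives no pruning power here
    shared = len(qgrams(read, q) & qgrams(window, q))
    return shared >= threshold
```

For example, an exact copy of the read always passes, while a window sharing no q-grams with the read is rejected outright.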


Subject(s)
Algorithms , Software , Genome , Genomics , Sequence Analysis, DNA/methods
4.
IEEE/ACM Trans Comput Biol Bioinform ; 18(4): 1426-1438, 2021.
Article in English | MEDLINE | ID: mdl-31562102

ABSTRACT

Genomics has the potential to transform medicine from a reactive form to a personalized, predictive, preventive, and participatory (P4) one. As a Big Data application with a continuously increasing rate of data production, the computational cost of genomics has become a daunting challenge. Most modern computing systems are heterogeneous, consisting of various combinations of computing resources such as CPUs, GPUs, and FPGAs. They require platform-specific software and languages to program, making their simultaneous operation challenging. Existing read mappers and analysis tools in the whole genome sequencing (WGS) pipeline do not scale for such heterogeneity. Additionally, the computational cost of mapping reads is high due to expensive dynamic-programming-based verification, for which optimized implementations are already available; improvements in filtration techniques are therefore needed to reduce the verification overhead. To address these limitations in the mapping element of the WGS pipeline, we propose a Cross-platfOrm Read mApper using opencL (CORAL). CORAL can execute on heterogeneous devices/platforms simultaneously, and it can reduce computation time by suitably distributing the workload without any additional programming effort. We showcase this on a quad-core Intel CPU along with two Nvidia GTX 590 GPUs, distributing the workload judiciously to achieve up to 2× speedup compared with using the CPU alone. To reduce the verification overhead, CORAL dynamically adapts the k-mer length during filtration. We demonstrate competitive timings in comparison with other mappers using real and simulated reads. CORAL is available at: https://github.com/nclaes/CORAL.
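Throughput-proportional workload splitting of the kind a heterogeneous mapper performs can be sketched as follows; the device names and throughput figures are illustrative assumptions, not taken from the paper or from CORAL's source.

```python
# Hypothetical sketch: assign read counts to devices in proportion to
# each device's measured throughput, so all devices finish at roughly
# the same time. Names and numbers are illustrative only.

def split_workload(n_reads, throughputs):
    """Split n_reads across devices proportionally to their throughput."""
    total = sum(throughputs.values())
    shares = {d: int(n_reads * t / total) for d, t in throughputs.items()}
    # Hand any rounding remainder to the fastest device.
    fastest = max(throughputs, key=throughputs.get)
    shares[fastest] += n_reads - sum(shares.values())
    return shares

shares = split_workload(1_000_000, {"cpu": 1.0, "gpu0": 2.5, "gpu1": 2.5})
assert sum(shares.values()) == 1_000_000
```

A real scheduler would refresh the throughput estimates as batches complete, but the proportional split above captures the basic load-balancing idea.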


Subject(s)
Chromosome Mapping/methods , Genomics/methods , Whole Genome Sequencing/methods , Algorithms , Humans , Sequence Alignment
5.
J Electrocardiol ; 49(2): 231-42, 2016.
Article in English | MEDLINE | ID: mdl-26806119

ABSTRACT

BACKGROUND: The vectorcardiogram (VCG) has repeatedly been found useful for clinical investigations. It may not substitute for, but can complement, the standard 12-lead (S12) ECG. There was tremendous research on the VCG in general, and Frank's system in particular, from the 1950s to the mid-1980s; however, over the last three decades it has been dropped as a routine cardiac test. The major reasons were the unconventional electrode placements, which required physician training; the greater number of electrodes involved when used to supplement the S12 system; and the additional hardware complexity, at least in the early days. Although it lost the interest of cardiologists, the engineering community has adopted the VCG as a tool for interdisciplinary research. We envisage that, if an accurate Frank VCG system avoiding the aforementioned limitations is made available, the VCG will complement the S12 system in the diagnosis of cardiovascular diseases (CVDs). METHODS AND RESULTS: In this paper, we propose a methodology to construct the Frank VCG from the S12 system using principal component analysis (PCA). We have compared our work with the state-of-the-art Inverse Dower Transform (IDT) and Kors Transform (KT). The mean R² statistic and correlation coefficient values, obtained by comparing reconstructed and originally measured Frank leads on the CSE multilead (CSEDB) and PhysioNet PTBDB databases, were (73.7%, 0.869) for our proposed method, (57.6%, 0.788) for IDT, and (56.2%, 0.781) for KT. From a remote healthcare perspective, a reduced 2-3 lead system is desirable, and the Frank lead system appears promising, as shown by previous works. However, cardiologists are accustomed to the S12 system due to its widespread usage, and a derived Frank lead system alone might not be sufficient. Hence, to bridge the gap, we present the results of personalized reconstruction of the S12 system from the derived VCG obtained using the proposed PCA-based method, and compare them with the results obtained when the originally measured Frank leads were used. CONCLUSIONS: The proposed methodology, without any modification to the current acquisition system, can be used to obtain the Frank VCG from the S12 system to complement it in CVD diagnosis. Omnipresent computerized ECG machines can readily apply the proposed methodology and thus it can find widespread clinical application.
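The two agreement measures reported above, the R² statistic and the correlation coefficient between a measured Frank lead and its reconstruction, can be computed as in this pure-Python sketch; the sample values are synthetic, for illustration only.

```python
# R^2 and Pearson correlation between a measured lead and its
# reconstruction, the two agreement measures used in the abstract.
import math

def r2_and_corr(measured, recon):
    n = len(measured)
    mean_m = sum(measured) / n
    mean_r = sum(recon) / n
    ss_res = sum((m - r) ** 2 for m, r in zip(measured, recon))
    ss_tot = sum((m - mean_m) ** 2 for m in measured)
    r2 = 1 - ss_res / ss_tot
    cov = sum((m - mean_m) * (r - mean_r) for m, r in zip(measured, recon))
    sd_m = math.sqrt(sum((m - mean_m) ** 2 for m in measured))
    sd_r = math.sqrt(sum((r - mean_r) ** 2 for r in recon))
    corr = cov / (sd_m * sd_r)
    return r2, corr

# Synthetic example: a reconstruction that tracks the measured lead closely.
measured = [0.0, 0.5, 1.2, 0.8, -0.3]
recon    = [0.1, 0.4, 1.1, 0.9, -0.2]
r2, corr = r2_and_corr(measured, recon)
```

A perfect reconstruction gives R² = 1 and correlation 1; the reported (73.7%, 0.869) means the PCA-derived leads explain most, but not all, of the measured-lead variance.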


Subject(s)
Algorithms , Data Interpretation, Statistical , Diagnosis, Computer-Assisted/methods , Electrocardiography/methods , Vectorcardiography/methods , Humans , Principal Component Analysis , Reproducibility of Results , Sensitivity and Specificity
6.
Article in English | MEDLINE | ID: mdl-24110556

ABSTRACT

Fragmented QRS (f-QRS) has been found to have higher sensitivity and/or specificity for several diseases, including remote and acute myocardial infarction and cardiac sarcoidosis, compared with other conventional biomarkers such as the Q-wave and ST-elevation. Several of these diseases lack a reliable biomarker, and hence patients suffering from them have to undergo expensive and sometimes invasive tests for diagnosis, such as myocardial biopsy and cardiac catheterization. This paper proposes the automation of fragmentation detection, which will lead to more reliable diagnosis and therapy by reducing human error and time consumption, thereby alleviating the need for the extensive training required to detect fragmentation. We propose a novel approach to detect the discontinuities present in the QRS complex of the standard 12-lead ECG, known as fragmented QRS, using the discrete wavelet transform (DWT), targeting both hospital-based and remote health monitoring environments. The Fragmentation Detection Algorithm (FDA) was designed and modeled using PhysioNet's PTBDB and, upon reiterative refinement, successfully detected all discontinuities in the QRS complex. The QRS complexes of 31 patients obtained randomly from PhysioNet's PTBDB were examined by two experienced cardiologists, and the measurements obtained were compared with the results of our proposed FDA, leading to 89.8% agreement between them.
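The core property such a detector exploits can be sketched as follows: in a one-level Haar DWT, the detail coefficients spike at discontinuities, which is how notches inside the QRS complex can be flagged. The signal and threshold below are illustrative, not the FDA's actual parameters.

```python
# One-level Haar decomposition (unnormalized averages/differences; an odd
# trailing sample is dropped). Detail coefficients are large in magnitude
# wherever adjacent samples jump, i.e. at discontinuities.

def haar_dwt_level1(x):
    approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    return approx, detail

# A smooth ramp with one sharp notch in the middle.
sig = [0, 1, 2, 3, 4, -4, 6, 7]
_, d = haar_dwt_level1(sig)
# The notch shows up as a large-magnitude detail coefficient (index 2 here).
notches = [i for i, c in enumerate(d) if abs(c) > 2]
```

On the smooth portions of the ramp the detail coefficients stay near zero, so a simple magnitude threshold isolates the discontinuity.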


Subject(s)
Arrhythmias, Cardiac/diagnosis , Electrocardiography/methods , Algorithms , Diagnosis, Computer-Assisted , Humans , Myocardial Contraction , Sensitivity and Specificity , Wavelet Analysis
7.
J R Soc Interface ; 10(89): 20130761, 2013 Dec 06.
Article in English | MEDLINE | ID: mdl-24132202

ABSTRACT

Fragmented QRS (f-QRS) has been proven to be an efficient biomarker for several diseases, including remote and acute myocardial infarction, cardiac sarcoidosis and non-ischaemic cardiomyopathy. It has also been shown to have higher sensitivity and/or specificity than the conventional markers (e.g. Q-wave and ST-elevation), which may even regress or disappear with time. Patients with such diseases have to undergo expensive and sometimes invasive tests for diagnosis. Automated detection of f-QRS, followed by identification of its various morphologies in addition to conventional ECG feature extraction (e.g. P, QRS and T amplitude and duration), will lead to more reliable diagnosis, therapy and disease prognosis than the state-of-the-art approaches, and will thereby be of significant clinical importance for hospital-based and emerging remote health monitoring environments as well as for implanted ICD devices. An automated algorithm for the detection of f-QRS from the ECG and the identification of its various morphologies is proposed in this work, which, to the best of our knowledge, is the first of its kind. Using our recently proposed time-domain morphology and gradient-based ECG feature extraction algorithm, the QRS complex is extracted, and a discrete wavelet transform (DWT) with one level of decomposition, using the 'Haar' wavelet, is applied to it to detect the presence of fragmentation. The detail DWT coefficients were examined to formulate postulates for detecting all types of morphologies reported in the literature. To model and verify the algorithm, PhysioNet's PTB database was used. Forty patients were randomly selected from the database, and their ECGs were examined by two experienced cardiologists; the results were compared with those obtained from the algorithm. Of the 40 patients, 31 were considered appropriate for comparison by the two cardiologists, and 334 of 372 (89.8%) leads from the chosen 31 patients agreed with our proposed algorithm. The sensitivity and specificity values obtained for the detection of f-QRS were 0.897 and 0.899, respectively. Automation will speed up the detection of fragmentation, reduce the human error involved, and allow the method to be implemented for hospital-based and remote monitoring as well as ICD devices.
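The reported sensitivity and specificity follow from a standard 2×2 confusion table. The counts below are illustrative values chosen only to roughly reproduce the reported figures; they are not the study's actual lead counts.

```python
# Sensitivity and specificity from a 2x2 confusion table.
# tp/fn/tn/fp values here are illustrative, not from the study.

def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # fraction of fragmented leads detected
    specificity = tn / (tn + fp)   # fraction of clean leads correctly passed
    return sensitivity, specificity

se, sp = sens_spec(tp=87, fn=10, tn=89, fp=10)  # ≈ 0.897 and 0.899
```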


Subject(s)
Cardiovascular Diseases/diagnosis , Electrocardiography/methods , Wavelet Analysis , Algorithms , Biomarkers , Biomedical Engineering , Cardiovascular Diseases/physiopathology , Humans , Sensitivity and Specificity , Signal Processing, Computer-Assisted