Results 1 - 2 of 2
1.
IEEE Trans Biomed Circuits Syst ; 18(3): 523-538, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38157470

ABSTRACT

In this article, we introduce GEMA, a genome exact mapping accelerator based on learned indexes, specifically designed for FPGA implementation. GEMA utilizes a machine learning (ML) algorithm to precisely locate the exact position of read sequences within the original sequence. To enhance the accuracy of the trained ML model, we incorporate data augmentation and data-distribution-aware partitioning techniques. Additionally, we present an efficient yet low-overhead error recovery technique. To map long reads more efficiently, we propose a speculative prefetching approach, which reduces the required memory bandwidth. Furthermore, we suggest an FPGA-based architecture for implementing the proposed mapping accelerator, optimizing the accesses to off-chip memory. Our studies demonstrate that GEMA achieves up to 1.36× higher speed for short reads compared to the corresponding results reported in recently published exact mapping accelerators. Moreover, GEMA achieves up to ~22× faster mapping of long reads compared to the available results for the longest mapped reads using these accelerators.
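The core idea of a learned index for exact mapping can be illustrated in a small sketch: sorted k-mer keys of the reference are fit with a model (here a plain least-squares line, a stand-in for the paper's trained ML model), a query's position is predicted, and a bounded local search around the prediction recovers the exact hit, playing the role of the error recovery step. The function names and the linear model are illustrative assumptions, not GEMA's actual design.

```python
# Hypothetical learned-index sketch for exact k-mer mapping.
# A linear model predicts the rank of an encoded k-mer in the sorted
# key list; a bounded local search corrects the prediction error.
import bisect

def encode(kmer):
    """Encode a k-mer as an integer key (2 bits per base)."""
    code = {"A": 0, "C": 1, "G": 2, "T": 3}
    v = 0
    for b in kmer:
        v = (v << 2) | code[b]
    return v

def build_index(reference, k):
    """Index every k-mer of the reference, sorted by encoded key."""
    entries = sorted((encode(reference[i:i + k]), i)
                     for i in range(len(reference) - k + 1))
    keys = [e[0] for e in entries]
    # Least-squares line fit, key -> rank (the "learned" model).
    n = len(keys)
    mean_x = sum(keys) / n
    mean_y = (n - 1) / 2
    var = sum((x - mean_x) ** 2 for x in keys)
    slope = (sum((x - mean_x) * (i - mean_y) for i, x in enumerate(keys)) / var
             if var else 0.0)
    intercept = mean_y - slope * mean_x
    # Maximum prediction error bounds the local-search window.
    err = max(abs(i - (slope * x + intercept)) for i, x in enumerate(keys))
    return entries, keys, slope, intercept, int(err) + 1

def map_read(read, index, k):
    """Predict a rank for the read's leading k-mer, then search locally."""
    entries, keys, slope, intercept, err = index
    key = encode(read[:k])
    guess = int(slope * key + intercept)
    lo = max(0, guess - err)
    hi = min(len(keys), guess + err + 1)
    pos = bisect.bisect_left(keys, key, lo, hi)
    if pos < len(keys) and keys[pos] == key:
        return entries[pos][1]  # exact position in the reference
    return None
```

In this toy version the search window is the model's worst-case error over the whole index; the partitioning described in the abstract would shrink that window per partition, which is what makes the approach fast in practice.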


Subjects
Algorithms; Machine Learning; Humans; Sequence Analysis, DNA/methods; Sequence Analysis, DNA/instrumentation; Chromosome Mapping/methods; Chromosome Mapping/instrumentation
2.
IEEE Trans Neural Netw Learn Syst ; 34(11): 8284-8296, 2023 Nov.
Article in English | MEDLINE | ID: mdl-35188894

ABSTRACT

In this work, to limit the number of required attention inference hops in memory-augmented neural networks, we propose an online adaptive approach called [Formula: see text]-memory-augmented neural network (MANN). A small neural network classifier determines an adequate number of attention inference hops for each input query, eliminating a large number of unnecessary computations in extracting the correct answer. In addition, to further lower the computations of [Formula: see text]-MANN, we suggest pruning the weights of the final fully connected (FC) layers. To this end, two pruning approaches are developed: one with negligible accuracy loss and one with a controllable loss in the final accuracy. The efficacy of the technique is assessed by applying it to two different MANN structures and two question-answering (QA) datasets. The analytical assessment reveals, for the two benchmarks, on average 50% fewer computations than the corresponding baseline MANNs at the cost of less than 1% accuracy loss. In addition, when used along with the previously published zero-skipping technique, a computation-count reduction of approximately 70% is achieved. Finally, when the proposed approach (without zero skipping) is implemented on CPU and GPU platforms, an average runtime reduction of 43% is achieved.
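The two ideas in the abstract, adaptive hop selection and FC-weight pruning, can be sketched as follows. A tiny linear "hop classifier" inspects the query and picks how many memory-attention hops to run, instead of the fixed hop count of a baseline MANN, and a magnitude-pruning helper zeroes the smallest FC weights. All weights below are random stand-ins, not the paper's trained model, and the function names are illustrative assumptions.

```python
# Hedged sketch of adaptive attention-hop selection in a MANN,
# plus simple magnitude pruning of a fully connected weight matrix.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_hop(query, memory):
    """One hop: attend over memory slots, add the read vector to the query."""
    scores = softmax(memory @ query)        # (slots,)
    read = scores @ memory                  # (dim,)
    return query + read

def predict_hops(query, w_cls, max_hops):
    """Small linear classifier choosing the hop count for this query."""
    logits = w_cls @ query                  # (max_hops,)
    return int(np.argmax(logits)) + 1       # 1..max_hops

def answer(query, memory, w_cls, w_out, max_hops=4):
    """Run only as many hops as the classifier predicts, then read out."""
    hops = predict_hops(query, w_cls, max_hops)
    state = query
    for _ in range(hops):                   # adaptive, not fixed, hop count
        state = attention_hop(state, memory)
    return int(np.argmax(w_out @ state)), hops

def prune_fc(w, sparsity):
    """Magnitude pruning: zero the smallest-|w| fraction of FC weights."""
    k = int(w.size * sparsity)
    thresh = np.partition(np.abs(w).ravel(), k)[k] if k else 0.0
    return np.where(np.abs(w) < thresh, 0.0, w)
```

Queries that the classifier deems easy stop after one hop, which is where the reported computation savings come from; the controllable-loss pruning variant of the paper would tune `sparsity` against a validation accuracy budget.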
