Results 1 - 3 of 3
1.
Nat Commun ; 15(1): 1974, 2024 Mar 04.
Article in English | MEDLINE | ID: mdl-38438350

ABSTRACT

Artificial Intelligence (AI) is currently experiencing a boom driven by deep learning (DL) techniques, which rely on networks of connected simple computing units operating in parallel. The low communication bandwidth between memory and processing units in conventional von Neumann machines does not support the requirements of emerging applications that rely extensively on large sets of data. More recent computing paradigms, such as high parallelization and near-memory computing, help alleviate the data communication bottleneck to some extent, but paradigm-shifting concepts are required. Memristors, a novel beyond-complementary-metal-oxide-semiconductor (CMOS) technology, are a promising choice for memory devices because their unique intrinsic device-level properties enable both storing and computing with a small, massively parallel footprint at low power. In theory, this translates directly into a major boost in energy efficiency and computational throughput, but various practical challenges remain. In this work we review the latest efforts toward hardware-based memristive artificial neural networks (ANNs), describing in detail the working principles of each building block and the different design alternatives with their respective advantages and disadvantages, as well as the tools required for accurate estimation of performance metrics. Ultimately, we aim to provide a comprehensive protocol covering the materials and methods involved in memristive neural networks, both for those starting to work in this field and for experts looking for a holistic approach.
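The core primitive behind the memristive ANN blocks this review covers is the analog matrix-vector multiply performed inside a crossbar: each cross-point conductance stores a weight, and Ohm's and Kirchhoff's laws accumulate the products as column currents. Below is a minimal NumPy sketch of that idea only; the crossbar size, conductance range, and read voltages are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative sketch: a memristive crossbar performs an analog matrix-vector
# multiply in one step. Each cross-point conductance G[i, j] encodes a weight;
# applying row voltages V yields column currents I = G^T @ V
# (Ohm's law per device, Kirchhoff's current law per column).

rng = np.random.default_rng(0)

n_rows, n_cols = 4, 3                            # hypothetical crossbar size
G = rng.uniform(1e-6, 1e-4, (n_rows, n_cols))    # conductances in siemens (assumed range)
V = rng.uniform(0.0, 0.2, n_rows)                # read voltages in volts (assumed range)

I = G.T @ V   # column currents in amperes: the multiply-accumulate happens in place
print(I)
```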

2.
Front Comput Neurosci ; 15: 705050, 2021.
Article in English | MEDLINE | ID: mdl-34650420

ABSTRACT

The human brain can be considered a complex, dynamic, and recurrent neural network. There are several neural-network models of the human brain that cover sensory through cortical information processing. The large majority of these models include feedback mechanisms that are hard to formalize for realistic applications. Recurrent neural networks and long short-term memory (LSTM) networks draw inspiration from these neuronal feedback networks. LSTM networks avoid the vanishing and exploding gradient problems faced by simple recurrent neural networks and are able to process order-dependent data. Such recurrent neural units can be replicated in hardware and interfaced with analog sensors for efficient and miniaturized implementations of intelligent processing. Implementation of analog memristive LSTM hardware is an open research problem that can offer the advantages of continuous-domain analog computing with a relatively low on-chip area compared with a digital-only implementation. Designed for solving time-series prediction problems, the overall architectures and circuits were tested with TSMC 0.18 µm CMOS technology and hafnium-oxide (HfO2) based memristor crossbars. Extensive circuit-based SPICE simulations, with over 3,500 runs for inference only and 300 system-level simulations for training and inference, were performed to benchmark the system performance of the proposed implementations. The analysis includes Monte Carlo simulations of memristor conductance variability and crossbar parasitics, so that the non-idealities of hybrid CMOS-memristor circuits are taken into account.
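For reference, the cell update this abstract alludes to is the standard LSTM step; the additive cell-state path is what sidesteps the vanishing and exploding gradients of simple RNNs. The sketch below is a minimal software version of that step in NumPy, not the paper's analog circuit, and all dimensions and random weights are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One standard LSTM time step. W, U, b stack the four gates (i, f, o, g)."""
    z = W @ x + U @ h_prev + b
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c_prev + i * g          # cell state: additive (gradient-friendly) update
    h = o * np.tanh(c)              # hidden state passed to the next step
    return h, c

# Illustrative dimensions and random parameters (not from the paper)
d_in, d_h = 3, 5
rng = np.random.default_rng(1)
W = rng.standard_normal((4 * d_h, d_in))
U = rng.standard_normal((4 * d_h, d_h))
b = np.zeros(4 * d_h)
h, c = lstm_step(rng.standard_normal(d_in), np.zeros(d_h), np.zeros(d_h), W, U, b)
print(h, c)
```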

3.
IEEE Trans Biomed Circuits Syst ; 14(2): 164-172, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31794405

ABSTRACT

Hierarchical, modular, and sparse information processing are signature characteristics of biological neural networks. These aspects have been the backbone of several brain-like artificial neural network designs, including Hierarchical Temporal Memory (HTM). The main contribution of this work is showing that a Convolutional Neural Network (CNN) combined with Long Short-Term Memory (LSTM) can be a good alternative for implementing the hierarchy, modularity, and sparsity of information processing. To demonstrate this, we compare the performance of CNN-LSTM and HTM on a face recognition problem with a small training set. We also present the analog CMOS-memristor circuit blocks required to implement such a scheme. The presented memristive implementations of the CNN-LSTM architecture are easier to implement and train, and offer higher recognition performance than the HTM. The study also includes memristor variability and failure analysis.
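A common way analog memristive blocks like these encode signed weights is as a differential pair of conductances, and device variability is then studied with Monte Carlo sampling over those conductances. The sketch below illustrates only that general idea in NumPy; the weight mapping, the 5% conductance spread, and all sizes are assumptions for illustration, not the authors' circuit or data.

```python
import numpy as np

rng = np.random.default_rng(2)

def to_conductance_pair(w, g_max=1e-4):
    """Map weights in [-1, 1] to a (G_plus, G_minus) pair so that w ~ G_plus - G_minus."""
    g_plus = np.where(w > 0, w, 0.0) * g_max
    g_minus = np.where(w < 0, -w, 0.0) * g_max
    return g_plus, g_minus

w = rng.uniform(-1, 1, (8, 4))          # hypothetical weight matrix
x = rng.uniform(0.0, 0.2, 8)            # hypothetical input read voltages
g_p, g_m = to_conductance_pair(w)

outputs = []
for _ in range(1000):                   # Monte Carlo over device-to-device variability
    noise_p = rng.normal(1.0, 0.05, g_p.shape)   # assumed 5% conductance spread
    noise_m = rng.normal(1.0, 0.05, g_m.shape)
    y = (g_p * noise_p).T @ x - (g_m * noise_m).T @ x
    outputs.append(y)

print(np.std(outputs, axis=0))          # output spread induced by variability
```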


Subject(s)
Algorithms , Neural Networks, Computer , Semiconductors , Automated Facial Recognition , Databases, Factual , Equipment Design , Humans , Models, Neurological