ABSTRACT
Cytomorphic engineering attempts to study the cellular behavior of biological systems using electronics; as such, it is analogous to neuromorphic engineering, which studies neurobiological concepts with electronics. To date, digital and analog translinear electronics have commonly been used to design cytomorphic circuits; such circuits could benefit greatly from reducing the area of the digital memory via memristive circuits. In this article, we propose a novel approach that exploits the Boltzmann-exponential stochastic transport of ionic species through insulators to naturally model the nonlinear and stochastic behavior of biochemical reactions. We first show that two-terminal memristive devices can capture this nonlinear and stochastic behavior. We then present several building blocks based on analog memristive circuits that inherently model the biophysical mechanisms of gene expression: induction by small molecules, activation and repression by transcription factors, biological promoters, cooperative binding, and transcriptional and translational regulation. Finally, we combine these building blocks into complex mixed-signal networks that can simulate the delay-induced oscillator and the p53-mdm2 interaction in the cancer signaling pathway. Our approach can provide a fast and simple emulative framework for studying genetic circuits and arbitrary large-scale biological networks in systems and synthetic biology. One challenge is that memristive devices subjected to frequent learning and programming do not have the same longevity as traditional transistor-based electron-transport devices, and they operate with significantly slower time constants, which can limit emulation speed.
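The abstract's claim is that device physics can emulate the exponentially distributed, stochastic firing of biochemical reactions. For reference, that target behavior is what a Gillespie stochastic simulation algorithm reproduces in software. Below is a minimal, self-contained sketch of such a simulation for a simple birth-death gene-expression model; the rate constants and the model itself are illustrative assumptions, not taken from the article.

```python
import math
import random

def gillespie_birth_death(k_prod=2.0, k_deg=0.1, t_end=100.0, seed=42):
    """Gillespie SSA for a birth-death model of protein expression.

    Proteins are produced at constant rate k_prod and each molecule
    degrades at rate k_deg (illustrative parameters). The steady-state
    mean copy number is k_prod / k_deg.
    """
    rng = random.Random(seed)
    t, n = 0.0, 0              # time and protein copy number
    trajectory = [(t, n)]
    while t < t_end:
        a_prod = k_prod        # production propensity
        a_deg = k_deg * n      # degradation propensity
        a_total = a_prod + a_deg
        # Exponentially distributed waiting time to the next reaction --
        # the same Boltzmann-exponential statistics the circuits exploit.
        t += -math.log(1.0 - rng.random()) / a_total
        # Choose which reaction fires, weighted by its propensity.
        if rng.random() * a_total < a_prod:
            n += 1
        else:
            n -= 1
        trajectory.append((t, n))
    return trajectory

traj = gillespie_birth_death()
```

With the parameters above, late-time copy numbers fluctuate stochastically around the mean k_prod / k_deg = 20, which is the kind of noisy trajectory the memristive building blocks are meant to emulate directly in hardware.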
Subjects
Biomimetics/instrumentation, Transistors, Electronic, Equipment Design, Molecular Biology/instrumentation, Synthetic Biology/instrumentation

ABSTRACT
Learning in multilayer neural networks (MNNs) relies on continuous updating of large matrices of synaptic weights by local rules. Such locality can be exploited for massive parallelism when implementing MNNs in hardware. However, these update rules require a multiply-accumulate operation for each synaptic weight, which is challenging to implement compactly using CMOS. In this paper, a method for performing these update operations simultaneously (incremental outer products) using memristor-based arrays is proposed. The method exploits the fact that, for sufficiently small increments, the conductivity change of a memristor in response to a voltage pulse is approximately proportional to the pulse duration multiplied by the pulse magnitude. The proposed method uses a synaptic circuit composed of a small number of components per synapse: one memristor and two CMOS transistors. This circuit is expected to consume between 2% and 8% of the area and static power of previous CMOS-only hardware alternatives. Such a circuit can compactly implement hardware MNNs trainable by scalable algorithms based on online gradient descent (e.g., backpropagation). The utility and robustness of the proposed memristor-based circuit are demonstrated on standard supervised learning tasks.
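The incremental outer-product idea can be sketched numerically: encode each row's error term as a pulse magnitude and each column's input term as a pulse duration, so that in the small-increment regime every memristor in the crossbar shifts its conductance by roughly magnitude × duration, yielding the full gradient-descent update ΔG ≈ η · error · xᵀ in one parallel step. The linear device model and all names below are illustrative assumptions, not the paper's exact circuit.

```python
import numpy as np

def outer_product_update(G, error, x, eta=0.01):
    """Sketch of a one-step parallel outer-product update on a crossbar.

    G     : (rows, cols) array of memristor conductances
    error : per-row error term, encoded as pulse magnitude
    x     : per-column input term, encoded as pulse duration
    eta   : learning rate folded into the pulse magnitudes

    Assumes an idealized device whose conductance increment equals
    pulse magnitude times pulse duration (valid only for small steps).
    """
    magnitudes = eta * error           # one pulse magnitude per row
    durations = x                      # one pulse duration per column
    # Each device sees its row's magnitude for its column's duration,
    # so the whole array receives the rank-1 update simultaneously.
    delta_G = np.outer(magnitudes, durations)
    return G + delta_G

G = np.zeros((2, 3))
G = outer_product_update(G,
                         error=np.array([1.0, -0.5]),
                         x=np.array([1.0, 0.0, 2.0]))
```

After the call, G approximates eta * outer(error, x), i.e. the same update that would otherwise require one multiply-accumulate per synapse; here it costs one pulse per row and per column.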