Results 1 - 20 of 33
1.
Neuroimage ; 223: 117282, 2020 12.
Article in English | MEDLINE | ID: mdl-32828921

ABSTRACT

Hearing-impaired people often struggle to follow the speech stream of an individual talker in noisy environments. Recent studies show that the brain tracks attended speech and that the attended talker can be decoded from neural data on a single-trial level. This raises the possibility of "neuro-steered" hearing devices in which the brain-decoded intention of a hearing-impaired listener is used to enhance the voice of the attended speaker from a speech separation front-end. So far, methods that use this paradigm have focused on optimizing the brain decoding and the acoustic speech separation independently. In this work, we propose a novel framework called brain-informed speech separation (BISS) in which the information about the attended speech, as decoded from the subject's brain, is directly used to perform speech separation in the front-end. We present a deep learning model that uses neural data to extract the clean audio signal that a listener is attending to from a multi-talker speech mixture. We show that the framework can be applied successfully to the decoded output from either invasive intracranial electroencephalography (iEEG) or non-invasive electroencephalography (EEG) recordings from hearing-impaired subjects. It also results in improved speech separation, even in scenes with background noise. The generalization capability of the system renders it a perfect candidate for neuro-steered hearing-assistive devices.


Subject(s)
Brain/physiology; Electroencephalography; Signal Processing, Computer-Assisted; Speech Acoustics; Speech Perception/physiology; Acoustic Stimulation; Adult; Algorithms; Deep Learning; Hearing Loss/physiopathology; Humans; Middle Aged
2.
Neural Comput ; 32(1): 261-279, 2020 01.
Article in English | MEDLINE | ID: mdl-31703173

ABSTRACT

It is well known in machine learning that models trained on a training set generated by a probability distribution function perform far worse on test sets generated by a different probability distribution function. In the limit, it is possible that a continuum of probability distribution functions might have generated the observed test set data; a desirable property of a learned model in that case is its ability to describe most of the probability distribution functions from the continuum equally well. This requirement naturally leads to sampling methods from the continuum of probability distribution functions that lead to the construction of optimal training sets. We study the sequential prediction of Ornstein-Uhlenbeck processes that form a parametric family. We find empirically that a simple deep network trained on optimally constructed training sets using the methods described in this letter can be robust to changes in the test set distribution.
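As a toy illustration of the parametric family studied here, the following sketch simulates Ornstein-Uhlenbeck paths with the Euler-Maruyama scheme and draws a small training set across a range of mean-reversion rates; all parameter values are illustrative assumptions, not those used in the letter.

```python
import numpy as np

def simulate_ou(theta=1.0, mu=0.0, sigma=0.5, x0=0.0, dt=0.01, n_steps=1000, rng=None):
    """Euler-Maruyama simulation of the Ornstein-Uhlenbeck SDE
    dX = theta * (mu - X) dt + sigma dW."""
    rng = np.random.default_rng(rng)
    x = np.empty(n_steps + 1)
    x[0] = x0
    noise = rng.normal(0.0, np.sqrt(dt), n_steps)
    for t in range(n_steps):
        x[t + 1] = x[t] + theta * (mu - x[t]) * dt + sigma * noise[t]
    return x

# A training set drawn from a continuum of parameter settings:
paths = [simulate_ou(theta=th, rng=0) for th in np.linspace(0.5, 2.0, 5)]
```

Varying `theta` across paths mimics sampling from the continuum of generating distributions that the letter's training-set construction targets.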

3.
Neural Comput ; 27(4): 845-97, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25734494

ABSTRACT

This letter presents a spike-based model that employs neurons with functionally distinct dendritic compartments for classifying high-dimensional binary patterns. The synaptic inputs arriving on each dendritic subunit are nonlinearly processed before being linearly integrated at the soma, giving the neuron the capacity to perform a large number of input-output mappings. The model uses sparse synaptic connectivity, where each synapse takes a binary value. The optimal connection pattern of a neuron is learned by using a simple hardware-friendly, margin-enhancing learning algorithm inspired by the mechanism of structural plasticity in biological neurons. The learning algorithm groups correlated synaptic inputs on the same dendritic branch. Since the learning results in modified connection patterns, it can be incorporated into current event-based neuromorphic systems with little overhead. This work also presents a branch-specific spike-based version of this structural plasticity rule. The proposed model is evaluated on benchmark binary classification problems, and its performance is compared against that achieved using support vector machine and extreme learning machine techniques. Our proposed method attains comparable performance while using 10% to 50% fewer computational resources than the other reported techniques.


Subject(s)
Action Potentials/physiology; Dendrites/physiology; Models, Neurological; Neurons/cytology; Support Vector Machine; Synapses/physiology; Algorithms; Animals; Nonlinear Dynamics
4.
Neural Comput ; 27(10): 2231-59, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26313599

ABSTRACT

This letter addresses the problem of separating two speakers from a single microphone recording. Three linear methods are tested for source separation, all of which operate directly on sound spectrograms: (1) eigenmode analysis of covariance difference to identify spectro-temporal features associated with large variance for one source and small variance for the other source; (2) maximum likelihood demixing in which the mixture is modeled as the sum of two gaussian signals and maximum likelihood is used to identify the most likely sources; and (3) suppression-regression, in which autoregressive models are trained to reproduce one source and suppress the other. These linear approaches are tested on the problem of separating a known male from a known female speaker. The performance of these algorithms is assessed in terms of the residual error of estimated source spectrograms, waveform signal-to-noise ratio, and perceptual evaluation of speech quality scores. This work shows that the algorithms compare favorably to nonlinear approaches such as nonnegative sparse coding in terms of simplicity, performance, and suitability for real-time implementations, and they provide benchmark solutions for monaural source separation tasks.
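Method (1) above can be sketched in a few lines: form the covariance matrix of each speaker's spectrogram frames and eigendecompose their difference, so that modes with large positive eigenvalues capture features with high variance for one source and low variance for the other. The toy "spectrograms" below are synthetic assumptions, not the paper's data.

```python
import numpy as np

def covariance_difference_modes(S1, S2):
    """Eigenmodes of the covariance difference between two sources.

    S1, S2: spectrograms of shape (n_freq, n_frames); columns are spectra.
    Returns eigenvalues (descending) and eigenvectors of C1 - C2.
    Modes with large positive eigenvalues vary strongly under source 1
    and weakly under source 2, and vice versa for negative eigenvalues.
    """
    C1 = np.cov(S1)
    C2 = np.cov(S2)
    evals, evecs = np.linalg.eigh(C1 - C2)  # symmetric matrix -> eigh
    order = np.argsort(evals)[::-1]
    return evals[order], evecs[:, order]

rng = np.random.default_rng(0)
# Toy sources: source 1 varies mostly in the low bins, source 2 in the high bins
S1 = rng.normal(scale=[3.0, 3.0, 0.3, 0.3], size=(200, 4)).T
S2 = rng.normal(scale=[0.3, 0.3, 3.0, 3.0], size=(200, 4)).T
evals, evecs = covariance_difference_modes(S1, S2)
```

Projecting a mixture spectrogram onto the top and bottom eigenmodes then separates energy belonging predominantly to one speaker or the other.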

5.
Proc Natl Acad Sci U S A ; 107(10): 4722-7, 2010 Mar 09.
Article in English | MEDLINE | ID: mdl-20167805

ABSTRACT

It is widely believed that sensory and motor processing in the brain is based on simple computational primitives rooted in cellular and synaptic physiology. However, many gaps remain in our understanding of the connections between neural computations and biophysical properties of neurons. Here, we show that synaptic spike-time-dependent plasticity (STDP) combined with spike-frequency adaptation (SFA) in a single neuron together approximate the well-known perceptron learning rule. Our calculations and integrate-and-fire simulations reveal that delayed inputs to a neuron endowed with STDP and SFA precisely instruct neural responses to earlier arriving inputs. We demonstrate this mechanism on a developmental example of auditory map formation guided by visual inputs, as observed in the external nucleus of the inferior colliculus (ICX) of barn owls. The interplay of SFA and STDP in model ICX neurons precisely transfers the tuning curve from the visual modality onto the auditory modality, demonstrating a useful computation for multimodal and sensory-guided processing.
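For reference, the classic perceptron learning rule that STDP and SFA are argued to approximate can be sketched as follows; the spiking mechanics themselves are not modeled here, and the toy data are assumptions.

```python
import numpy as np

def perceptron_train(X, y, lr=0.1, epochs=50):
    """Classic perceptron rule: w += lr * (target - output) * x.
    The paper argues that STDP combined with spike-frequency adaptation
    approximates this update in a single neuron; here we sketch only the
    abstract rule itself."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(X, y):
            out = 1.0 if x @ w + b > 0 else 0.0
            err = t - out          # nonzero only on misclassification
            w += lr * err * x
            b += lr * err
    return w, b

# Linearly separable toy problem: class 1 iff x0 > x1
X = np.array([[2.0, 1.0], [3.0, 0.5], [1.0, 2.0], [0.5, 3.0]])
y = np.array([1.0, 1.0, 0.0, 0.0])
w, b = perceptron_train(X, y)
preds = [(1.0 if x @ w + b > 0 else 0.0) for x in X]
```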


Subject(s)
Inferior Colliculi/physiology; Learning/physiology; Neural Networks, Computer; Neuronal Plasticity/physiology; Strigiformes/physiology; Action Potentials/physiology; Adaptation, Physiological/physiology; Algorithms; Animals; Auditory Perception/physiology; Brain Mapping; Inferior Colliculi/cytology; Membrane Potentials/physiology; Mesencephalon/cytology; Mesencephalon/physiology; Models, Neurological; Neurons/cytology; Neurons/physiology; Visual Perception/physiology
6.
IEEE Trans Biomed Circuits Syst ; 17(4): 808-817, 2023 08.
Article in English | MEDLINE | ID: mdl-37318976

ABSTRACT

Sweat secreted by the human eccrine sweat glands can provide valuable biomarker information during exercise. Real-time non-invasive biomarker recordings are therefore useful for evaluating the physiological conditions of an athlete such as their hydration status during endurance exercise. This work describes a wearable sweat biomonitoring patch incorporating printed electrochemical sensors into a plastic microfluidic sweat collector and data analysis that shows the real-time recorded sweat biomarkers can be used to predict a physiological biomarker. The system was placed on subjects carrying out an hour-long exercise session and results were compared to a wearable system using potentiometric robust silicon-based sensors and to commercially available HORIBA-LAQUAtwin devices. Both prototypes were applied to the real-time monitoring of sweat during cycling sessions and showed stable readings for around an hour. Analysis of the sweat biomarkers collected from the printed patch prototype shows that their real-time measurements correlate well (correlation coefficient ≥ 0.65) with other physiological biomarkers such as heart rate and regional sweat rate collected in the same session. We show, for the first time, that the real-time sweat sodium and potassium concentration biomarker measurements from the printed sensors can be used to predict the core body temperature with root mean square error (RMSE) of 0.02 °C, which is 71% lower compared to the use of only the physiological biomarkers. These results show that these wearable patch technologies are promising for real-time portable sweat monitoring analytical platforms, especially for athletes performing endurance exercise.


Subject(s)
Biosensing Techniques; Wearable Electronic Devices; Humans; Sweat/chemistry; Body Temperature; Electrolytes; Biomarkers/analysis
7.
Article in English | MEDLINE | ID: mdl-35687629

ABSTRACT

Long short-term memory (LSTM) recurrent networks are frequently used for tasks involving time-sequential data, such as speech recognition. Unlike previous LSTM accelerators that either exploit spatial weight sparsity or temporal activation sparsity, this article proposes a new accelerator called "Spartus" that exploits spatio-temporal sparsity to achieve ultralow latency inference. Spatial sparsity is induced using a new column-balanced targeted dropout (CBTD) structured pruning method, producing structured sparse weight matrices for a balanced workload. The pruned networks running on Spartus hardware achieve weight sparsity levels of up to 96% and 94% with negligible accuracy loss on the TIMIT and the Librispeech datasets. To induce temporal sparsity in LSTM, we extend the previous DeltaGRU method to the DeltaLSTM method. Combining spatio-temporal sparsity with CBTD and DeltaLSTM saves on weight memory access and associated arithmetic operations. The Spartus architecture is scalable and supports real-time online speech recognition when implemented on small and large FPGAs. Spartus per-sample latency for a single DeltaLSTM layer of 1024 neurons averages 1 µs. Exploiting spatio-temporal sparsity on our test LSTM network using the TIMIT dataset leads to 46× speedup of Spartus over its theoretical hardware performance to achieve 9.4-TOp/s effective batch-1 throughput and 1.1-TOp/s/W power efficiency.

8.
IEEE J Biomed Health Inform ; 26(9): 4725-4732, 2022 09.
Article in English | MEDLINE | ID: mdl-35749337

ABSTRACT

Improper hydration routines can reduce athletic performance. Recent studies show that data from noninvasive biomarker recordings can help to evaluate the hydration status of subjects during endurance exercise. These studies are usually carried out on multiple subjects. In this work, we present the first study on predicting hydration status using machine learning models from single-subject experiments, which involve 32 exercise sessions of constant moderate intensity performed with and without fluid intake. During exercise, we measured four noninvasive physiological and sweat biomarkers including heart rate, core temperature, sweat sodium concentration, and whole-body sweat rate. Sweat sodium concentration was measured from six body regions using absorbent patches. We used three machine learning models to determine the percentage of body weight loss as an indicator of dehydration with these biomarkers and compared the prediction accuracy. The results on this single subject show that these models gave similar mean absolute errors, while in general the nonlinear models slightly outperformed the linear model in most of the experiments. The prediction accuracy of using the whole-body sweat rate or heart rate was higher than using core temperature or sweat sodium concentration. In addition, the model trained on the sweat sodium concentration collected from the arms gave slightly better accuracy than from the other five body regions. This exploratory work paves the way for the use of these machine learning models to develop personalized health monitoring together with emerging, noninvasive wearable sensor devices.
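A minimal sketch of the linear baseline on synthetic data (the feature layout and values are assumptions, not the study's recordings): ordinary least squares mapping the four biomarkers to a body-weight-loss proxy, scored by mean absolute error.

```python
import numpy as np

def fit_linear(X, y):
    """Ordinary least squares with a bias column; a minimal stand-in
    for the linear model compared in the study."""
    A = np.hstack([X, np.ones((X.shape[0], 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, X):
    A = np.hstack([X, np.ones((X.shape[0], 1))])
    return A @ coef

rng = np.random.default_rng(1)
# Synthetic sessions: columns stand in for heart rate, core temperature,
# sweat sodium concentration, and whole-body sweat rate (standardized)
X = rng.normal(size=(32, 4))
true_w = np.array([0.5, 0.2, 0.1, 0.8])
y = X @ true_w + 0.05 * rng.normal(size=32)   # % body-weight-loss proxy
coef = fit_linear(X, y)
mae = np.mean(np.abs(predict(coef, X) - y))
```

The nonlinear models in the study (not shown) would replace `fit_linear` while keeping the same features and error metric.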


Subject(s)
Sweat; Sweating; Biomarkers; Humans; Machine Learning; Sodium
9.
Front Neurosci ; 15: 771480, 2021.
Article in English | MEDLINE | ID: mdl-34955722

ABSTRACT

Liquid analysis is key to track conformity with the strict process quality standards of sectors like food, beverage, and chemical manufacturing. In order to analyse product qualities online and at the very point of interest, automated monitoring systems must satisfy strong requirements in terms of miniaturization, energy autonomy, and real time operation. Toward this goal, we present the first implementation of artificial taste running on neuromorphic hardware for continuous edge monitoring applications. We used a solid-state electrochemical microsensor array to acquire multivariate, time-varying chemical measurements, employed temporal filtering to enhance sensor readout dynamics, and deployed a rate-based, deep convolutional spiking neural network to efficiently fuse the electrochemical sensor data. To evaluate performance we created MicroBeTa (Microsensor Beverage Tasting), a new dataset for beverage classification incorporating 7 h of temporal recordings performed over 3 days, including sensor drifts and sensor replacements. Our implementation of artificial taste is 15× more energy efficient on inference tasks than similar convolutional architectures running on other commercial, low power edge-AI inference devices, achieving over 178× lower latencies than the sampling period of the sensor readout, and high accuracy (97%) on a single Intel Loihi neuromorphic research processor included in a USB stick form factor.

10.
Neural Comput ; 22(8): 2086-112, 2010 Aug.
Article in English | MEDLINE | ID: mdl-20337538

ABSTRACT

With the advent of new experimental evidence showing that dendrites play an active role in processing a neuron's inputs, we revisit the question of a suitable abstraction for the computing function of a neuron in processing spatiotemporal input patterns. Although the integrative role of a neuron in relation to the spatial clustering of synaptic inputs can be described by a two-layer neural network, no corresponding abstraction has yet been described for how a neuron processes temporal input patterns on the dendrites. We address this void using a real-time aVLSI (analog very-large-scale-integrated) dendritic compartmental model, which incorporates two widely studied classes of regenerative event mechanisms: one is mediated by voltage-gated ion channels and the other by transmitter-gated NMDA channels. From this model, we find that the response of a dendritic compartment can be described as a nonlinear sigmoidal function of both the degree of input temporal synchrony and the synaptic input spatial clustering. We propose that a neuron with active dendrites can be modeled as a multilayer network that selectively amplifies responses to relevant spatiotemporal input spike patterns.


Subject(s)
Dendrites/physiology; Models, Neurological; Neurons/physiology; Ion Channel Gating/physiology; N-Methylaspartate/physiology; Neural Conduction/physiology
11.
IEEE Trans Neural Netw Learn Syst ; 30(3): 644-656, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30047912

ABSTRACT

Convolutional neural networks (CNNs) have become the dominant neural network architecture for solving many state-of-the-art (SOA) visual processing tasks. Even though graphical processing units are most often used in training and deploying CNNs, their power efficiency is less than 10 GOp/s/W for single-frame runtime inference. We propose a flexible and efficient CNN accelerator architecture called NullHop that implements SOA CNNs useful for low-power and low-latency application scenarios. NullHop exploits the sparsity of neuron activations in CNNs to accelerate the computation and reduce memory requirements. The flexible architecture allows high utilization of available computing resources across kernel sizes ranging from 1×1 to 7×7. NullHop can process up to 128 input and 128 output feature maps per layer in a single pass. We implemented the proposed architecture on a Xilinx Zynq field-programmable gate array (FPGA) platform and present results showing how our implementation reduces external memory transfers and compute time in five different CNNs ranging from small ones up to the widely known large VGG16 and VGG19 CNNs. Postsynthesis simulations using Mentor Modelsim in a 28-nm process with a clock frequency of 500 MHz show that the VGG19 network achieves over 450 GOp/s. By exploiting sparsity, NullHop achieves an efficiency of 368%, maintains over 98% utilization of the multiply-accumulate units, and achieves a power efficiency of over 3 TOp/s/W in a core area of 6.3 mm². As further proof of NullHop's usability, we interfaced its FPGA implementation with a neuromorphic event camera for real-time interactive demonstrations.

12.
Front Neurosci ; 12: 23, 2018.
Article in English | MEDLINE | ID: mdl-29479300

ABSTRACT

Event-driven neuromorphic spiking sensors such as the silicon retina and the silicon cochlea encode the external sensory stimuli as asynchronous streams of spikes across different channels or pixels. Combining state-of-art deep neural networks with the asynchronous outputs of these sensors has produced encouraging results on some datasets but remains challenging. While the lack of effective spiking networks to process the spike streams is one reason, the other reason is that the pre-processing methods required to convert the spike streams to frame-based features needed for the deep networks still require further investigation. This work investigates the effectiveness of synchronous and asynchronous frame-based features generated using spike count and constant event binning in combination with the use of a recurrent neural network for solving a classification task using the N-TIDIGITS18 dataset. This spike-based dataset consists of recordings from the Dynamic Audio Sensor, a spiking silicon cochlea sensor, in response to the TIDIGITS audio dataset. We also propose a new pre-processing method which applies an exponential kernel on the output cochlea spikes so that the interspike timing information is better preserved. The results from the N-TIDIGITS18 dataset show that the exponential features perform better than the spike count features, with over 91% accuracy on the digit classification task. This accuracy corresponds to an improvement of at least 2.5% over the use of spike count features, establishing a new state of the art for this dataset.
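The proposed exponential-kernel pre-processing can be sketched as follows: each spike contributes exp(-(t_frame_end - t_spike)/tau) to its channel's feature for that frame, so spike timing within a frame is partially preserved rather than collapsed into a count. The time constant and toy spike train are assumptions for illustration.

```python
import numpy as np

def exponential_features(spike_times, channels, n_channels, frame_edges, tau=0.005):
    """Per-frame, per-channel features where each spike is weighted by
    exp(-(t_frame_end - t_spike) / tau); a plain spike count would
    instead add 1 per spike, discarding intra-frame timing."""
    n_frames = len(frame_edges) - 1
    feats = np.zeros((n_frames, n_channels))
    for f in range(n_frames):
        t0, t1 = frame_edges[f], frame_edges[f + 1]
        mask = (spike_times >= t0) & (spike_times < t1)
        for t, ch in zip(spike_times[mask], channels[mask]):
            feats[f, ch] += np.exp(-(t1 - t) / tau)
    return feats

# Toy cochlea output: three spikes on two channels, two 10-ms frames
spike_times = np.array([0.001, 0.004, 0.012])
channels = np.array([0, 0, 1])
feats = exponential_features(spike_times, channels, 2, frame_edges=[0.0, 0.01, 0.02])
```

The resulting frames can then be fed to a recurrent network exactly like spike-count frames.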

13.
Front Neurosci ; 12: 160, 2018.
Article in English | MEDLINE | ID: mdl-29643760

ABSTRACT

This paper presents a real-time, low-complexity neuromorphic speech recognition system using a spiking silicon cochlea, a feature extraction module and a population-encoding-based Neural Engineering Framework (NEF)/Extreme Learning Machine (ELM) classifier IC. Several feature extraction methods with varying memory and computational complexity are presented along with their corresponding classification accuracies. On the N-TIDIGITS18 dataset, we show that a fixed bin size based feature extraction method that votes across both time and spike count features can achieve an accuracy of 95% in software, similar to previously reported methods that use a fixed number of bins per sample, while using ~3× less energy and ~25× less memory for feature extraction (~1.5× less overall). Hardware measurements for the same topology show a slightly reduced accuracy of 94% that can be attributed to the extra correlations in hardware random weights. The hardware accuracy can be increased by further increasing the number of hidden nodes in ELM at the cost of memory and energy.

14.
J Neurosci ; 26(39): 9873-80, 2006 Sep 27.
Article in English | MEDLINE | ID: mdl-17005851

ABSTRACT

A large fraction of homozygous zebrafish mutant belladonna (bel) larvae display a reversed optokinetic response (OKR) that correlates with failure of the retinal ganglion cells (RGCs) to cross the midline and form the optic chiasm. Some of these achiasmatic mutants display strong spontaneous eye oscillations (SOs) in the absence of motion in the surround. The presentation of a stationary grating was necessary and sufficient to evoke SO. Both OKR reversal and SO depend on vision and are contrast sensitive. We built a quantitative model derived from bel fwd (forward) eye behaviors. To mimic the achiasmatic condition, we reversed the sign of the retinal slip velocity in the model, thereby successfully reproducing both reversed OKR and SO. On the basis of the OKR data, and with the support of the quantitative model, we hypothesize that the reversed OKR and the SO can be completely attributed to RGC misrouting. The strong resemblance between the SO and congenital nystagmus (CN) seen in humans with defective retinotectal projections implies that CN, of so far unknown etiology, may be directly caused by a projection defect.


Subject(s)
Disease Models, Animal; Nerve Tissue Proteins/deficiency; Nystagmus, Optokinetic/physiology; Nystagmus, Pathologic/genetics; Optic Chiasm/pathology; Retinal Ganglion Cells/pathology; Zebrafish Proteins/deficiency; Zebrafish/physiology; Animals; Axons/pathology; Computer Simulation; Contrast Sensitivity/genetics; Contrast Sensitivity/physiology; Crosses, Genetic; Eye Movements/genetics; Eye Movements/physiology; LIM-Homeodomain Proteins; Larva; Models, Neurological; Morphogenesis/genetics; Motion Perception/physiology; Nerve Tissue Proteins/genetics; Nystagmus, Optokinetic/genetics; Nystagmus, Pathologic/congenital; Nystagmus, Pathologic/pathology; Photic Stimulation; Transcription Factors; Zebrafish/anatomy & histology; Zebrafish/genetics; Zebrafish Proteins/genetics
15.
Front Neurosci ; 11: 682, 2017.
Article in English | MEDLINE | ID: mdl-29375284

ABSTRACT

Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch-normalization and Inception-modules. This paper presents spiking equivalents of these operations, thereby allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations, whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs, in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications.
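The core idea behind rate-based conversion can be sketched with a single non-leaky integrate-and-fire neuron whose firing rate approximates a ReLU of its constant input; reset-by-subtraction is one mechanism such conversions use to reduce approximation error. Parameter values here are illustrative assumptions, not the paper's settings.

```python
def if_neuron_rate(input_current, v_thresh=1.0, n_steps=1000, dt=0.001):
    """Simulate a non-leaky integrate-and-fire neuron driven by a constant
    input and return its firing rate in Hz. For rate-based CNN-to-SNN
    conversion, this rate approximates ReLU(input_current): zero for
    non-positive inputs, roughly linear above zero."""
    v, spikes = 0.0, 0
    for _ in range(n_steps):
        v += input_current * dt
        if v >= v_thresh:
            v -= v_thresh   # reset by subtraction keeps the residual charge
            spikes += 1
    return spikes / (n_steps * dt)

# Rates over one simulated second track ReLU of the input
rates = [if_neuron_rate(c) for c in (-0.5, 0.0, 1.5, 3.3)]
```

Stacking such units layer by layer, with weights copied from the trained CNN, is the essence of the conversion approach the paper extends to max-pooling, softmax, and batch-normalization.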

16.
Front Neurosci ; 9: 347, 2015.
Article in English | MEDLINE | ID: mdl-26528113

ABSTRACT

Spiking cochlea models describe the analog processing and spike generation process within the biological cochlea. Reconstructing the audio input from the artificial cochlea spikes is therefore useful for understanding the fidelity of the information preserved in the spikes. The reconstruction process is challenging particularly for spikes from the mixed signal (analog/digital) integrated circuit (IC) cochleas because of multiple non-linearities in the model and the additional variance caused by random transistor mismatch. This work proposes an offline method for reconstructing the audio input from spike responses of both a particular spike-based hardware model called the AEREAR2 cochlea and an equivalent software cochlea model. This method was previously used to reconstruct the auditory stimulus based on the peri-stimulus histogram of spike responses recorded in the ferret auditory cortex. The reconstructed audio from the hardware cochlea is evaluated against an analogous software model using objective measures of speech quality and intelligibility; and further tested in a word recognition task. The reconstructed audio under low signal-to-noise ratio (SNR) conditions (SNR < -5 dB) gives a better classification performance than the original SNR input in this word recognition task.

17.
IEEE Trans Biomed Circuits Syst ; 9(2): 207-16, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25879969

ABSTRACT

Optical flow sensors have been a long-running theme in neuromorphic vision sensors which include circuits that implement the local background intensity adaptation mechanism seen in biological retinas. This paper reports a bio-inspired optical motion sensor aimed towards miniature robotic and aerial platforms. It combines a 20 × 20 continuous-time CMOS silicon retina vision sensor with a DSP microcontroller. The retina sensor has pixels that have local gain control and adapt to background lighting. The system allows the user to validate various motion algorithms without building dedicated custom solutions. Measurements are presented to show that the system can compute global 2D translational motion from complex natural scenes using one particular algorithm: the image interpolation algorithm (I2A). With this algorithm, the system can compute global translational motion vectors at a sample rate of 1 kHz, for speeds up to ±1000 pixels/s, using less than 5 k instruction cycles (12 instructions per pixel) per frame. At 1 kHz sample rate the DSP is 12% occupied with motion computation. The sensor is implemented as a 6 g PCB consuming 170 mW of power.
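A 1D sketch of the image interpolation algorithm (I2A) the sensor uses: the global shift between two frames is estimated by a linear least-squares fit of the frame difference against the discrete spatial derivative of the reference frame. The sinusoidal test signal is an assumption for illustration, not data from the device.

```python
import numpy as np

def i2a_shift_1d(f0, f1):
    """Image interpolation algorithm (I2A) in 1D: estimate the global
    translation s between reference frame f0 and frame f1 via the
    least-squares fit f1 - f0 ≈ s * (f0 shifted right - f0 shifted left)/2,
    i.e. the first-order Taylor expansion of a shifted image."""
    g = (np.roll(f0, 1) - np.roll(f0, -1)) / 2.0   # ≈ -df0/dx
    d = f1 - f0
    return float(np.dot(d, g) / np.dot(g, g))

x = np.linspace(0, 2 * np.pi, 200, endpoint=False)
f0 = np.sin(x)
f1 = np.roll(f0, 2)          # true shift: 2 pixels (circular)
shift = i2a_shift_1d(f0, f1)
```

The 2D version used on the DSP solves the analogous two-parameter least-squares problem for horizontal and vertical shift, which is why it needs so few instructions per pixel.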


Subject(s)
Image Interpretation, Computer-Assisted; Motion (Physics); Silicon/chemistry; Algorithms; Biomimetics; Models, Neurological; Retina; Vision, Ocular
18.
Front Neurosci ; 9: 222, 2015.
Article in English | MEDLINE | ID: mdl-26217169

ABSTRACT

Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs) are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The on-going work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations is studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time.

19.
Front Neurosci ; 9: 206, 2015.
Article in English | MEDLINE | ID: mdl-26106288

ABSTRACT

Spike-based neuromorphic sensors such as retinas and cochleas change the way in which the world is sampled. Instead of producing data sampled at a constant rate, these sensors output spikes that are asynchronous and event driven. The event-based nature of neuromorphic sensors implies a complete paradigm shift in current perception algorithms toward those that emphasize the importance of precise timing. The spikes produced by these sensors usually have a time resolution in the order of microseconds. This high temporal resolution is a crucial factor in learning tasks. It is also widely exploited in biological neural systems. Sound localization for instance relies on detecting time lags between the two ears which, in the barn owl, reaches a temporal resolution of 5 µs. Current available neuromorphic computation platforms such as SpiNNaker often limit their users to a time resolution in the order of milliseconds that is not compatible with the asynchronous outputs of neuromorphic sensors. To overcome these limitations and allow for the exploration of new types of neuromorphic computing architectures, we introduce a novel software framework on the SpiNNaker platform. This framework allows for simulations of spiking networks and plasticity mechanisms using a completely asynchronous and event-based scheme running with a microsecond time resolution. Results on two example networks using this new implementation are presented.

20.
Vision Res ; 44(17): 2083-9, 2004.
Article in English | MEDLINE | ID: mdl-15149839

ABSTRACT

Examples that show the transfer of our basic knowledge of brain function into practical electronic models are rare. Here we present a user-friendly silicon model of the early visual system that contributes to animal welfare. The silicon chip emulates the neurons in the visual system by using analog Very Large Scale Integration (aVLSI) circuits. It substitutes for a live animal in experiment design and lecture demonstrations. The neurons on this chip display properties that are central to biological vision: receptive fields, spike coding, adaptation, band-pass filtering, and complementary signaling. Unlike previous laboratory devices whose complexity was limited by the use of discrete components on printed circuit boards, this battery-powered chip is a self-contained patch of the visual system. The realistic responses of the chip's cells and the self-contained adjustment-free correct operation of the chip suggest the possibility of implementation of similar circuits for visual prosthetics.


Subject(s)
Neurons; Retina/cytology; Silicon; Vision, Ocular; Animals; Computer Simulation; Equipment Design; Models, Neurological; Neural Networks, Computer; Retina/physiology