Results 1 - 20 of 28
1.
J Neural Eng ; 21(2)2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38547534

ABSTRACT

Objective. We analyze and interpret arm and forearm muscle activity in relation to the kinematics of hand pre-shaping during reaching and grasping, from the perspective of human synergistic motor control. Approach. Ten subjects performed six tasks involving reaching, grasping, and object manipulation. We recorded electromyographic (EMG) signals from arm and forearm muscles with a mix of bipolar electrodes and high-density electrode grids. Motion capture was recorded concurrently to estimate hand kinematics. Muscle synergies were extracted separately for arm and forearm muscles, and postural synergies were extracted from hand joint angles. We assessed whether activation coefficients of postural synergies positively correlate with, and can be regressed from, activation coefficients of muscle synergies. Each type of synergy was clustered across subjects. Main results. The identified synergies were consistent across subjects, and we functionally evaluated synergy clusters computed across subjects to identify synergies representative of all subjects. We found a positive correlation between pairs of activation coefficients of muscle and postural synergies, with important functional implications. Combining arm and forearm muscle synergies significantly improved the estimation of hand postural synergies compared with estimation based on muscle synergies of a single body segment, either arm or forearm (p < 0.01). Dimensionality reduction of multi-muscle EMG root mean square (RMS) signals did not significantly affect hand posture estimation, as demonstrated by comparable results when regressing hand angles directly from EMG RMS signals. Significance. We demonstrated that hand posture prediction improves by combining the activity of arm and forearm muscles, and we evaluated, for the first time, correlation and regression between activation coefficients of arm muscle synergies and hand postural synergies.
Our findings can benefit myoelectric control of hand prostheses and upper-limb exoskeletons, as well as biomarker evaluation during neurorehabilitation.
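The pipeline described above (non-negative factorization of EMG envelopes into muscle synergies, then regression of postural synergy activations from muscle synergy activations) can be sketched in a few lines. This is an illustrative reconstruction on synthetic data, not the authors' implementation: the array sizes, iteration count, and the use of plain least squares for the regression are assumptions.

```python
import numpy as np

def extract_synergies(envelopes, n_syn, n_iter=200, seed=0):
    """Muscle synergy extraction by non-negative matrix factorization
    (Lee-Seung multiplicative updates): envelopes (time x muscles) is
    approximated as activations (time x n_syn) @ synergies (n_syn x muscles)."""
    rng = np.random.default_rng(seed)
    T, M = envelopes.shape
    H = rng.random((T, n_syn)) + 1e-6   # activation coefficients
    W = rng.random((n_syn, M)) + 1e-6   # synergy weight vectors
    for _ in range(n_iter):
        H *= (envelopes @ W.T) / (H @ W @ W.T + 1e-12)
        W *= (H.T @ envelopes) / (H.T @ H @ W + 1e-12)
    return H, W

# Synthetic stand-ins for EMG RMS envelopes and postural synergy
# activations (sizes are illustrative, not from the study).
rng = np.random.default_rng(1)
emg_rms = rng.random((300, 16))          # 300 samples, 16 muscles
H, W = extract_synergies(emg_rms, n_syn=4)

# Regress postural synergy activations from muscle synergy activations.
postural = rng.random((300, 3))          # 3 postural synergies
coef, *_ = np.linalg.lstsq(H, postural, rcond=None)
predicted = H @ coef
```

Multiplicative updates keep both factors non-negative, which is what makes the factors interpretable as synergy weights and activations.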


Subject(s)
Arm , Forearm , Humans , Arm/physiology , Electromyography/methods , Muscle, Skeletal/physiology , Hand/physiology , Posture/physiology
2.
Article in English | MEDLINE | ID: mdl-37450365

ABSTRACT

We propose a neuromorphic framework to process the activity of human spinal motor neurons for movement intention recognition. This framework is integrated into a non-invasive interface that decodes the activity of motor neurons innervating intrinsic and extrinsic hand muscles. One of the main limitations of current neural interfaces is that machine learning models cannot exploit the efficiency of the spike encoding performed by the nervous system. Spiking-based pattern recognition would detect the spatio-temporally sparse activity of a neuronal pool and lead to adaptive and compact implementations, eventually running locally on embedded systems. Emergent spiking neural networks (SNNs) have not yet been used for processing the activity of in vivo human neurons. Here we developed a convolutional SNN to process a total of 467 spinal motor neurons whose activity was identified in 5 participants while executing 10 hand movements. The classification accuracy approached 0.95 ± 0.14 for both isometric and non-isometric contractions. These results show for the first time the potential of highly accurate motion-intent detection by combining non-invasive neural interfaces and SNNs.
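A convolutional SNN is beyond a short sketch, but the core spiking computation it builds on, a leaky integrate-and-fire (LIF) layer driven by input spike trains, can be illustrated. This is a generic textbook LIF model, not the network from the paper; the leak factor and threshold are arbitrary.

```python
import numpy as np

def lif_layer(spikes_in, weights, tau=0.9, v_thresh=1.0):
    """Minimal leaky integrate-and-fire layer: integrates weighted
    input spikes over time and emits an output spike whenever the
    membrane potential crosses threshold, then resets it."""
    T, _ = spikes_in.shape
    n_out = weights.shape[1]
    v = np.zeros(n_out)                     # membrane potentials
    spikes_out = np.zeros((T, n_out))
    for t in range(T):
        v = tau * v + spikes_in[t] @ weights  # leak, then integrate
        fired = v >= v_thresh
        spikes_out[t] = fired
        v[fired] = 0.0                        # reset after a spike
    return spikes_out
```

With a constant input spike train and a weight of 0.5, the neuron charges over three steps and fires periodically, which is the sparse, event-driven behaviour that spiking pattern recognition exploits.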


Subject(s)
Motor Neurons , Wearable Electronic Devices , Humans , Motor Neurons/physiology , Neural Networks, Computer , Hand , Recognition, Psychology
3.
Sensors (Basel) ; 23(7)2023 Apr 03.
Article in English | MEDLINE | ID: mdl-37050759

ABSTRACT

Event cameras measure scene changes with high temporal resolution, making them well suited for visual motion estimation. The activation of pixels results in an asynchronous stream of digital data (events), which rolls continuously over time without the discrete temporal boundaries typical of frame-based cameras (where a data packet or frame is emitted at a fixed temporal rate). As such, it is not trivial to define a priori how to group or accumulate events in a way that is sufficient for computation, and the suitable number of events can vary greatly across environments, motion patterns, and tasks. In this paper, we use neural networks for rotational motion estimation as a scenario to investigate the appropriate selection of event batches to populate input tensors. Our results show that batch selection has a large impact: training should be performed on a wide variety of different batches, regardless of the batch selection method; compared with fixed-count batches, a simple fixed-time window is a good choice for inference and performs comparably to more complex methods. Our initial hypothesis that a minimal number of events is required to estimate motion (as in contrast maximization) does not hold when estimating motion with a neural network.
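The two batch-selection strategies compared above can be sketched directly over a list of event timestamps. These helpers are illustrative, not from the paper; the batch size and window length are arbitrary.

```python
import numpy as np

def batch_by_count(timestamps, n):
    """Group event indices into consecutive fixed-count batches."""
    return [np.arange(i, min(i + n, len(timestamps)))
            for i in range(0, len(timestamps), n)]

def batch_by_time(timestamps, dt):
    """Group event indices into consecutive fixed-duration windows.
    Batch sizes vary with event rate, unlike fixed-count batching."""
    edges = np.arange(timestamps[0], timestamps[-1] + dt, dt)
    return [np.flatnonzero((timestamps >= t0) & (timestamps < t0 + dt))
            for t0 in edges[:-1]]
```

On a burst of events followed by a lull, fixed-count batches span wildly different durations while fixed-time windows hold different numbers of events, which is exactly the trade-off the paper investigates.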

4.
Front Neurosci ; 16: 951164, 2022.
Article in English | MEDLINE | ID: mdl-36440280

ABSTRACT

Spatio-temporal pattern recognition is a fundamental ability of the brain, required for numerous real-world activities. Recent deep learning approaches have reached outstanding accuracies in such tasks, but their implementation on conventional embedded solutions is still computationally and energetically expensive. Tactile sensing in robotic applications is a representative example where real-time processing and energy efficiency are required. Following a brain-inspired computing approach, we propose a new benchmark for spatio-temporal tactile pattern recognition at the edge through Braille letter reading. We recorded a new Braille letters dataset based on the capacitive tactile sensors of the iCub robot's fingertip. We then investigated the importance of spatial and temporal information, as well as the impact of event-based encoding on spike-based computation. Afterward, we trained and compared feedforward and recurrent Spiking Neural Networks (SNNs) offline using Backpropagation Through Time (BPTT) with surrogate gradients, and then deployed them on the Intel Loihi neuromorphic chip for fast and efficient inference. We compared our approach to standard classifiers, in particular to a Long Short-Term Memory (LSTM) network deployed on the embedded NVIDIA Jetson GPU, in terms of classification accuracy, power and energy consumption, and computational delay. Our results show that the LSTM reaches ~97% accuracy, outperforming the recurrent SNN by ~17% when using continuous frame-based data instead of event-based inputs. However, the recurrent SNN on Loihi with event-based inputs is ~500 times more energy-efficient than the LSTM on Jetson, requiring a total power of only ~30 mW. This work proposes a new benchmark for tactile sensing and highlights the challenges and opportunities of event-based encoding, neuromorphic hardware, and spike-based computing for spatio-temporal pattern recognition at the edge.

5.
Article in English | MEDLINE | ID: mdl-35649683

ABSTRACT

This short narrative review describes the use of the comet assay to evaluate the formation of genotoxic compounds in the gut lumen in human studies. The fecal water (FW) genotoxicity assay is based on the ability of the gut content to induce genotoxicity in a cellular model, employing the aqueous component of the feces (fecal water), as this is supposed to contain most of the reactive species and to convey them to the intestinal epithelium. This non-invasive and low-cost assay has been shown to be associated with colon cancer risk in animal models, and although final validation against human tumors is lacking, it is widely used as a colorectal cancer risk biomarker in human nutritional intervention studies. The contribution of the FW genotoxicity assay to the field of nutrition and cancer is highlighted, particularly in conjunction with other risk biomarkers, to shed light on the complex relationship among diet, microbiota, individual subject characteristics, and the formation of genotoxic compounds in the gut.


Subject(s)
Colonic Neoplasms , Animals , Biomarkers , Colonic Neoplasms/genetics , Comet Assay , Humans , Water
6.
Sci Rep ; 12(1): 7645, 2022 05 10.
Article in English | MEDLINE | ID: mdl-35538154

ABSTRACT

To interact with its environment, a robot working in 3D space needs to organise its visual input in terms of objects or their perceptual precursors, proto-objects. Among other visual cues, depth is a submodality used to direct attention to visual features and objects. Current depth-based proto-object attention models have been implemented for standard RGB-D cameras that produce synchronous frames. In contrast, event cameras are neuromorphic sensors that loosely mimic the function of the human retina by asynchronously encoding per-pixel brightness changes at very high temporal resolution, thereby providing advantages like high dynamic range, efficiency (thanks to their high degree of signal compression), and low latency. We propose a bio-inspired bottom-up attention model that exploits event-driven sensing to generate depth-based saliency maps that allow a robot to interact with complex visual input. We use event cameras mounted in the eyes of the iCub humanoid robot to directly extract edge, disparity and motion information. Real-world experiments demonstrate that our system robustly selects salient objects near the robot in the presence of clutter and dynamic scene changes, for the benefit of downstream applications like object segmentation, tracking and robot interaction with external objects.


Subject(s)
Robotics , Humans , Motion
7.
Nat Commun ; 13(1): 1415, 2022 Mar 11.
Article in English | MEDLINE | ID: mdl-35277530
8.
Nat Commun ; 13(1): 1024, 2022 02 23.
Article in English | MEDLINE | ID: mdl-35197450

ABSTRACT

The design of robots that interact autonomously with the environment and exhibit complex behaviours is an open challenge that can benefit from understanding what makes living beings fit to act in the world. Neuromorphic engineering studies neural computational principles to develop technologies that can provide a computing substrate for building compact and low-power processing systems. We discuss why endowing robots with neuromorphic technologies, from perception to motor control, represents a promising approach for the creation of robots that can seamlessly integrate into society. We present initial attempts in this direction, highlight open challenges, and propose actions required to overcome current limitations.


Subject(s)
Intelligence , Neural Networks, Computer , Engineering
9.
IEEE Trans Pattern Anal Mach Intell ; 44(1): 154-180, 2022 01.
Article in English | MEDLINE | ID: mdl-32750812

ABSTRACT

Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of µs), very high dynamic range (140 dB versus 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios challenging for traditional cameras, such as low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
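The event representation described here, a stream of (timestamp, x, y, polarity) tuples, is often made frame-like for downstream processing by accumulating events into a signed change image. A minimal sketch (a common practice in the field, not a method from the survey itself):

```python
import numpy as np

def accumulate_events(events, height, width):
    """Accumulate a stream of (t, x, y, polarity) events into a
    signed change image: +1 per ON event, -1 per OFF event."""
    img = np.zeros((height, width), dtype=np.int32)
    for t, x, y, p in events:
        img[y, x] += 1 if p > 0 else -1
    return img

# Three events: two ON events at (x=1, y=2), one OFF event at (0, 0).
events = [(0.0, 1, 2, 1), (0.1, 1, 2, 1), (0.2, 0, 0, -1)]
img = accumulate_events(events, height=4, width=4)
```

How many events to accumulate per image is exactly the batch-selection question raised in entry 3 above.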


Subject(s)
Algorithms , Robotics , Neural Networks, Computer
10.
IEEE Trans Pattern Anal Mach Intell ; 44(12): 10087-10098, 2022 Dec.
Article in English | MEDLINE | ID: mdl-34910630

ABSTRACT

A number of corner detection methods have been proposed for event cameras in recent years, as event-driven computer vision has become more accessible. Current state-of-the-art methods sacrifice either accuracy or real-time performance when considered for practical use, for example when a camera is moved freely in an unconstrained environment. In this paper, we present yet another method to perform corner detection, dubbed look-up event-Harris (luvHarris), that employs the Harris algorithm for high accuracy while achieving improved event throughput. Our method makes two major contributions: 1. a novel "threshold ordinal event-surface" that removes certain tuning parameters and is well suited to Harris operations, and 2. an implementation of the Harris algorithm in which the computational load per event is minimised and computationally heavy convolutions are performed only 'as fast as possible', i.e., only as computational resources are available. The result is a practical, real-time, and robust corner detector that runs at more than 2.6× the speed of the current state-of-the-art, a necessity when using a high-resolution event camera in real time. We explain the considerations behind the approach, compare the algorithm to the current state-of-the-art in terms of computational performance and detection accuracy, and discuss the validity of the proposed approach for event cameras.
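An event surface of this kind can be loosely sketched as a per-event update rule. The specific decrement-and-clamp behaviour and the neighbourhood size below are assumptions for illustration, not the authors' exact formulation:

```python
import numpy as np

def update_tos(surface, x, y, k=7, v_max=255):
    """Loose sketch of a threshold-ordinal-style event surface
    (assumed behaviour, not luvHarris's exact rule): the newest
    event pixel is set to the maximum value, neighbouring pixels
    in a k x k region are aged by decrementing, and old values
    are clamped at zero."""
    h, w = surface.shape
    r = k // 2
    y0, y1 = max(0, y - r), min(h, y + r + 1)
    x0, x1 = max(0, x - r), min(w, x + r + 1)
    patch = surface[y0:y1, x0:x1]
    np.maximum(patch - 1, 0, out=patch)  # age neighbours, clamp at 0
    surface[y, x] = v_max                # newest event on top
    return surface
```

Because the surface encodes event recency as ordinal intensity, a frame-based operator like Harris can then be run on it whenever compute is available, which is the decoupling the paper exploits.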

11.
Sensors (Basel) ; 20(24)2020 Dec 10.
Article in English | MEDLINE | ID: mdl-33321842

ABSTRACT

The event camera (EC) is emerging as a bio-inspired sensor that can serve as an alternative or complementary vision modality, offering energy efficiency, high dynamic range, and high temporal resolution coupled with activity-dependent sparse sensing. In this study we investigate with ECs the problem of face pose alignment, an essential pre-processing stage for facial processing pipelines. EC-based alignment can unlock all of these benefits in facial applications, especially where motion and dynamics carry the most relevant information, owing to the sensing of temporal change events. We specifically aim at efficient processing by developing a coarse alignment method to handle large pose variations in facial applications. For this purpose, we prepared a dataset of extreme head rotations with varying motion intensity, labelled by multiple human annotators. We propose a motion-detection-based alignment approach that generates activity-dependent pose-events, preventing unnecessary computation in the absence of pose change. The alignment is realized by cascaded regression of extremely randomized trees. Since EC sensors perform temporal differentiation, we characterize alignment performance across different head movement speeds and face localization uncertainty ranges, as well as face resolutions and predictor complexities. Our method obtained 2.7% alignment failure on average, whereas annotator disagreement was 1%. The promising coarse alignment performance on EC sensor data, together with a comprehensive analysis, demonstrates the potential of ECs in facial applications.


Subject(s)
Face , Head , Data Analysis , Head Movements , Humans , Photography
12.
Nat Commun ; 11(1): 4030, 2020 08 12.
Article in English | MEDLINE | ID: mdl-32788588

ABSTRACT

Sensory information processing in robot skins currently relies on a centralized approach in which signal transduction (on the body) is separated from centralized computation and decision-making, requiring the transfer of large amounts of data from the periphery to central processors at a cost in wiring, latency, fault tolerance, and robustness. We envision a decentralized approach in which intelligence is embedded in the sensing nodes, using a unique neuromorphic methodology to extract relevant information in robotic skins. Here we specifically address pain perception and the association of nociception with tactile perception to trigger the escape reflex in a sensorized robotic arm. The proposed system comprises self-healable materials and memtransistors as enabling technologies for the implementation of neuromorphic nociceptors, spiking local associative learning, and communication. Configuring memtransistors as gated-threshold and gated-memristive switches, the demonstrated system features in-memory edge computing with minimal hardware circuitry and wiring, and enhanced fault tolerance and robustness.


Subject(s)
Robotics , Signal Processing, Computer-Assisted , Transistors, Electronic , Action Potentials/physiology , Logic , Neuronal Plasticity/physiology , Nociception , Presynaptic Terminals/physiology
13.
Front Neurosci ; 14: 551, 2020.
Article in English | MEDLINE | ID: mdl-32655350

ABSTRACT

In this work, we present a neuromorphic architecture for head pose estimation and scene representation for the humanoid iCub robot. The spiking neuronal network is fully realized in Intel's neuromorphic research chip, Loihi, and precisely integrates the issued motor commands to estimate the iCub's head pose in a neuronal path-integration process. The neuromorphic vision system of the iCub is used to correct for drift in the pose estimation. Positions of objects in front of the robot are memorized using on-chip synaptic plasticity. We present real-time robotic experiments using 2 degrees of freedom (DoF) of the robot's head and show precise path integration, visual reset, and object position learning on-chip. We discuss the requirements for integrating the robotic system and neuromorphic hardware with current technologies.

14.
Front Neurosci ; 14: 451, 2020.
Article in English | MEDLINE | ID: mdl-32457575

ABSTRACT

Attentional selectivity tends to follow events considered interesting stimuli: the motion of visual stimuli present in the environment attracts our attention and allows us to react and interact with our surroundings. Extracting relevant motion information from the environment is challenging given the high information content of the visual input. In this work we propose a novel integration of an eccentric down-sampling of the visual field, taking inspiration from the varying size of receptive fields (RFs) in the mammalian retina, with the Spiking Elementary Motion Detector (sEMD) model. We characterize the system's functionality with simulated data and with real-world data collected with bio-inspired event-driven cameras, successfully implementing motion detection along the four cardinal directions and the diagonals.

15.
Science ; 360(6392): 966-967, 2018 06 01.
Article in English | MEDLINE | ID: mdl-29853674
16.
Sci Robot ; 2(13)2017 12 20.
Article in English | MEDLINE | ID: mdl-33157880

ABSTRACT

The iCub open-source humanoid robot child is a successful initiative supporting research in embodied artificial intelligence.

17.
IEEE Trans Biomed Circuits Syst ; 11(6): 1271-1277, 2017 12.
Article in English | MEDLINE | ID: mdl-29293423

ABSTRACT

Homeostatic plasticity is a stabilizing mechanism commonly observed in real neural systems that allows neurons to maintain their activity around a functional operating point. This phenomenon can be used in neuromorphic systems to compensate for slowly changing conditions or chronic shifts in the system configuration. However, to avoid interference with other adaptation or learning processes active in the neuromorphic system, it is important that the homeostatic plasticity mechanism operates on time scales much longer than those of conventional synaptic plasticity. In this paper we present an ultralow-leakage circuit, integrated into an automatic gain control scheme, that can implement the synaptic scaling homeostatic process over extremely long time scales. Synaptic scaling consists of globally scaling the synaptic weights of all synapses impinging onto a neuron while maintaining their relative differences, to preserve the effects of learning. The scheme we propose controls the global gain of analog log-domain synapse circuits to keep the neuron's average firing rate constant around a set operating point, over extremely long time scales. To validate the proposed scheme, we implemented the ultralow-leakage synaptic scaling homeostatic plasticity circuit in a standard 0.18 µm complementary metal-oxide-semiconductor (CMOS) process and integrated it into an array of dynamic synapses connected to an adaptive integrate-and-fire neuron. The circuit occupies a silicon area of 84 µm × 22 µm and consumes approximately 10.8 nW with a 1.8 V supply voltage. We present experimental results from the homeostatic circuit and demonstrate how it can be configured to exhibit time scales of up to 100 ks, thanks to a controllable leakage current that can be scaled down to 0.45 aA (2.8 electrons per second).
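The synaptic scaling rule described here, multiplying all of a neuron's weights by a common factor to pull its firing rate toward a set point while preserving relative weight differences, can be sketched in software (the paper implements this in analog hardware; the gain-update form and learning rate below are illustrative assumptions):

```python
import numpy as np

def synaptic_scaling(weights, rate, target_rate, eta=0.1):
    """Multiplicative homeostatic scaling: nudge all of a neuron's
    synaptic weights by a single common factor so its firing rate
    drifts toward the set point. Because every weight is multiplied
    by the same factor, relative weight differences (the effects of
    learning) are preserved."""
    factor = 1.0 + eta * (target_rate - rate) / target_rate
    return weights * factor

# Neuron firing at half its target rate: all weights scale up by 5%.
w = np.array([1.0, 2.0, 4.0])
scaled = synaptic_scaling(w, rate=5.0, target_rate=10.0)
```

In the circuit this update unfolds over time scales of up to 100 ks; in software the analogous knob is simply a very small effective learning rate.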


Subject(s)
Neuronal Plasticity/physiology , Synapses/physiology , Animals , Humans , Neurons/physiology , Semiconductors
18.
Front Neurosci ; 10: 563, 2016.
Article in English | MEDLINE | ID: mdl-28018162

ABSTRACT

Bidirectional brain-machine interfaces (BMIs) establish a two-way direct communication link between the brain and the external world. A decoder translates recorded neural activity into motor commands and an encoder delivers sensory information collected from the environment directly to the brain creating a closed-loop system. These two modules are typically integrated in bulky external devices. However, the clinical support of patients with severe motor and sensory deficits requires compact, low-power, and fully implantable systems that can decode neural signals to control external devices. As a first step toward this goal, we developed a modular bidirectional BMI setup that uses a compact neuromorphic processor as a decoder. On this chip we implemented a network of spiking neurons built using its ultra-low-power mixed-signal analog/digital circuits. On-chip on-line spike-timing-dependent plasticity synapse circuits enabled the network to learn to decode neural signals recorded from the brain into motor outputs controlling the movements of an external device. The modularity of the BMI allowed us to tune the individual components of the setup without modifying the whole system. In this paper, we present the features of this modular BMI and describe how we configured the network of spiking neuron circuits to implement the decoder and to coordinate it with the encoder in an experimental BMI paradigm that connects bidirectionally the brain of an anesthetized rat with an external object. We show that the chip learned the decoding task correctly, allowing the interfaced brain to control the object's trajectories robustly. Based on our demonstration, we propose that neuromorphic technology is mature enough for the development of BMI modules that are sufficiently low-power and compact, while being highly computationally powerful and adaptive.

19.
Nat Mater ; 15(9): 921-5, 2016 08 24.
Article in English | MEDLINE | ID: mdl-27554988