Results 1 - 20 of 51
1.
Sensors (Basel) ; 23(4)2023 Feb 12.
Article in English | MEDLINE | ID: mdl-36850662

ABSTRACT

Hand gesture recognition applications based on surface electromyographic (sEMG) signals can benefit from on-device execution to achieve faster and more predictable response times and higher energy efficiency. However, deploying state-of-the-art deep learning (DL) models for this task on memory-constrained and battery-operated edge devices, such as wearables, requires a careful optimization process, both at design time, with an appropriate tuning of the DL models' architectures, and at execution time, where the execution of large and computationally complex models should be avoided unless strictly needed. In this work, we pursue both optimization targets, proposing a novel gesture recognition system that improves upon the state-of-the-art models in terms of both accuracy and efficiency. At the level of DL model architecture, we apply for the first time tiny transformer models (which we call bioformers) to sEMG-based gesture recognition. Through an extensive architecture exploration, we show that our most accurate bioformer achieves a higher classification accuracy on the popular Non-Invasive Adaptive hand Prosthetics Database 6 (Ninapro DB6) dataset compared to the state-of-the-art convolutional neural network (CNN) TEMPONet (+3.1%). When deployed on the RISC-V-based low-power system-on-chip (SoC) GAP8, bioformers that outperform TEMPONet in accuracy consume 7.8×-44.5× less energy per inference. At runtime, we propose a three-level dynamic inference approach that combines a shallow classifier, i.e., a random forest (RF) implementing a simple "rest detector", with two bioformers of different accuracy and complexity, which are sequentially applied to each new input, stopping the classification early for "easy" data. With this mechanism, we obtain a flexible inference system, capable of working at many different operating points in terms of accuracy and average energy consumption.
On GAP8, we obtain a further 1.03×-1.35× energy reduction compared to static bioformers at iso-accuracy.
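The three-level dynamic inference described above can be pictured as a confidence-gated cascade. The sketch below is illustrative only: `rest_detector`, `small_model`, and `big_model` are hypothetical stand-ins for the paper's random forest and two bioformers, and the confidence threshold is an assumed parameter.

```python
# Sketch of a three-level early-exit inference cascade (hypothetical
# stand-in classifiers; the actual system uses a random-forest rest
# detector and two transformer models of different sizes).

def cascade_predict(x, rest_detector, small_model, big_model, conf_threshold=0.8):
    """Return (label, level): stop early when a cheap stage is confident."""
    if rest_detector(x):                 # level 1: cheap "rest" check
        return "rest", 1
    probs = small_model(x)               # level 2: small model
    label = max(probs, key=probs.get)
    if probs[label] >= conf_threshold:   # confident enough -> early exit
        return label, 2
    probs = big_model(x)                 # level 3: large model, only if needed
    return max(probs, key=probs.get), 3
```

The average energy per input then depends on how often the cheap early stages suffice, which is what lets the system move between operating points.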


Subject(s)
Electric Power Supplies , Gestures , Humans , Physical Phenomena , Databases, Factual , Fatigue
2.
Sensors (Basel) ; 22(24)2022 Dec 14.
Article in English | MEDLINE | ID: mdl-36560172

ABSTRACT

Recent studies show that the integrity of core perceptual and cognitive functions may be tested in a short time with Steady-State Visual Evoked Potentials (SSVEP) with low stimulation frequencies, between 1 and 10 Hz. Wearable EEG systems provide unique opportunities to test these brain functions on diverse populations in out-of-the-lab conditions. However, they also pose significant challenges as the number of EEG channels is typically limited, and the recording conditions might induce high noise levels, particularly for low frequencies. Here we tested the performance of Normalized Canonical Correlation Analysis (NCCA), a frequency-normalized version of CCA, to quantify SSVEP from wearable EEG data with stimulation frequencies ranging from 1 to 10 Hz. We validated NCCA on data collected with an 8-channel wearable wireless EEG system based on BioWolf, a compact, ultra-light, ultra-low-power recording platform. The results show that NCCA correctly and rapidly detects SSVEP at the stimulation frequency within a few cycles of stimulation, even at the lowest frequency (4 s recordings are sufficient for a stimulation frequency of 1 Hz), outperforming a state-of-the-art normalized power spectral measure. Importantly, no preliminary artifact correction or channel selection was required. Potential applications of these results to research and clinical studies are discussed.
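The frequency-normalization idea behind NCCA, dividing the score at the stimulation frequency by scores at neighbouring frequencies so that the 1/f-like background of low-frequency EEG cancels out, can be illustrated on a single channel. This sketch uses a plain DFT-bin power rather than CCA, so it corresponds to the normalized power-spectral baseline rather than to NCCA itself; `df` and the neighbour offsets are assumed parameters.

```python
import cmath
import math

def bin_power(signal, fs, f):
    """Power of a single DFT bin at frequency f (Goertzel-style sum)."""
    n = len(signal)
    acc = sum(s * cmath.exp(-2j * math.pi * f * k / fs)
              for k, s in enumerate(signal))
    return abs(acc) ** 2 / n

def normalized_score(signal, fs, f, df=0.5):
    """Score at f divided by the mean score at neighbouring frequencies,
    compensating for the 1/f-like background of low-frequency EEG."""
    neigh = [bin_power(signal, fs, f + d) for d in (-2 * df, -df, df, 2 * df)]
    return bin_power(signal, fs, f) / (sum(neigh) / len(neigh) + 1e-12)
```

A response at the stimulation frequency then stands out as a large ratio, while off-stimulation frequencies score near zero.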


Subject(s)
Brain-Computer Interfaces , Wearable Electronic Devices , Electroencephalography/methods , Evoked Potentials, Visual , Canonical Correlation Analysis , Photic Stimulation/methods , Algorithms
3.
Sensors (Basel) ; 21(2)2021 Jan 08.
Article in English | MEDLINE | ID: mdl-33429868

ABSTRACT

This work describes the design, implementation, and validation of a wireless sensor network for predictive maintenance and remote monitoring in metal-rich, electromagnetically harsh environments. Energy is provided wirelessly at 2.45 GHz employing a system of three co-located active antennas designed with a conformal shape such that it can power, on-demand, sensor nodes located in non-line-of-sight (NLOS) and difficult-to-reach positions. This eliminates the need for periodic battery replacement of the customized sensor nodes, which are designed to be compact, low-power, and robust. A measurement campaign has been conducted in a real scenario, i.e., the engine compartment of a car, assuming the exploitation of the system in the automotive field. Our work demonstrates that one radio-frequency (RF) source (illuminator) with a maximum effective isotropic radiated power (EIRP) of 27 dBm is capable of transferring the 4.8 mJ of energy required to fully charge the sensor node in less than 170 s, in the worst case of 112-cm distance between illuminator and node (NLOS). We also show how, in the worst case, the transferred power allows the node to operate every 60 s, where operation includes sampling accelerometer data for 1 s, extracting statistical information, transmitting a 20-byte payload, and receiving a 3-byte acknowledgment using the extremely robust Long Range (LoRa) communication technology. The energy requirement for an active cycle is between 1.45 and 1.65 mJ, while sleep mode current consumption is less than 150 nA, enabling the targeted battery-free operation with duty cycles as high as 1.7%.
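The feasibility claim can be sanity-checked with back-of-envelope arithmetic on the figures quoted above. This is a simple consistency check, not the paper's energy model: the 1 s active time is taken from the operation description, and radio on-time is neglected.

```python
# Back-of-envelope check of the wireless-power budget, using only the
# worst-case figures quoted in the abstract (mJ / s = mW).
charge_energy_mj = 4.8        # energy needed to fully charge the node
charge_time_s = 170.0         # worst-case NLOS charging time
harvest_power_mw = charge_energy_mj / charge_time_s   # ~0.028 mW delivered

cycle_energy_mj = 1.65        # upper bound for one active cycle
cycle_period_s = 60.0         # one operation every 60 s
avg_load_mw = cycle_energy_mj / cycle_period_s        # ~0.0275 mW consumed

active_time_s = 1.0           # ~1 s of accelerometer sampling per cycle
duty_cycle = active_time_s / cycle_period_s           # ~1.7%

sustainable = harvest_power_mw >= avg_load_mw         # intake covers the load
```

The delivered power narrowly exceeds the worst-case average load, which is exactly what makes the battery-free operation possible.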

4.
Sensors (Basel) ; 21(4)2021 Feb 13.
Article in English | MEDLINE | ID: mdl-33668645

ABSTRACT

Standard-sized autonomous vehicles have rapidly improved thanks to the breakthroughs of deep learning. However, scaling autonomous driving to mini-vehicles poses several challenges due to their limited on-board storage and computing capabilities. Moreover, autonomous systems lack robustness when deployed in dynamic environments where the underlying distribution differs from the distribution learned during training. To address these challenges, we propose a closed-loop learning flow for autonomous driving mini-vehicles that includes the target deployment environment in-the-loop. We leverage a family of compact and high-throughput tinyCNNs that control the mini-vehicle and learn by imitating a computer vision algorithm, i.e., the expert, in the target environment. Thus, the tinyCNNs, having access only to an on-board fast-rate linear camera, gain robustness to lighting conditions and improve over time. Moreover, we introduce an online predictor that can choose between different tinyCNN models at runtime, trading accuracy against latency, which minimises the energy consumption of inference by up to 3.2×. Finally, we leverage GAP8, a parallel ultra-low-power RISC-V-based micro-controller unit (MCU), to meet the real-time inference requirements. When running the family of tinyCNNs, our solution on GAP8 outperforms any other implementation on the STM32L4 and NXP k64f (traditional single-core MCUs), reducing the latency by over 13× and the energy consumption by 92%.

5.
Philos Trans A Math Phys Eng Sci ; 378(2164): 20190155, 2020 Feb 07.
Article in English | MEDLINE | ID: mdl-31865877

ABSTRACT

We present PULP-NN, an optimized computing library for a parallel ultra-low-power tightly coupled cluster of RISC-V processors. The key innovation in PULP-NN is a set of kernels for quantized neural network inference, targeting byte and sub-byte data types, down to INT-1, tuned for the recent trend toward aggressive quantization in deep neural network inference. The proposed library exploits both the digital signal processing extensions available in the PULP RISC-V processors and the cluster's parallelism, achieving up to 15.5 MACs/cycle on INT-8 and improving performance by up to 63× with respect to a sequential implementation on a single RISC-V core implementing the baseline RV32IMC ISA. Using PULP-NN, a CIFAR-10 network on an octa-core cluster runs in 30× and 19.6× fewer clock cycles than the current state-of-the-art ARM CMSIS-NN library running on STM32L4 and STM32H7 MCUs, respectively. The proposed library, when running on a GAP-8 processor, outperforms execution on energy-efficient MCUs such as the STM32L4 by 36.8× and on high-end MCUs such as the STM32H7 by 7.45×, when operating at the maximum frequency. The energy efficiency on GAP-8 is 14.1× higher than on the STM32L4 and 39.5× higher than on the STM32H7, at the maximum-efficiency operating point. This article is part of the theme issue 'Harmonizing energy-autonomous computing and intelligence'.
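The core of such a library is a multiply-accumulate (MAC) loop followed by requantization of the wide accumulator back to the narrow output type. The sketch below shows the arithmetic shape only, in plain Python; it is not PULP-NN code (which uses SIMD dot-product instructions and packed sub-byte data), and the power-of-two requantization is an assumed scheme.

```python
def int8_conv_dot(activations, weights, scale, zero_point=0):
    """Quantized dot product with requantization back to the INT8 range.
    `scale` is a power-of-two right shift, mimicking the fixed-point
    requantization used by embedded NN kernels (illustrative only)."""
    acc = 0
    for a, w in zip(activations, weights):   # MAC loop; real kernels pack
        acc += a * w                         # several MACs per instruction
    q = (acc >> scale) + zero_point          # requantize (arithmetic shift)
    return max(-128, min(127, q))            # saturate to INT8 range
```

Sub-byte variants (INT-4, INT-2, INT-1) differ mainly in how operands are unpacked before this same accumulate-and-requantize pattern.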

6.
Sensors (Basel) ; 19(4)2019 Feb 18.
Article in English | MEDLINE | ID: mdl-30781662

ABSTRACT

LoRaWAN is one of the most promising standards for long-range sensing applications. However, the high number of end devices expected in at-scale deployments, combined with the absence of an effective synchronization scheme, challenges the scalability of this standard. In this paper, we present an approach to increase network throughput through a Slotted-ALOHA overlay on LoRaWAN networks. To increase the single-channel capacity, we propose to regulate the communication of LoRaWAN networks using a Slotted-ALOHA variant on top of the Pure-ALOHA approach used by the standard; thus, no modification of pre-existing libraries is necessary. Our method is based on an innovative synchronization service suitable for low-cost wireless sensor nodes. We modelled the LoRaWAN channel with extensive measurements on hardware platforms, and we quantified the impact of tuning parameters on the physical and medium access control layers, as well as on the packet collision rate. Results show that Slotted-ALOHA supported by our synchronization service significantly improves the performance of traditional LoRaWAN networks in terms of packet loss rate and network throughput.
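The gain from slotting can be reproduced with a toy collision model: in slotted ALOHA a packet survives when no other node picked the same slot, while in pure (unslotted) ALOHA it is also vulnerable to transmissions begun in the preceding slot-time. The simulation below is illustrative only; it assumes perfect synchronization (which is what the paper's synchronization service provides), and the node count and transmit probability are arbitrary.

```python
import random

def aloha_success_rate(n_nodes, p_tx, n_slots, slotted, seed=42):
    """Fraction of slot-times carrying exactly one successful packet.
    In pure ALOHA a packet is also lost if any transmission started in the
    previous slot-time (vulnerability window of two slot-times)."""
    rng = random.Random(seed)
    tx = [[node for node in range(n_nodes) if rng.random() < p_tx]
          for _ in range(n_slots)]
    ok = 0
    for t in range(1, n_slots):
        if slotted:
            ok += len(tx[t]) == 1
        else:
            ok += len(tx[t]) == 1 and len(tx[t - 1]) == 0
    return ok / (n_slots - 1)
```

Halving the vulnerability window is what roughly doubles the peak channel throughput in the classical ALOHA analysis.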

7.
Sensors (Basel) ; 19(19)2019 Oct 05.
Article in English | MEDLINE | ID: mdl-31590410

ABSTRACT

Movement science investigating muscle and tendon function during locomotion relies on commercial ultrasound imagers built for medical applications. These limit biomechanics research due to their form factor, range of view, and spatio-temporal resolution. This review systematically investigates the technical aspects of applying ultrasound as a research tool to study human and animal locomotion. It provides an overview of the ultrasound systems used and of their operating parameters. We present measured fascicle velocities and discuss the results with respect to operating frame rates during recording. Furthermore, we derive why muscle and tendon function should be recorded with a frame rate of at least 150 Hz and a range of view of 250 mm. Moreover, we analyze why and how the development of better ultrasound observation devices at the hierarchical level of muscles and tendons can support biomechanics research. Additionally, we present recent technological advances and their possible applications. We provide a list of recommendations for the development of a more advanced ultrasound sensor system class targeting biomechanical applications. Looking to the future, mobile, ultrafast ultrasound hardware technologies create immense opportunities to expand the existing knowledge of human and animal movement.

8.
Sensors (Basel) ; 19(12)2019 Jun 19.
Article in English | MEDLINE | ID: mdl-31248091

ABSTRACT

We report on a self-sustainable, wireless accelerometer-based system for wear detection in a band saw blade. Due to the combination of low-power hardware design, thermal energy harvesting with a small thermoelectric generator (TEG), an ultra-low-power wake-up radio, power management, and a low-complexity algorithm, our solution works perpetually while also achieving high accuracy. The onboard algorithm processes sensor data, extracts features, performs the classification needed for the blade's wear detection, and sends the report wirelessly. Experimental results in a real-world deployment scenario demonstrate that its accuracy is comparable to state-of-the-art algorithms executed on a PC and show the energy neutrality of the solution using a small thermoelectric generator to harvest energy. The impact of the various low-power techniques implemented on the node is analyzed, highlighting the benefits of onboard processing, the nano-power wake-up radio, and the combination of harvesting and low-power design. Finally, accurate in-field energy intake measurements, coupled with simulations, demonstrate that the proposed approach is energy autonomous and can work perpetually.


Subject(s)
Accelerometry , Algorithms , Monitoring, Physiologic , Wireless Technology , Computer Simulation , Models, Theoretical , Probability , Temperature
9.
Methods ; 129: 96-107, 2017 10 01.
Article in English | MEDLINE | ID: mdl-28647609

ABSTRACT

EEG is a standard non-invasive technique used in neural disease diagnostics and the neurosciences. Frequency tagging is an increasingly popular experimental paradigm that efficiently tests brain function by measuring EEG responses to periodic stimulation. Recently, frequency-tagging paradigms have proven successful with low stimulation frequencies (0.5-6 Hz), but the EEG signal is intrinsically noisy in this frequency range, requiring heavy signal processing and significant human intervention for response estimation. This limits the possibility of processing the EEG on resource-constrained systems and of designing smart EEG-based devices for automated diagnostics. We propose an algorithm for artifact removal and automated detection of frequency-tagging responses in a wide range of stimulation frequencies, which we test on a visual stimulation protocol. The algorithm is rooted in machine-learning-based pattern recognition techniques and is tailored for a new-generation parallel ultra-low-power processing platform (PULP), reaching more than 90% accuracy in frequency detection even for very low stimulation frequencies (<1 Hz) with a power budget of 56 mW.


Subject(s)
Electroencephalography/methods , Machine Learning , Photic Stimulation/methods , Algorithms , Artifacts , Humans
10.
Sensors (Basel) ; 18(11)2018 Nov 01.
Article in English | MEDLINE | ID: mdl-30388782

ABSTRACT

Energy efficiency is crucial in the design of battery-powered end devices, such as smart sensors for Internet of Things (IoT) applications. Wireless communication between these distributed smart devices consumes significant energy, and even more so when data need to travel several kilometers. Low-power and long-range communication technologies such as LoRaWAN are becoming popular in IoT applications. However, LoRaWAN has drawbacks in terms of (i) data latency; (ii) limited control over the end devices by the gateway; and (iii) a high rate of packet collisions in a dense network. To overcome these drawbacks, we present an energy-efficient network architecture and a high-efficiency on-demand time-division multiple access (TDMA) communication protocol for IoT, improving both the energy efficiency and the latency of standard LoRa networks. We combine the capabilities of short-range wake-up radios to achieve ultra-low-power states and asynchronous communication with the long-range connectivity of LoRa. The proposed approach still works with the standard LoRa protocol, but improves performance with an on-demand TDMA. Thanks to the proposed network and protocol, we achieve a packet delivery ratio of 100% by eliminating the possibility of packet collisions. The network also achieves a round-trip latency on the order of milliseconds, with sensing devices dissipating less than 46 mJ when active and 1.83 µW during periods of inactivity; the devices can last up to three years on a 1200-mAh lithium polymer battery.
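The collision-free claim follows directly from the TDMA structure: after the wake-up call, each node is granted an exclusive slot. A minimal scheduling sketch (slot and guard durations are assumptions, not values from the paper):

```python
def schedule_tdma(node_ids, slot_ms, guard_ms=2):
    """On-demand TDMA: after a wake-up call, the gateway assigns each node
    one exclusive (start, end) transmission window, so packets from
    different nodes can never collide. Durations are illustrative."""
    period = slot_ms + guard_ms
    return {node: (i * period, i * period + slot_ms)
            for i, node in enumerate(node_ids)}

def collides(schedule):
    """True if any two assigned windows overlap in time."""
    spans = sorted(schedule.values())
    return any(a_end > b_start
               for (_, a_end), (b_start, _) in zip(spans, spans[1:]))
```

Because slots are disjoint by construction, the packet delivery ratio is limited only by link quality, not by contention.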

11.
Sensors (Basel) ; 18(5)2018 May 15.
Article in English | MEDLINE | ID: mdl-29762535

ABSTRACT

Wireless sensor nodes are traditionally powered by individual batteries, and a significant effort has been devoted to maximizing the lifetime of these devices. However, as the batteries can only store a finite amount of energy, the network is still doomed to die, and changing the batteries is not always possible. A promising solution is to enable each node to harvest energy directly from its environment, using individual energy harvesters. Moreover, novel ultra-low-power wake-up receivers, which allow continuous listening of the channel with negligible power consumption, are emerging. These devices enable asynchronous communication, further reducing the power consumed by communication, which is typically one of the most energy-consuming tasks in wireless sensor networks. Energy harvesting and wake-up receivers can be combined to significantly increase the energy efficiency of sensor networks. In this paper, we propose an energy manager for energy-harvesting wireless sensor nodes and an asynchronous medium access control protocol that exploits ultra-low-power wake-up receivers. The two components are designed to work together and especially to fit the stringent constraints of wireless sensor nodes. The proposed approach has been implemented on a real hardware platform and tested in the field. Experimental results demonstrate the benefits of the proposed approach in terms of energy efficiency, power consumption, and throughput, which can be more than two times higher compared to traditional schemes.

12.
Sensors (Basel) ; 17(4)2017 Apr 15.
Article in English | MEDLINE | ID: mdl-28420135

ABSTRACT

Polyarticulated prosthetic hands represent a powerful tool to restore functionality and improve quality of life for upper-limb amputees. Such devices offer, on the same wearable node, sensing and actuation capabilities that are not equally supported by natural interaction and control strategies. The control in state-of-the-art solutions is still performed mainly through complex encodings of gestures in bursts of contractions of the residual forearm muscles, resulting in a non-intuitive Human-Machine Interface (HMI). Recent research efforts explore the use of myoelectric gesture recognition for innovative interaction solutions; however, a considerable gap persists between research evaluations and successful complete systems. In this paper, we present the design of a wearable prosthetic hand controller based on intuitive gesture recognition and a custom control strategy. The wearable node directly actuates a polyarticulated hand and wirelessly interacts with a personal gateway (i.e., a smartphone) for the training and personalization of the recognition algorithm. Throughout the system development, we address the challenge of integrating an efficient embedded gesture classifier with a control strategy tailored for intuitive interaction between the user and the prosthesis. We demonstrate that this combined approach outperforms systems based on mere pattern recognition, since the latter target the accuracy of a classification algorithm rather than the control of a gesture. The system was fully implemented, tested on healthy and amputee subjects, and compared against benchmark repositories. The proposed approach achieves an error rate of 1.6% in the end-to-end real-time control of commonly used hand gestures, while complying with the power and performance budget of a low-cost microcontroller.


Subject(s)
Gestures , Algorithms , Amputees , Artificial Limbs , Electromyography , Hand , Humans , Pattern Recognition, Automated , Prostheses and Implants , Quality of Life
13.
Sensors (Basel) ; 15(3): 5058-80, 2015 Mar 02.
Article in English | MEDLINE | ID: mdl-25738764

ABSTRACT

A key design challenge for successful wireless sensor network (WSN) deployment is a good balance between the collected data resolution and the overall energy consumption. In this paper, we present a WSN solution developed to efficiently satisfy the requirements for long-term monitoring of a historical building. The hardware of the sensor nodes and the network deployment are described and used to collect the data. To improve the network's energy efficiency, we developed and compared two approaches, sharing similar sub-sampling strategies and data reconstruction assumptions: one is based on compressive sensing (CS) and the second is a custom data-driven latent variable-based statistical model (LV). Both approaches take advantage of the multivariate nature of the data collected by a heterogeneous sensor network and reduce the sampling frequency at sub-Nyquist levels. Our comparative analysis highlights the advantages and limitations: signal reconstruction performance is assessed jointly with network-level energy reduction. The performed experiments include detailed performance and energy measurements on the deployed network and explore how the different parameters can affect the overall data accuracy and the energy consumption. The results show how the CS approach achieves better reconstruction accuracy and overall efficiency, with the exception of cases with very aggressive sub-sampling policies.

14.
Article in English | MEDLINE | ID: mdl-38787674

ABSTRACT

Wearable ultrasound (US) is a novel sensing approach that shows promise in multiple application domains, and specifically in hand gesture recognition (HGR). In fact, US makes it possible to collect information from deep musculoskeletal structures at high spatiotemporal resolution and high signal-to-noise ratio, making it a perfect candidate to complement surface electromyography for improved accuracy and on-the-edge classification. However, existing wearable solutions for US-based gesture recognition are not sufficiently low power for continuous, long-term operation. On top of that, the practical hardware limitations of wearable US devices (limited power budget, reduced wireless throughput, and restricted computational power) call for compact models for feature extraction and classification. To overcome these limitations, this article presents a novel end-to-end approach for feature extraction from raw musculoskeletal US data suited for edge computing, coupled with an armband for HGR based on a truly wearable (12 cm2, 9 g), ultralow-power (ULP) (16 mW) US probe. The proposed approach uses a 1-D convolutional autoencoder (CAE) to compress raw US data by 20× while preserving the main amplitude features of the envelope signal. The latent features of the autoencoder are used to train an XGBoost classifier for HGR on datasets collected with a custom US armband, considering armband removal/repositioning between sessions. Our approach achieves a classification accuracy of 96%. Furthermore, the proposed unsupervised feature extraction approach offers generalization capabilities for intersubject use, as demonstrated by testing the pretrained encoder on a different subject and conducting post-training analysis, revealing that the operations performed by the encoder are subject-independent.
The autoencoder is also quantized to 8-bit integers and deployed on a ULP wearable US probe along with the XGBoost classifier, allowing for a gesture recognition rate ≥ 25 Hz and leading to 21% lower power consumption [at 30 frames/s (FPS)] compared to the conventional approach (raw data transmission and remote processing).


Subject(s)
Gestures , Ultrasonography , Wearable Electronic Devices , Humans , Ultrasonography/methods , Ultrasonography/instrumentation , Pattern Recognition, Automated/methods , Male , Unsupervised Machine Learning , Adult , Signal Processing, Computer-Assisted , Female , Hand/diagnostic imaging , Algorithms , Young Adult
15.
Article in English | MEDLINE | ID: mdl-38848226

ABSTRACT

Spike extraction by blind source separation (BSS) algorithms can successfully extract physiologically meaningful information from the sEMG signal, as they are able to identify motor unit (MU) discharges involved in muscle contractions. However, BSS approaches are currently restricted to isometric contractions, limiting their applicability in real-world scenarios. We present a strategy to track MUs across different dynamic hand gestures using adaptive independent component analysis (ICA): first, a pool of MUs is identified during isometric contractions, and the decomposition parameters are stored; during dynamic gestures, the decomposition parameters are updated online in an unsupervised fashion, yielding the refined MUs; then, a Pan-Tompkins-inspired algorithm detects the spikes in each MU; finally, the identified spikes are fed to a classifier to recognize the gesture. We validate our approach on a 4-subject, 7-gesture + rest dataset collected with our custom 16-channel dry sEMG armband, achieving an average balanced accuracy of 85.58±14.91% and a macro-F1 score of 85.86±14.48%. We deploy our solution onto GAP9, a parallel ultra-low-power microcontroller specialized for computation-intensive linear algebra applications at the edge, obtaining an energy consumption of 4.72 mJ @ 240 MHz and a latency of 121.3 ms for each 200 ms-long window of sEMG signal.
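The spike-detection stage can be illustrated with a simplified Pan-Tompkins-style pipeline: rectification, moving-window integration, and an adaptive threshold with a refractory period. This is a bare sketch with assumed parameters (`win`, `k`), not the paper's detector.

```python
def detect_spikes(trace, win=5, k=3.0):
    """Rectify, integrate over a moving window, then mark samples whose
    integrated energy exceeds k times the global mean, enforcing a
    refractory period of one window (simplified Pan-Tompkins-style
    detector; parameters are illustrative)."""
    rect = [abs(v) for v in trace]
    integ = [sum(rect[max(0, i - win + 1):i + 1]) / win
             for i in range(len(rect))]
    mean = sum(integ) / len(integ)
    spikes, last = [], -win
    for i, v in enumerate(integ):
        if v > k * mean and i - last >= win:   # threshold + refractory
            spikes.append(i)
            last = i
    return spikes
```

The detected spike trains, one per motor unit, are then what the gesture classifier consumes.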

16.
Article in English | MEDLINE | ID: mdl-38885102

ABSTRACT

Surface electromyography (sEMG) is a State-of-the-Art (SoA) sensing modality for non-invasive human-machine interfaces for consumer, industrial, and rehabilitation use cases. The main limitation of current sEMG-driven control policies is the sEMG's inherent variability, especially cross-session due to sensor repositioning; this limits the generalization of the Machine/Deep Learning (ML/DL) in charge of the signal-to-command mapping. The other hot front on the ML/DL side of sEMG-driven control is the shift from the classification of fixed hand positions to the regression of hand kinematics and dynamics, promising a more versatile and fluid control. We present an incremental online-training strategy for sEMG-based estimation of simultaneous multi-finger forces, using a small Temporal Convolutional Network suitable for embedded learning-on-device. We validate our method on the HYSER dataset, cross-day. Our incremental online training reaches a cross-day Mean Absolute Error (MAE) of (9.58 ± 3.89)% of the Maximum Voluntary Contraction on HYSER's RANDOM dataset of improvised, non-predefined force sequences, which is the most challenging and closest to real scenarios. This MAE is on par with an accuracy-oriented, non-embeddable offline training exploiting more epochs. Further, we demonstrate that our online training approach can be deployed on the GAP9 ultra-low-power microcontroller, obtaining a latency of 1.49 ms and an energy draw of just 40.4 µJ per forward-backward-update step. These results show that our solution fits the requirements for accurate and real-time incremental training-on-device.
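The forward-backward-update step being timed above can be pictured with a minimal online regressor. The linear model below is a stand-in for the paper's Temporal Convolutional Network, and the learning rate is an assumed hyperparameter; only the incremental per-sample update pattern is the point.

```python
def sgd_step(w, b, x, target, lr=0.01):
    """One forward-backward-update step of a linear force regressor
    (stand-in for the paper's Temporal Convolutional Network)."""
    pred = sum(wi * xi for wi, xi in zip(w, x)) + b     # forward pass
    err = pred - target                                  # MSE gradient factor
    w = [wi - lr * err * xi for wi, xi in zip(w, x)]     # backward + update
    b = b - lr * err
    return w, b, pred

def train_online(samples, n_features, lr=0.01):
    """Incremental online training: repeat the step on streaming samples."""
    w, b = [0.0] * n_features, 0.0
    for x, t in samples:
        w, b, _ = sgd_step(w, b, x, t, lr)
    return w, b
```

Because each update touches one sample, the memory and compute cost per step is constant, which is what makes training-on-device feasible.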

17.
Sci Rep ; 14(1): 2980, 2024 02 05.
Article in English | MEDLINE | ID: mdl-38316856

ABSTRACT

Electroencephalography (EEG) is widely used to monitor epileptic seizures, and standard clinical practice consists of monitoring patients in dedicated epilepsy monitoring units via video surveillance and cumbersome EEG caps. Such a setting is not compatible with long-term tracking under typical living conditions, motivating the development of unobtrusive wearable solutions. However, wearable EEG devices present the challenges of fewer channels, restricted computational capabilities, and a lower signal-to-noise ratio. Moreover, artifacts presenting morphological similarities to seizures act as major noise sources and can be misinterpreted as seizures. This paper presents a combined seizure and artifact detection framework targeting wearable EEG devices, based on Gradient Boosted Trees. The seizure detector achieves nearly zero false alarms with average sensitivity values of [Formula: see text] for 182 seizures from the CHB-MIT dataset and [Formula: see text] for 25 seizures from the private dataset, with no preliminary artifact detection or removal. The artifact detector achieves a state-of-the-art accuracy of [Formula: see text] (on the TUH-EEG Artifact Corpus dataset). Integrating artifact and seizure detection significantly reduces false alarms, by up to [Formula: see text] compared to standalone seizure detection. Optimized for a Parallel Ultra-Low Power platform, these algorithms enable extended monitoring with a battery lifespan reaching 300 h. These findings highlight the benefits of integrating artifact detection in wearable epilepsy monitoring devices to limit the number of false positives.
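The integration step is conceptually simple: a seizure alarm is suppressed whenever the artifact detector flags the same window. A sketch of the gating and of the resulting false-alarm reduction (illustrative window-level logic, not the paper's exact fusion rule):

```python
def gated_alarms(seizure_flags, artifact_flags):
    """Keep a seizure alarm only when the same window is not flagged as an
    artifact (illustrative gating; the paper trains Gradient Boosted Trees
    for both detectors)."""
    return [s and not a for s, a in zip(seizure_flags, artifact_flags)]

def false_alarm_reduction(alarms_before, alarms_after):
    """Relative reduction in the number of raised alarms."""
    before, after = sum(alarms_before), sum(alarms_after)
    return 1 - after / before if before else 0.0
```

On windows where the seizure detector fires because of a movement or electrode artifact, the gate removes the alarm, which is the source of the reported false-alarm reduction.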


Subject(s)
Epilepsy , Wearable Electronic Devices , Humans , Algorithms , Artifacts , Electroencephalography , Epilepsy/diagnosis , Seizures/diagnosis
18.
IEEE Trans Biomed Circuits Syst ; 18(3): 608-621, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38261487

ABSTRACT

The long-term, continuous analysis of electroencephalography (EEG) signals on wearable devices to automatically detect seizures in epileptic patients is a high-potential application field for deep neural networks, and specifically for transformers, which are highly suited for end-to-end time series processing without handcrafted feature extraction. In this work, we propose a small-scale transformer detector, the EEGformer, compatible with unobtrusive acquisition setups that use only the temporal channels. EEGformer is the result of a hardware-oriented design exploration, aiming for efficient execution on tiny low-power micro-controller units (MCUs) and for low latency and false alarm rate to increase patient and caregiver acceptance. Tests conducted on the CHB-MIT dataset show a 20% reduction of the onset detection latency with respect to the state-of-the-art model for temporal acquisition, with a competitive 73% seizure detection probability and 0.15 false positives per hour (FP/h). Further investigations on a novel and challenging scalp EEG dataset result in the successful detection of 88% of the annotated seizure events, with 0.45 FP/h. We evaluate the deployment of the EEGformer on three commercial low-power computing platforms: the single-core Apollo4 MCU and the GAP8 and GAP9 parallel MCUs. The most efficient implementation (on GAP9) requires as little as 13.7 ms and 0.31 mJ per inference, demonstrating the feasibility of deploying the EEGformer on wearable seizure detection systems with reduced channel count and multi-day battery duration.


Subject(s)
Electroencephalography , Seizures , Signal Processing, Computer-Assisted , Wearable Electronic Devices , Humans , Electroencephalography/instrumentation , Electroencephalography/methods , Seizures/diagnosis , Seizures/physiopathology , Signal Processing, Computer-Assisted/instrumentation , Algorithms , Neural Networks, Computer
19.
Nat Nanotechnol ; 18(5): 479-485, 2023 May.
Article in English | MEDLINE | ID: mdl-36997756

ABSTRACT

Disentangling the attributes of a sensory signal is central to sensory perception and cognition and hence is a critical task for future artificial intelligence systems. Here we present a compute engine capable of efficiently factorizing high-dimensional holographic representations of combinations of such attributes, by exploiting the computation-in-superposition capability of brain-inspired hyperdimensional computing and the intrinsic stochasticity of analogue in-memory computing based on nanoscale memristive devices. Such an iterative in-memory factorizer is shown to solve problems at least five orders of magnitude larger than those solvable otherwise, while also substantially lowering the computational time and space complexity. We present a large-scale experimental demonstration of the factorizer, employing two in-memory compute chips based on phase-change memristive devices. The dominant matrix-vector multiplication operations take constant time, irrespective of the size of the matrix, thus reducing the computational time complexity to merely the number of iterations. Moreover, we experimentally demonstrate the ability to reliably and efficiently factorize visual perceptual representations.
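The iterative factorization can be demonstrated at toy scale in software. The sketch below is a small resonator-style factorizer over random bipolar vectors: the similarity-and-superposition step performed by `project` is what the paper maps onto analogue in-memory matrix-vector multiplies. Dimension, codebook sizes, and iteration count are arbitrary choices, not values from the paper.

```python
import random

def rand_vec(d, rng):
    return [rng.choice((-1, 1)) for _ in range(d)]

def bind(u, v):
    # Elementwise product: binding and unbinding are the same operation
    # for bipolar vectors.
    return [a * b for a, b in zip(u, v)]

def sign(v):
    return [1 if x >= 0 else -1 for x in v]

def project(codebook, v):
    """Clean up v against a codebook: similarity-weighted superposition,
    then binarization (the matrix-vector products done in-memory in the
    paper's chips)."""
    sims = [sum(a * b for a, b in zip(c, v)) for c in codebook]
    mixed = [sum(s * c[j] for s, c in zip(sims, codebook))
             for j in range(len(v))]
    return sign(mixed), sims

def factorize(s, cb_x, cb_y, iters=10):
    """Iteratively recover (i, j) such that s = bind(cb_x[i], cb_y[j])."""
    y_hat = sign([sum(col) for col in zip(*cb_y)])     # start: superposition
    for _ in range(iters):
        x_hat, sims_x = project(cb_x, bind(s, y_hat))  # unbind y, clean up x
        y_hat, sims_y = project(cb_y, bind(s, x_hat))  # unbind x, clean up y
    best = lambda sims: max(range(len(sims)), key=sims.__getitem__)
    return best(sims_x), best(sims_y)
```

Because every candidate is tested in superposition rather than enumerated, the search cost grows with the number of iterations, not with the product of the codebook sizes.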

20.
Sci Data ; 10(1): 288, 2023 05 18.
Article in English | MEDLINE | ID: mdl-37202400

ABSTRACT

Supercomputers are the most powerful computing machines available to society. They play a central role in economic, industrial, and societal development. While they are used by scientists, engineers, decision-makers, and data analysts to computationally solve complex problems, supercomputers and their hosting datacenters are themselves complex, power-hungry systems. Improving their efficiency, availability, and resiliency is vital and the subject of many research and engineering efforts. Still, a major roadblock hinders researchers: the dearth of reliable data describing the behavior of production supercomputers. In this paper, we present the result of a ten-year-long project to design a monitoring framework (EXAMON), deployed at the Italian supercomputers of the CINECA datacenter. We disclose the first holistic dataset of a tier-0 Top10 supercomputer. It includes the management, workload, facility, and infrastructure data of the Marconi100 supercomputer over two and a half years of operation. The dataset (published via Zenodo) is the largest ever made public, with a size of 49.9 TB before compression. We also provide open-source software modules to simplify access to the data and provide direct usage examples.
