Results 1 - 20 of 51
1.
Article in English | MEDLINE | ID: mdl-38848226

ABSTRACT

Spike extraction by blind source separation (BSS) algorithms can successfully extract physiologically meaningful information from the sEMG signal, as they are able to identify motor unit (MU) discharges involved in muscle contractions. However, BSS approaches are currently restricted to isometric contractions, limiting their applicability in real-world scenarios. We present a strategy to track MUs across different dynamic hand gestures using adaptive independent component analysis (ICA): first, a pool of MUs is identified during isometric contractions, and the decomposition parameters are stored; during dynamic gestures, the decomposition parameters are updated online in an unsupervised fashion, yielding the refined MUs; then, a Pan-Tompkins-inspired algorithm detects the spikes in each MU; finally, the identified spikes are fed to a classifier to recognize the gesture. We validate our approach on a 4-subject, 7-gesture + rest dataset collected with our custom 16-channel dry sEMG armband, achieving an average balanced accuracy of 85.58±14.91% and macro-F1 score of 85.86±14.48%. We deploy our solution onto GAP9, a parallel ultra-low-power microcontroller specialized for computation-intensive linear algebra applications at the edge, obtaining an energy consumption of 4.72 mJ @ 240 MHz and a latency of 121.3 ms for each 200 ms-long window of sEMG signal.
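The spike-detection stage of the pipeline above lends itself to a compact illustration. The following is a minimal, hedged sketch of a Pan-Tompkins-style detector (first difference, squaring, moving-window integration, thresholding) run on a toy motor-unit source; the window length and threshold ratio are illustrative choices, not the authors' parameters.

```python
# Hedged sketch: Pan-Tompkins-style spike detection on a toy MU source.
# Parameters (win, thresh_ratio) are invented for illustration.

def detect_spikes(signal, win=5, thresh_ratio=0.5):
    """Return indices where the integrated spike energy crosses a threshold."""
    # First difference emphasizes fast transients (spikes).
    diff = [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]
    # Squaring makes all deflections positive and sharpens peaks.
    sq = [d * d for d in diff]
    # Moving-window integration smooths the energy envelope.
    mwi = [sum(sq[max(0, i - win + 1): i + 1]) / win for i in range(len(sq))]
    thresh = thresh_ratio * max(mwi)
    # Report one detection per contiguous supra-threshold run.
    spikes, above = [], False
    for i, v in enumerate(mwi):
        if v >= thresh and not above:
            spikes.append(i)
            above = True
        elif v < thresh:
            above = False
    return spikes

# Synthetic source: flat baseline with two sharp spikes.
sig = [0.0] * 100
sig[30] = 1.0
sig[70] = 1.0
print(detect_spikes(sig))
```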

2.
Article in English | MEDLINE | ID: mdl-38885102

ABSTRACT

Surface electromyography (sEMG) is a State-of-the-Art (SoA) sensing modality for non-invasive human-machine interfaces for consumer, industrial, and rehabilitation use cases. The main limitation of the current sEMG-driven control policies is the sEMG's inherent variability, especially cross-session due to sensor repositioning; this limits the generalization of the Machine/Deep Learning (ML/DL) in charge of the signal-to-command mapping. The other hot front on the ML/DL side of sEMG-driven control is the shift from the classification of fixed hand positions to the regression of hand kinematics and dynamics, promising a more versatile and fluid control. We present an incremental online-training strategy for sEMG-based estimation of simultaneous multi-finger forces, using a small Temporal Convolutional Network suitable for embedded learning-on-device. We validate our method on the HYSER dataset, cross-day. Our incremental online training reaches a cross-day Mean Absolute Error (MAE) of (9.58 ± 3.89)% of the Maximum Voluntary Contraction on HYSER's RANDOM dataset of improvised, non-predefined force sequences, which is the most challenging and closest to real scenarios. This MAE is on par with an accuracy-oriented, non-embeddable offline training exploiting more epochs. Further, we demonstrate that our online training approach can be deployed on the GAP9 ultra-low power microcontroller, obtaining a latency of 1.49 ms and an energy draw of just 40.4 µJ per forward-backward-update step. These results show that our solution fits the requirements for accurate and real-time incremental training-on-device.
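A hedged sketch of the incremental online-training idea: a single linear model updated one sample at a time with a forward-backward-update step on the absolute error. The model and hyperparameters stand in for the paper's small TCN and are purely illustrative.

```python
# Hedged sketch: per-sample online training on absolute error.
# A one-neuron linear model stands in for the paper's small TCN.

def online_update(w, b, x, y, lr=0.01):
    """One forward-backward-update step minimizing |y_hat - y|."""
    y_hat = sum(wi * xi for wi, xi in zip(w, x)) + b
    g = 1.0 if y_hat > y else -1.0          # subgradient of |y_hat - y|
    w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    b = b - lr * g
    return w, b, abs(y_hat - y)

# Stream of samples from a hidden linear rule y = 2*x + 1.
stream = [([x / 10.0], 2 * x / 10.0 + 1.0) for x in range(100)]
w, b = [0.0], 0.0
errors = []
for x, y in stream * 5:                      # several passes, still per-sample
    w, b, err = online_update(w, b, x, y)
    errors.append(err)

print(sum(errors[:50]) / 50, sum(errors[-50:]) / 50)
```

The error on recent samples should drop well below the error on the earliest ones, which is the behavior the incremental scheme relies on.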

3.
Article in English | MEDLINE | ID: mdl-38787674

ABSTRACT

Wearable ultrasound is a novel sensing approach that shows promise in multiple application domains, and specifically in hand gesture recognition. In fact, ultrasound makes it possible to collect information from deep musculoskeletal structures at high spatiotemporal resolution and high signal-to-noise ratio, making it a perfect candidate to complement surface electromyography for improved accuracy and on-the-edge classification. However, existing wearable solutions for ultrasound-based gesture recognition are not sufficiently low-power for continuous, long-term operation. On top of that, practical hardware limitations of wearable ultrasound devices (limited power budget, reduced wireless throughput, restricted computational power) call for compact feature-extraction and classification models. To overcome these limitations, this paper presents a novel end-to-end approach for feature extraction from raw musculoskeletal ultrasound data suited for edge computing, coupled with an armband for hand gesture recognition based on a truly wearable (12 cm², 9 g), ultra-low-power (16 mW) ultrasound probe. The proposed approach uses a 1D convolutional autoencoder to compress raw ultrasound data by 20× while preserving the main amplitude features of the envelope signal. The latent features of the autoencoder are used to train an XGBoost classifier for hand gesture recognition on datasets collected with a custom US armband, considering armband removal/repositioning in between sessions. Our approach achieves a classification accuracy of 96%. Furthermore, the proposed unsupervised feature extraction approach offers generalization capabilities for inter-subject use, as demonstrated by testing the pre-trained Encoder on a different subject and conducting post-training analysis, revealing that the operations performed by the Encoder are subject-independent. The autoencoder is also quantized to 8-bit integers and deployed on an ultra-low-power wearable ultrasound probe along with the XGBoost classifier, allowing for a gesture recognition rate ≥ 25 Hz and leading to 21% lower power consumption (at 30 FPS) compared to the conventional approach (raw data transmission and remote processing).
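As a hedged stand-in for the learned 1D convolutional autoencoder, the sketch below compresses a toy ultrasound trace 20× by decimating a rectified moving-average envelope; it only illustrates the "compress while keeping envelope amplitude features" idea, not the authors' encoder.

```python
import math

# Hedged sketch: 20x compression via envelope extraction + decimation,
# a simple stand-in for the paper's 1D convolutional autoencoder.

def compress(raw, factor=20, win=20):
    env = []
    for i in range(len(raw)):
        seg = raw[max(0, i - win + 1): i + 1]
        env.append(sum(abs(v) for v in seg) / len(seg))  # rectified envelope
    return env[::factor]                                  # decimate by factor

# Toy RF trace: a sine burst under a Gaussian amplitude window.
raw = [math.sin(0.5 * i) * math.exp(-((i - 200) ** 2) / 5000.0)
       for i in range(400)]
latent = compress(raw)
print(len(raw), len(latent))
```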

4.
Sci Rep ; 14(1): 2980, 2024 02 05.
Article in English | MEDLINE | ID: mdl-38316856

ABSTRACT

Electroencephalography (EEG) is widely used to monitor epileptic seizures, and standard clinical practice consists of monitoring patients in dedicated epilepsy monitoring units via video surveillance and cumbersome EEG caps. Such a setting is not compatible with long-term tracking under typical living conditions, thereby motivating the development of unobtrusive wearable solutions. However, wearable EEG devices present the challenges of fewer channels, restricted computational capabilities, and lower signal-to-noise ratio. Moreover, artifacts presenting morphological similarities to seizures act as major noise sources and can be misinterpreted as seizures. This paper presents a combined seizure and artifacts detection framework targeting wearable EEG devices based on Gradient Boosted Trees. The seizure detector achieves nearly zero false alarms with average sensitivity values of [Formula: see text] for 182 seizures from the CHB-MIT dataset and [Formula: see text] for 25 seizures from the private dataset with no preliminary artifact detection or removal. The artifact detector achieves a state-of-the-art accuracy of [Formula: see text] (on the TUH-EEG Artifact Corpus dataset). Integrating artifact and seizure detection significantly reduces false alarms, by up to [Formula: see text], compared to standalone seizure detection. Optimized for a Parallel Ultra-Low Power platform, these algorithms enable extended monitoring with a battery lifespan reaching 300 h. These findings highlight the benefits of integrating artifact detection in wearable epilepsy monitoring devices to limit the number of false positives.
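The integration step can be pictured as a simple gate: a window raises a seizure alarm only when the seizure detector fires and the artifact detector does not. The toy flags below are invented; the paper's detectors are Gradient Boosted Trees.

```python
# Hedged sketch: artifact-gated seizure alarms. A window is an alarm
# only if the seizure detector fires AND the artifact detector does not,
# which is how combining the two suppresses artifact-driven false alarms.

def combined_alarms(seizure_flags, artifact_flags):
    return [s and not a for s, a in zip(seizure_flags, artifact_flags)]

# Ten windows: the seizure detector raises 4 alarms, 2 of them on artifacts.
seizure  = [0, 1, 0, 1, 0, 0, 1, 0, 1, 0]
artifact = [0, 1, 0, 0, 0, 0, 1, 0, 0, 0]
merged = combined_alarms(seizure, artifact)
print(sum(seizure), sum(merged))
```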


Subjects
Epilepsy; Wearable Electronic Devices; Humans; Algorithms; Artifacts; Electroencephalography; Epilepsy/diagnosis; Seizures/diagnosis
5.
IEEE Trans Biomed Circuits Syst ; 18(3): 608-621, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38261487

ABSTRACT

The long-term, continuous analysis of electroencephalography (EEG) signals on wearable devices to automatically detect seizures in epileptic patients is a high-potential application field for deep neural networks, and specifically for transformers, which are highly suited for end-to-end time series processing without handcrafted feature extraction. In this work, we propose a small-scale transformer detector, the EEGformer, compatible with unobtrusive acquisition setups that use only the temporal channels. EEGformer is the result of a hardware-oriented design exploration, aiming for efficient execution on tiny low-power micro-controller units (MCUs) and low latency and false alarm rate to increase patient and caregiver acceptance. Tests conducted on the CHB-MIT dataset show a 20% reduction of the onset detection latency with respect to the state-of-the-art model for temporal acquisition, with a competitive 73% seizure detection probability and 0.15 false positives per hour (FP/h). Further investigations on a novel and challenging scalp EEG dataset result in the successful detection of 88% of the annotated seizure events, with 0.45 FP/h. We evaluate the deployment of the EEGformer on three commercial low-power computing platforms: the single-core Apollo4 MCU and the GAP8 and GAP9 parallel MCUs. The most efficient implementation (on GAP9) results in as low as 13.7 ms and 0.31 mJ per inference, demonstrating the feasibility of deploying the EEGformer on wearable seizure detection systems with reduced channel count and multi-day battery duration.
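A hedged sketch of the two figures of merit used above, computed from per-window binary predictions: onset-detection latency (seconds from the first seizure-labelled window to the first correct detection) and false positives per hour. The window length and labels below are invented for illustration.

```python
# Hedged sketch: detection latency and FP/h from per-window predictions.
# win_s and the toy label sequences are illustrative, not from the paper.

def latency_and_fp(preds, labels, win_s=4.0):
    """preds/labels: one binary value per non-overlapping window."""
    latency = None
    fp = 0
    for i, (p, l) in enumerate(zip(preds, labels)):
        if l and p and latency is None:
            onset = labels.index(1)          # first seizure-labelled window
            latency = (i - onset) * win_s    # seconds until first detection
        if p and not l:
            fp += 1                          # false alarm outside seizures
    hours = len(preds) * win_s / 3600.0
    return latency, fp / hours

labels = [0, 0, 0, 1, 1, 1, 1, 0, 0]
preds  = [0, 1, 0, 0, 1, 1, 1, 0, 0]
lat, fph = latency_and_fp(preds, labels)
print(lat, fph)
```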


Subjects
Electroencephalography; Seizures; Signal Processing, Computer-Assisted; Wearable Electronic Devices; Humans; Electroencephalography/instrumentation; Electroencephalography/methods; Seizures/diagnosis; Seizures/physiopathology; Signal Processing, Computer-Assisted/instrumentation; Algorithms; Neural Networks, Computer
6.
Sci Data ; 10(1): 288, 2023 05 18.
Article in English | MEDLINE | ID: mdl-37202400

ABSTRACT

Supercomputers are the most powerful computing machines available to society. They play a central role in economic, industrial, and societal development. While they are used by scientists, engineers, decision-makers, and data analysts to computationally solve complex problems, supercomputers and their hosting datacenters are themselves complex power-hungry systems. Improving their efficiency, availability, and resiliency is vital and the subject of many research and engineering efforts. Still, a major roadblock hinders researchers: the dearth of reliable data describing the behavior of production supercomputers. In this paper, we present the result of a ten-year-long project to design a monitoring framework (EXAMON) deployed on the Italian supercomputers at the CINECA datacenter. We disclose the first holistic dataset of a tier-0 Top10 supercomputer. It includes the management, workload, facility, and infrastructure data of the Marconi100 supercomputer for two and a half years of operation. The dataset (published via Zenodo) is the largest ever made public, with a size of 49.9 TB before compression. We also provide open-source software modules to simplify access to the data and provide direct usage examples.

7.
Nat Nanotechnol ; 18(5): 479-485, 2023 May.
Article in English | MEDLINE | ID: mdl-36997756

ABSTRACT

Disentangling the attributes of a sensory signal is central to sensory perception and cognition and hence is a critical task for future artificial intelligence systems. Here we present a compute engine capable of efficiently factorizing high-dimensional holographic representations of combinations of such attributes, by exploiting the computation-in-superposition capability of brain-inspired hyperdimensional computing, and the intrinsic stochasticity associated with analogue in-memory computing based on nanoscale memristive devices. Such an iterative in-memory factorizer is shown to solve problems at least five orders of magnitude larger than otherwise tractable, while substantially lowering the computational time and space complexity. We present a large-scale experimental demonstration of the factorizer by employing two in-memory compute chips based on phase-change memristive devices. The dominant matrix-vector multiplication operations take a constant time, irrespective of the size of the matrix, thus reducing the computational time complexity to merely the number of iterations. Moreover, we experimentally demonstrate the ability to reliably and efficiently factorize visual perceptual representations.
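A minimal software sketch of a resonator-style iterative factorizer over bipolar vectors: the product vector is repeatedly unbound with one factor estimate and cleaned up against the other factor's codebook. Dimensionality and codebook sizes are toy values, and everything runs as plain arithmetic rather than in-memory hardware.

```python
import random

# Hedged sketch: resonator-style factorization of a bound bipolar vector.
# Toy scale; the in-memory chips in the paper replace the matmuls below.
random.seed(0)
D = 512                        # vector dimensionality

def rand_vec():
    return [random.choice((-1, 1)) for _ in range(D)]

def bind(a, b):                # elementwise product binds two factors
    return [ai * bi for ai, bi in zip(a, b)]

def sign(v):
    return [1 if x >= 0 else -1 for x in v]

def cleanup(v, codebook):
    # project onto the codebook (dot products), then back to bipolar
    dots = [sum(vi * ci for vi, ci in zip(v, c)) for c in codebook]
    comb = [sum(d * c[i] for d, c in zip(dots, codebook)) for i in range(D)]
    return sign(comb), dots

Cx = [rand_vec() for _ in range(4)]
Cy = [rand_vec() for _ in range(4)]
s = bind(Cx[1], Cy[2])         # the product vector to factorize

# initialize the estimate as the superposition of the whole codebook
x_hat = sign([sum(c[i] for c in Cx) for i in range(D)])
for _ in range(10):
    y_hat, dy = cleanup(bind(s, x_hat), Cy)   # unbind x, clean toward Cy
    x_hat, dx = cleanup(bind(s, y_hat), Cx)   # unbind y, clean toward Cx

print(dx.index(max(dx)), dy.index(max(dy)))
```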

8.
Sensors (Basel) ; 23(4)2023 Feb 12.
Article in English | MEDLINE | ID: mdl-36850662

ABSTRACT

Hand gesture recognition applications based on surface electromyographic (sEMG) signals can benefit from on-device execution to achieve faster and more predictable response times and higher energy efficiency. However, deploying state-of-the-art deep learning (DL) models for this task on memory-constrained and battery-operated edge devices, such as wearables, requires a careful optimization process, both at design time, with an appropriate tuning of the DL models' architectures, and at execution time, where the execution of large and computationally complex models should be avoided unless strictly needed. In this work, we pursue both optimization targets, proposing a novel gesture recognition system that improves upon the state-of-the-art models both in terms of accuracy and efficiency. At the level of DL model architecture, we apply for the first time tiny transformer models (which we call bioformers) to sEMG-based gesture recognition. Through an extensive architecture exploration, we show that our most accurate bioformer achieves a higher classification accuracy on the popular Non-Invasive Adaptive hand Prosthetics Database 6 (Ninapro DB6) dataset compared to the state-of-the-art convolutional neural network (CNN) TEMPONet (+3.1%). When deployed on the RISC-V-based low-power system-on-chip (SoC) GAP8, bioformers that outperform TEMPONet in accuracy consume 7.8×-44.5× less energy per inference. At runtime, we propose a three-level dynamic inference approach that combines a shallow classifier, i.e., a random forest (RF) implementing a simple "rest detector", with two bioformers of different accuracy and complexity, which are sequentially applied to each new input, stopping the classification early for "easy" data. With this mechanism, we obtain a flexible inference system, capable of working at many different operating points in terms of accuracy and average energy consumption. On GAP8, we obtain a further 1.03×-1.35× energy reduction compared to static bioformers at iso-accuracy.
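The three-level dynamic inference can be sketched as a confidence-gated cascade: a cheap rest detector runs first, a small classifier exits early on easy inputs, and the large classifier runs only when needed. The stage functions below are stand-ins, not the paper's RF and bioformers.

```python
# Hedged sketch: three-level dynamic inference with early exit.
# rest_detector/small/large are toy stand-ins keyed on "signal energy".

def cascade(x, rest_detector, small, large, conf_thresh=0.8):
    if rest_detector(x):
        return "rest", "stage0"             # cheapest path
    label, conf = small(x)
    if conf >= conf_thresh:
        return label, "stage1"              # early exit on easy inputs
    return large(x)[0], "stage2"            # large model only when unsure

rest = lambda x: x < 0.1
small = lambda x: ("grasp", 0.9) if x > 0.5 else ("pinch", 0.6)
large = lambda x: ("pinch", 0.99)

print(cascade(0.05, rest, small, large))
print(cascade(0.9, rest, small, large))
print(cascade(0.3, rest, small, large))
```

Moving `conf_thresh` trades average energy for accuracy, which is exactly the family of operating points the abstract describes.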


Subjects
Electric Power Supplies; Gestures; Humans; Physical Phenomena; Databases, Factual; Fatigue
9.
Sensors (Basel) ; 22(24)2022 Dec 14.
Article in English | MEDLINE | ID: mdl-36560172

ABSTRACT

Recent studies show that the integrity of core perceptual and cognitive functions may be tested in a short time with Steady-State Visual Evoked Potentials (SSVEP) with low stimulation frequencies, between 1 and 10 Hz. Wearable EEG systems provide unique opportunities to test these brain functions on diverse populations in out-of-the-lab conditions. However, they also pose significant challenges as the number of EEG channels is typically limited, and the recording conditions might induce high noise levels, particularly for low frequencies. Here we tested the performance of Normalized Canonical Correlation Analysis (NCCA), a frequency-normalized version of CCA, to quantify SSVEP from wearable EEG data with stimulation frequencies ranging from 1 to 10 Hz. We validated NCCA on data collected with an 8-channel wearable wireless EEG system based on BioWolf, a compact, ultra-light, ultra-low-power recording platform. The results show that NCCA correctly and rapidly detects SSVEP at the stimulation frequency within a few cycles of stimulation, even at the lowest frequency (4 s recordings are sufficient for a stimulation frequency of 1 Hz), outperforming a state-of-the-art normalized power spectral measure. Importantly, no preliminary artifact correction or channel selection was required. Potential applications of these results to research and clinical studies are discussed.
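NCCA itself requires multi-channel canonical correlation; as a hedged single-channel stand-in, the sketch below scores each candidate stimulation frequency by its correlation with sine/cosine references and picks the maximum, which conveys only the detection principle, not the NCCA algorithm.

```python
import math

# Hedged sketch: single-channel SSVEP frequency detection by correlating
# the signal with sine/cosine references (a stand-in for CCA/NCCA).

def ssvep_score(signal, freq, fs):
    n = len(signal)
    s = sum(v * math.sin(2 * math.pi * freq * i / fs)
            for i, v in enumerate(signal))
    c = sum(v * math.cos(2 * math.pi * freq * i / fs)
            for i, v in enumerate(signal))
    return math.sqrt(s * s + c * c) / n      # amplitude at that frequency

fs = 250                                      # Hz (an assumed EEG rate)
sig = [math.sin(2 * math.pi * 7 * i / fs) for i in range(4 * fs)]  # 7 Hz SSVEP
cands = [1, 4, 7, 10]
scores = {f: ssvep_score(sig, f, fs) for f in cands}
print(max(scores, key=scores.get))
```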


Subjects
Brain-Computer Interfaces; Wearable Electronic Devices; Electroencephalography/methods; Evoked Potentials, Visual; Canonical Correlation Analysis; Photic Stimulation/methods; Algorithms
10.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 2518-2522, 2022 07.
Article in English | MEDLINE | ID: mdl-36085653

ABSTRACT

Low-power wearable systems are essential for medical and industrial applications, but they face crucial implementation challenges when providing energy-efficient compact design while increasing the number of available channels, sampling rate and overall processing power. This work presents a small (39×41 mm) wireless embedded low-power HMI device for ExG signals, offering up to 16 channels sampled at up to 4 kSPS. By virtue of the high sampling rate and medical-grade signal quality (i.e. compliant with the IFCN standards), BioWolf16 is capable of accurate gesture recognition and enables the possibility to acquire data for neural spike extraction. When employed over an EMG gesture recognition paradigm, the system achieves 90.24% classification accuracy over nine gestures (16 channels @ 4 kSPS) while requiring only 16 mW of power (57 h of continuous operation) when deployed on the Mr. Wolf MCU, part of the system architecture. The system can also provide up to 14 h of real-time data streaming (4 kSPS), which can further be extended to 23 h when reducing the sampling rate to 1 kSPS. Our results also demonstrate that this design outperforms many features of current state-of-the-art systems. Clinical Relevance - This work establishes that BioWolf16 is a wearable ultra-low-power device enabling advanced multi-channel streaming and processing of medical-grade EMG signals, which can expand research opportunities and applications in healthcare and industrial scenarios.


Subjects
Gestures; Wearable Electronic Devices; Health Facilities; Industries; Recognition, Psychology
11.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 3723-3728, 2022 07.
Article in English | MEDLINE | ID: mdl-36086434

ABSTRACT

In the context of epilepsy monitoring, EEG artifacts are often mistaken for seizures due to their morphological similarity in both amplitude and frequency, making seizure detection systems susceptible to higher false alarm rates. In this work, we present the implementation of an artifact detection algorithm based on a minimal number of EEG channels on a parallel ultra-low-power (PULP) embedded platform. The analyses are based on the TUH EEG Artifact Corpus dataset and focus on the temporal electrodes. First, we extract optimal feature models in the frequency domain using an automated machine learning framework, achieving a 93.95% accuracy, with a 0.838 F1 score, for a 4-channel temporal EEG setup. The achieved accuracy levels surpass the state of the art by nearly 20%. Then, these algorithms are parallelized and optimized for a PULP platform, achieving a 5.21× improvement in energy efficiency compared to state-of-the-art low-power implementations of artifact detection frameworks. Combining this model with a low-power seizure detection algorithm would allow for 300 h of continuous monitoring on a 300 mAh battery in a wearable form factor and power budget. These results pave the way for implementing affordable, wearable, long-term epilepsy monitoring solutions with low false-positive rates and high sensitivity, meeting both patients' and caregivers' requirements. Clinical relevance - The proposed EEG artifact detection framework can be employed on wearable EEG recording devices, in combination with EEG-based epilepsy detection algorithms, for improved robustness in epileptic seizure detection scenarios.


Subjects
Artifacts; Epilepsy; Algorithms; Electroencephalography/methods; Epilepsy/diagnosis; Humans; Seizures/diagnosis
12.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 3139-3145, 2022 07.
Article in English | MEDLINE | ID: mdl-36086587

ABSTRACT

In recent years, in-ear electroencephalography (EEG) was demonstrated to record signals of similar quality compared to standard scalp-based EEG, and clinical applications of objective hearing threshold estimation have been reported. Existing devices, however, still lack important features. In fact, most of the available solutions are based on wet electrodes, need to be connected to external acquisition platforms, or do not offer on-board processing capabilities. Here we overcome all these limitations, presenting an ear-EEG system based on dry electrodes that includes all the acquisition, processing, and connectivity electronics directly in the ear bud. The earpiece is equipped with an ultra-low-power analog front-end for analog-to-digital conversion, a low-power MEMS microphone, a low-power inertial measurement unit, and an ARM Cortex-M4 based microcontroller enabling on-board processing and Bluetooth Low Energy connectivity. The system can stream raw EEG data or perform data processing directly in-ear. We test the device by analysing its capability to detect brain responses to external auditory stimuli, achieving 4 mW and 1.3 mW power consumption for data streaming and on-board processing, respectively. The latter allows for 600 hours of operation on a PR44 zinc-air battery. To the best of our knowledge, this is the first wireless and fully self-contained ear-EEG system performing on-board processing, all embedded in a single earbud. Clinical relevance - The proposed ear-EEG system can be employed for diagnostic tasks such as objective hearing threshold estimation, outside of clinical settings, thereby enabling it as a point-of-care solution. The long battery lifetime is also suitable for a continuous monitoring scenario.


Subjects
Electric Power Supplies; Electroencephalography; Electrodes; Hearing; Scalp
13.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 7077-7082, 2021 11.
Article in English | MEDLINE | ID: mdl-34892732

ABSTRACT

Human-machine interfaces follow machine learning approaches to interpret muscle states, mainly from electrical signals. These signals are easy to collect with tiny devices, on tight power budgets, interfaced closely to the human skin. However, natural movement behavior is not only determined by muscle activation; it also depends on an orchestration of several subsystems, including the instantaneous length of muscle fibers, typically inspected by means of ultrasound (US) imaging systems. This work shows for the first time an ultra-lightweight (7 g) electromyography (sEMG) system transparent to ultrasound, which enables the simultaneous acquisition of sEMG and US signals from the same location. The system is based on ultrathin and skin-conformable temporary tattoo electrodes (TTE) made of printed conducting polymer, connected to a tiny, parallel ultra-low-power acquisition platform (BioWolf). US phantom images recorded with the TTE had mean axial and lateral resolutions of 0.90±0.02 mm and 1.058±0.005 mm, respectively. The root mean squares of the sEMG signals recorded during US imaging of biceps contractions were 57±10 µV, and the mean frequencies were 92±1 Hz. We show that neither ultrasound images nor electromyographic signals are significantly altered during parallel and synchronized operation. Clinical relevance - Modern prosthetic engineering concepts use interfaces connected to muscles or nerves and employ machine learning models to infer the natural movement behavior of amputated limbs. However, relying only on a single data source (e.g., electromyography) reduces the quality of fine-grained motor control. To address this limitation, we propose a new and unobtrusive device capable of capturing the electrical and mechanical behavior of muscles in a parallel and synchronized fashion. This device can support the development of new prosthetic control and design concepts, further supporting clinical movement science in the configuration of better simulation models.


Subjects
Tattooing; Arm; Electromyography; Humans; Movement; Muscle, Skeletal/diagnostic imaging
14.
IEEE Trans Biomed Circuits Syst ; 15(6): 1149-1160, 2021 12.
Article in English | MEDLINE | ID: mdl-34932486

ABSTRACT

Motor imagery (MI) brain-machine interfaces (BMIs) enable us to control machines by merely thinking of performing a motor action. Practical use cases require a wearable solution where the classification of the brain signals is done locally near the sensor using machine learning models embedded on energy-efficient microcontroller units (MCUs), for assured privacy, user comfort, and long-term usage. In this work, we provide practical insights on the accuracy-cost trade-off for embedded BMI solutions. Our multispectral Riemannian classifier reaches 75.1% accuracy on a 4-class MI task. The accuracy is further improved by tuning different types of classifiers to each subject, achieving 76.4%. We further scale down the models by quantizing them to mixed-precision representations with minimal accuracy losses of 1% and 1.4%, respectively, which is still up to 4.1% more accurate than the state-of-the-art embedded convolutional neural network. We implement the model on a low-power MCU within an energy budget of merely 198 µJ, taking only 16.9 ms per classification. Classifying samples continuously, overlapping the 3.5 s samples by 50% to avoid missing user inputs, allows for operation at just 85 µW. Compared to related works in embedded MI-BMIs, our solution sets the new state of the art in terms of accuracy-energy trade-off for near-sensor classification.


Subjects
Brain-Computer Interfaces; Algorithms; Electroencephalography; Imagination; Neural Networks, Computer
15.
IEEE Trans Biomed Circuits Syst ; 15(6): 1196-1209, 2021 12.
Article in English | MEDLINE | ID: mdl-34673496

ABSTRACT

Heart Rate (HR) monitoring is increasingly performed in wrist-worn devices using low-cost photoplethysmography (PPG) sensors. However, Motion Artifacts (MAs) caused by movements of the subject's arm affect the performance of PPG-based HR tracking. This is typically addressed by coupling the PPG signal with acceleration measurements from an inertial sensor. Unfortunately, most standard approaches of this kind rely on hand-tuned parameters, which impair their generalization capabilities and their applicability to real data in the field. In contrast, methods based on deep learning, despite their better generalization, are considered to be too complex to deploy on wearable devices. In this work, we tackle these limitations, proposing a design space exploration methodology to automatically generate a rich family of deep Temporal Convolutional Networks (TCNs) for HR monitoring, all derived from a single "seed" model. Our flow involves a cascade of two Neural Architecture Search (NAS) tools and a hardware-friendly quantizer, whose combination yields both highly accurate and extremely lightweight models. When tested on the PPG-Dalia dataset, our most accurate model sets a new state-of-the-art in Mean Absolute Error. Furthermore, we deploy our TCNs on an embedded platform featuring a STM32WB55 microcontroller, demonstrating their suitability for real-time execution. Our most accurate quantized network achieves 4.41 Beats Per Minute (BPM) of Mean Absolute Error (MAE), with an energy consumption of 47.65 mJ and a memory footprint of 412 kB. At the same time, the smallest network that obtains a MAE below 8 BPM, among those generated by our flow, has a memory footprint of 1.9 kB and consumes just 1.79 mJ per inference.


Subjects
Photoplethysmography; Wearable Electronic Devices; Algorithms; Artifacts; Heart Rate/physiology; Signal Processing, Computer-Assisted
16.
IEEE Trans Biomed Circuits Syst ; 15(5): 926-937, 2021 10.
Article in English | MEDLINE | ID: mdl-34559663

ABSTRACT

Wearable, intelligent, and unobtrusive sensor nodes that monitor the human body and the surrounding environment have the potential to create valuable data for preventive human-centric ubiquitous healthcare. To attain this vision of unobtrusiveness, the smart devices have to gather and analyze data over long periods of time without the need for battery recharging or replacement. This article presents a software-configurable kinetic energy harvesting and power management circuit that enables self-sustainable wearable devices. By exploiting the kinetic transducer as an energy source and an activity sensor simultaneously, the proposed circuit provides highly efficient context-aware control features. Its mixed-signal nano-power context awareness allows reaching energy neutrality even in energy-drought periods, thus significantly relaxing the energy storage requirements. Furthermore, the asynchronous sensing approach also doubles as a coarse-grained human activity recognition frontend. Experimental results, using commercial micro-kinetic generators, demonstrate the flexibility and potential of this approach: the circuit achieves a quiescent current of 57 nA and a maximum load current of 300 mA, delivered with a harvesting efficiency of 79%. Based on empirically collected motion data, the system achieves an energy surplus of over 232 mJ per day in a wrist-worn application while executing activity recognition at an accuracy of 89% and a latency of 60 s.


Subjects
Wearable Electronic Devices; Electric Power Supplies; Humans; Monitoring, Physiologic; Motion; Wrist
17.
Nat Commun ; 12(1): 5546, 2021 Sep 20.
Article in English | MEDLINE | ID: mdl-34545090

ABSTRACT

The mitigation of rapid mass movements involves a subtle interplay between field surveys, numerical modelling, and experience. Hazard engineers rely on a combination of best practices and, if available, historical facts as a vital prerequisite in establishing reproducible and accurate hazard zoning. Full-scale field tests have been performed to reinforce the physical understanding of debris flows and snow avalanches. Rockfall dynamics, and especially the quantification of energy dissipation during the complex rock-ground interaction, remain largely unknown. The awareness of rock shape dependence is growing, but presently there exists little experimental basis on how rockfall hazard scales with rock mass, size, and shape. Here, we present a unique data set of induced single-block rockfall events comprising data from equant and wheel-shaped blocks with masses up to 2670 kg, quantifying the influence of rock shape and mass on lateral spreading and longitudinal runout and hence challenging common practices in rockfall hazard assessment.

18.
Brain Inform ; 8(1): 16, 2021 Aug 17.
Article in English | MEDLINE | ID: mdl-34403011

ABSTRACT

Brain-inspired high-dimensional (HD) computing represents and manipulates data using very long, random vectors with dimensionality in the thousands. This representation provides great robustness for various classification tasks where classifiers operate at low signal-to-noise ratio (SNR) conditions. Similarly, hyperdimensional modulation (HDM) leverages the robustness of complex-valued HD representations to reliably transmit information over a wireless channel, achieving a similar SNR gain compared to state-of-the-art codes. Here, we first propose methods to improve HDM in two ways: (1) reducing the complexity of encoding and decoding operations by generating, manipulating, and transmitting bipolar or integer vectors instead of complex vectors; (2) increasing the SNR gain by 0.2 dB using a new soft-feedback decoder; it can also increase the additive superposition capacity of HD vectors up to 1.7[Formula: see text] in noise-free cases. Secondly, we propose to combine encoding/decoding aspects of communication with classification into a single framework by relying on multifaceted HD representations. This leads to a near-channel classification (NCC) approach that avoids transformations between different representations and the overhead of multiple layers of encoding/decoding, hence reducing the latency and complexity of a wireless smart distributed system while providing robustness against noise and interference from other nodes. We provide a use-case for wearable hand gesture recognition with 5 classes from 64 EMG sensors, where the encoded vectors are transmitted to a remote node for either performing NCC, or reconstruction of the encoded data. In NCC mode, the original classification accuracy of 94% is maintained, even in the channel at an SNR of 0 dB, by transmitting 10,000-bit vectors. We remove the redundancy by reducing the vector dimensionality to 2048 bits, which still exhibits a graceful degradation: less than 6% accuracy loss occurs in the channel at -5 dB, even with interference from 6 nodes that simultaneously transmit their encoded vectors. In the reconstruction mode, it improves the mean-squared error by up to 20 dB, compared to standard decoding, when transmitting 2048-dimensional vectors.
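A hedged sketch of the bipolar-superposition idea from point (1): several bipolar codewords are transmitted as an additive superposition and recovered by thresholding normalized dot products against the codebook. Scale is toy, the channel is noise-free, and there is no soft feedback.

```python
import random

# Hedged sketch: additive superposition of bipolar HD codewords and
# correlation-threshold decoding (toy scale, noise-free channel).
random.seed(1)
D = 1000
codebook = [[random.choice((-1, 1)) for _ in range(D)] for _ in range(8)]

sent = [0, 3, 5]                              # codeword indices superposed
tx = [sum(codebook[k][i] for k in sent) for i in range(D)]

def decode(rx, codebook, thresh=0.5):
    # normalized correlation with each codeword; near 1 = present
    return [k for k, c in enumerate(codebook)
            if sum(r * ci for r, ci in zip(rx, c)) / D > thresh]

print(decode(tx, codebook))
```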

19.
Front Comput Neurosci ; 15: 674154, 2021.
Article in English | MEDLINE | ID: mdl-34413731

ABSTRACT

In-memory computing (IMC) is a non-von Neumann paradigm that has recently established itself as a promising approach for energy-efficient, high-throughput hardware for deep learning applications. One prominent application of IMC is that of performing matrix-vector multiplication in O(1) time complexity by mapping the synaptic weights of a neural-network layer to the devices of an IMC core. However, because of the significantly different pattern of execution compared to previous computational paradigms, IMC requires a rethinking of the architectural design choices made when designing deep-learning hardware. In this work, we focus on application-specific IMC hardware for inference of Convolutional Neural Networks (CNNs), and provide methodologies for implementing the various architectural components of the IMC core. Specifically, we present methods for mapping synaptic weights and activations onto the memory structures and give evidence of the various trade-offs therein, such as the one between on-chip memory requirements and execution latency. Lastly, we show how to employ these methods to implement a pipelined dataflow that offers throughput and latency beyond the state of the art for image classification tasks.
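A hedged sketch of how a convolution becomes the matrix-vector products an IMC core executes: the standard im2col unrolling turns each output pixel into one MVM against the flattened kernel. Sizes are toy and the "crossbar" here is ordinary arithmetic, not memristive hardware.

```python
# Hedged sketch: a 2D convolution expressed as matrix-vector products
# via im2col, the mapping an IMC crossbar would execute in O(1) per MVM.

def im2col(image, k):
    """Unroll each k x k patch of a 2D image into a flat vector."""
    h, w = len(image), len(image[0])
    patches = []
    for r in range(h - k + 1):
        for c in range(w - k + 1):
            patches.append([image[r + i][c + j]
                            for i in range(k) for j in range(k)])
    return patches

def conv_as_mvm(image, kernel):
    k = len(kernel)
    flat_k = [v for row in kernel for v in row]       # one crossbar row
    # one matrix-vector product per output pixel
    return [sum(a * b for a, b in zip(flat_k, p)) for p in im2col(image, k)]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ker = [[1, 0],
       [0, 1]]      # sums top-left + bottom-right of each 2x2 patch
print(conv_as_mvm(img, ker))
```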

20.
IEEE Trans Med Imaging ; 40(8): 2023-2029, 2021 08.
Article in English | MEDLINE | ID: mdl-33798077

ABSTRACT

Wide-scale adoption of optoacoustic imaging in biology and medicine critically depends on the availability of affordable scanners combining ease of operation with optimal imaging performance. Here we introduce LightSpeed: a low-cost real-time volumetric handheld optoacoustic imager based on a new compact software-defined ultrasound digital acquisition platform and a pulsed laser diode. It supports the simultaneous signal acquisition from up to 192 ultrasound channels and provides a high-bandwidth direct optical link (2× 100G Ethernet) to the host PC for ultra-high frame rate image acquisitions. We demonstrate the use of the system for ultrafast (500 Hz) 3D human angiography with a rapidly moving handheld probe. LightSpeed attained image quality comparable with conventional optoacoustic imaging systems employing bulky acquisition electronics and a Q-switched pulsed laser. Our results thus pave the way towards a new generation of compact, affordable and high-performance optoacoustic scanners.


Subjects
Photoacoustic Techniques; Angiography; Humans; Lasers; Software; Ultrasonography