ABSTRACT
Wearable Artificial Intelligence-of-Things (AIoT) devices need to be resource- and energy-efficient. In this paper, we introduce a quantized multilayer perceptron (qMLP) that converts ECG signals to binary images, which can be combined with a binary convolutional neural network (bCNN) for classification. We deploy our model into a low-power, low-resource field-programmable gate array (FPGA) fabric. The model requires 5.8× fewer multiply-and-accumulate (MAC) operations than known wearable CNN models. It also achieves a classification accuracy of 98.5%, sensitivity of 85.4%, specificity of 99.5%, precision of 93.3%, and F1-score of 89.2%, with a dynamic power dissipation of 34.9 µW.
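The five reported figures all derive from a binary confusion matrix. The sketch below shows how each metric is computed; the counts (`tp`, `fn`, `fp`, `tn`) are illustrative placeholders, not the paper's actual values.

```python
# Hypothetical binary confusion-matrix counts (illustrative only,
# not taken from the paper's evaluation).
tp, fn, fp, tn = 82, 14, 6, 898

accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)          # recall on the positive (ectopic) class
specificity = tn / (tn + fp)          # recall on the negative class
precision   = tp / (tp + fp)
f1 = 2 * precision * sensitivity / (precision + sensitivity)

print(f"acc={accuracy:.3f} sens={sensitivity:.3f} "
      f"spec={specificity:.3f} prec={precision:.3f} f1={f1:.3f}")
```

Note that with a heavy class imbalance (as in beat classification, where ectopic beats are rare), accuracy can be high even when sensitivity is modest, which is why the paper reports all five metrics.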
Subjects
Artificial Intelligence , Wearable Electronic Devices , Algorithms , Neural Networks, Computer

ABSTRACT
Wearable Artificial Intelligence-of-Things (AIoT) requires edge devices to be resource- and energy-efficient. In this paper, we design and implement an efficient binary convolutional neural network (bCNN) algorithm that uses function-merging and block-reuse techniques to classify between ventricular and non-ventricular ectopic beat images. We deploy our model into a low-resource, low-power field-programmable gate array (FPGA) fabric. Our model achieves a classification accuracy of 97.3%, sensitivity of 91.3%, specificity of 98.1%, precision of 86.7%, and F1-score of 88.9%, with a dynamic power dissipation of only 10.5 µW.
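The power savings of bCNNs on FPGA fabric come from replacing multiply-and-accumulate operations with XNOR and popcount over {-1, +1} weights and activations. The sketch below illustrates that standard trick (it is not the authors' implementation, and the function-merging and block-reuse optimizations are not shown).

```python
import numpy as np

def binarize(x):
    """Sign-binarize real values to {-1, +1}."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def xnor_popcount_dot(a_bits, w_bits):
    """Dot product of two {-1, +1} vectors without multiplies.

    Encoding {-1, +1} as bits {0, 1}, positions where the bits agree
    are exactly popcount(XNOR(a, w)); each match contributes +1 and
    each mismatch -1, so dot = 2 * matches - n.
    """
    n = a_bits.size
    a = (a_bits > 0).astype(np.uint8)   # {-1, +1} -> {0, 1}
    w = (w_bits > 0).astype(np.uint8)
    matches = int(np.sum(a == w))       # popcount of XNOR
    return 2 * matches - n

# Sanity check against the ordinary integer dot product.
rng = np.random.default_rng(0)
a = binarize(rng.standard_normal(64))
w = binarize(rng.standard_normal(64))
assert xnor_popcount_dot(a, w) == int(np.dot(a, w))
```

On an FPGA, XNOR gates and popcount trees are far cheaper in area and power than MAC units, which is what enables microwatt-level dynamic dissipation.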
Subjects
Artificial Intelligence , Ventricular Premature Complexes , Conservation of Energy Resources , Electrocardiography , Humans , Neural Networks, Computer

ABSTRACT
As 5G communication technology allows faster access to extended information and knowledge, a more sophisticated human-machine interface beyond touchscreens and keyboards is necessary to improve the communication bandwidth and overcome the interfacing barrier. However, fully replicating human interaction, including operational dexterity, spatial awareness, sensory feedback, and collaborative capability, remains a challenge. Here, we demonstrate a hybrid-flexible wearable system, consisting of simple bimodal capacitive sensors and a customized low-power interface circuit integrated with machine learning algorithms, that accurately recognizes complex gestures. The 16-channel sensor array simultaneously extracts spatial and temporal information about finger movement (deformation) and hand location (proximity). Using machine learning, accuracies of over 99% and 91% are achieved for user-independent static and dynamic gesture recognition, respectively. Our approach shows that an extremely simple bimodal sensing platform that identifies local interactions and perceives spatial context concurrently is valuable for sign communication, remote robotics, and smart manufacturing.
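For static gestures, each frame of the 16-channel array reduces to a fixed-length feature vector that a lightweight classifier can label. The sketch below uses synthetic data and a nearest-centroid classifier as a stand-in; the channel layout, gesture set, and classifier choice are all assumptions, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: 16-channel capacitive readings
# (deformation + proximity) per frame, for 5 illustrative static gestures.
n_gestures, n_channels, n_train = 5, 16, 40
centers = rng.standard_normal((n_gestures, n_channels))
X = (np.repeat(centers, n_train, axis=0)
     + 0.1 * rng.standard_normal((n_gestures * n_train, n_channels)))
y = np.repeat(np.arange(n_gestures), n_train)

# Nearest-centroid classifier: one averaged template per static gesture.
templates = np.stack([X[y == g].mean(axis=0) for g in range(n_gestures)])

def classify(frame):
    """Assign a 16-channel frame to the closest gesture template."""
    return int(np.argmin(np.linalg.norm(templates - frame, axis=1)))

# A clean reading near a known gesture lands on that gesture's template.
assert classify(centers[3]) == 3
```

Dynamic gestures additionally require the temporal dimension (a sequence of frames), which is why the reported dynamic-recognition accuracy (91%) is lower than the static one (99%).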