ABSTRACT
Compared to traditional lower-limb prostheses (LLPs), intelligent LLPs are more versatile devices built on emerging technologies, such as microcontrollers and user-controlled interfaces (UCIs). As these technologies allow a higher level of automation and greater wearer involvement in adjusting LLP settings, the previous framework established to study the human factors elements that affect wearer-LLP interaction may not be sufficient to capture the new elements (e.g., transparency) and dynamics of this interaction. In addition, the increased complexity of the interaction amplifies the limitations of traditional approaches to evaluating wearer-LLP interaction. Therefore, to ensure wearer acceptance and adoption, we propose, from a human factors perspective, a new framework that introduces elements and usability requirements for wearer-LLP interaction. This paper organizes the human factors elements that arise with the development of intelligent LLP technologies into three aspects (wearer, device, and task) using a classic model of human-machine systems. By adopting Nielsen's five usability requirements, we introduce learnability, efficiency, memorability, use error, and satisfaction into the evaluation of wearer-LLP interaction. We identify two types of wearer-LLP interaction. The first type, direct interaction, occurs when the wearer continuously interacts with the intelligent LLP (primarily when the LLP is in action); the second type, indirect interaction, occurs when the wearer initiates communication with the LLP, usually through a UCI, to address current or foreseeable challenges. For each type of interaction, we highlight new elements, such as device transparency and the wearer's prior knowledge of the UCI. In addition, we redefine the usability goals of the two types of wearer-LLP interaction using Nielsen's five usability requirements and review methods to evaluate the interaction. Researchers and designers of intelligent LLPs should consider the new device elements that may additionally influence wearers' acceptance and the need to interpret findings within the constraints of the specific wearer and task characteristics. The proposed framework can also be used to organize the literature and identify gaps for future directions. By adopting holistic usability requirements, findings across empirical studies become more comparable. At the end of this paper, we discuss research trends and future directions in the human factors design of intelligent LLPs.
Subject(s)
Artificial Limbs, Humans, Lower Extremity/physiology, User-Computer Interface, Prosthesis Design, Man-Machine Systems, Ergonomics, Artificial Intelligence
ABSTRACT
BACKGROUND: Sensory reafferents are crucial to correct our posture and movements, both reflexively and in a cognitively driven manner. They are also integral to developing and maintaining a sense of agency for our actions. In cases of compromised reafferents, such as for persons with amputated or congenitally missing limbs, or diseases of the peripheral and central nervous systems, augmented sensory feedback therefore has the potential for a strong, neurorehabilitative impact. We here developed an untethered vibrotactile garment that provides walking-related sensory feedback remapped non-invasively to the wearer's back. Using the so-called FeetBack system, we investigated whether healthy individuals perceive synchronous remapped feedback as corresponding to their own movement (motor awareness) and how temporal delays in tactile locomotor feedback affect both motor awareness and walking characteristics (adaptation). METHODS: We designed the system to remap somatosensory information from the foot-soles of healthy participants (N = 29), using vibrotactile apparent movement, to two linear arrays of vibrators mounted ipsilaterally on the back. This mimics the translation of the centre-of-mass over each foot during stance-phase. The intervention included trials with real-time or delayed feedback, resulting in a total of 120 trials and approximately 750 step-cycles, i.e. 1500 steps, per participant. Based on previous work, experimental delays ranged from 0 ms to 1500 ms to include up to a full step-cycle (baseline stride-time: µ = 1144 ± 9 ms, range 986-1379 ms). After each trial participants were asked to report their motor awareness. RESULTS: Participants reported high correspondence between their movement and the remapped feedback for real-time trials (85 ± 3%, µ ± σ), and the lowest correspondence for trials with left-right reversed feedback (22 ± 6% at 600 ms delay). Participants further reported high correspondence for trials delayed by a full gait-cycle (78 ± 4% at 1200 ms delay), such that the modulation of motor awareness is best expressed as a sinusoidal relationship reflecting the phase-shifts between actual and remapped tactile feedback (cos model: 38% reduction of the residual sum of squares (RSS) compared to a linear fit, p < 0.001). The temporal delay systematically but only moderately modulated participant stride-time in a sinusoidal fashion (3% reduction of RSS compared to a linear fit, p < 0.01). CONCLUSIONS: We here demonstrate that lateralized, remapped haptic feedback modulates motor awareness in a systematic, gait-cycle dependent manner. Based on this approach, the FeetBack system was used to provide augmented sensory information pertinent to the user's ongoing movement, such that they reported high motor awareness for (re)synchronized feedback of their movements. While motor adaptation was limited in the current cohort of healthy participants, the next step will be to evaluate whether individuals with a compromised peripheral nervous system, as well as those with conditions of the central nervous system such as Parkinson's disease, may benefit from the FeetBack system, both for maintaining a sense of agency over their movements and for systematic gait-adaptation in response to the remapped, self-paced, rhythmic feedback.
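A minimal sketch of the kind of model comparison described above, assuming per-trial delay and correspondence values are available (the data, function names, and parameter values below are illustrative placeholders, not the study's actual pipeline):

```python
# Sketch: compare a cosine (phase-shift) model of motor awareness against a
# linear fit via residual sum of squares, as in the RSS comparison above.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
stride_ms = 1144.0                        # baseline stride-time (from abstract)
delays_ms = np.linspace(0, 1500, 26)      # feedback delays, illustrative grid
# Synthetic awareness ratings peaking at zero delay and one full gait-cycle.
awareness = 0.5 + 0.3 * np.cos(2 * np.pi * delays_ms / stride_ms) \
    + rng.normal(0, 0.05, delays_ms.size)

def cos_model(d, a, b, period):
    return a + b * np.cos(2 * np.pi * d / period)

# Fit the cosine model (period initialised near the stride-time) and a line.
p_cos, _ = curve_fit(cos_model, delays_ms, awareness, p0=[0.5, 0.3, stride_ms])
p_lin = np.polyfit(delays_ms, awareness, 1)

rss_cos = np.sum((awareness - cos_model(delays_ms, *p_cos)) ** 2)
rss_lin = np.sum((awareness - np.polyval(p_lin, delays_ms)) ** 2)
print(f"RSS reduction vs linear fit: {100 * (1 - rss_cos / rss_lin):.0f}%")
```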
Subject(s)
Sensory Feedback, Foot, Touch Perception, Humans, Male, Female, Adult, Sensory Feedback/physiology, Foot/physiology, Touch Perception/physiology, Young Adult, Walking/physiology, Vibration, Touch/physiology
ABSTRACT
Accurately capturing human movements is a crucial element of health status monitoring and a necessary precondition for realizing future virtual reality/augmented reality applications. Flexible motion sensors with exceptional sensitivity are capable of detecting physical activities by converting them into resistance fluctuations. Silver nanowires (AgNWs) have become a preferred choice for the development of various types of sensors due to their outstanding electrical conductivity, transparency, and flexibility within polymer composites. Herein, we present the design and fabrication of a flexible strain sensor based on silver nanowires. Suitable substrate materials were selected, and the sensor's sensitivity and fatigue properties were characterized and tested, with the sensor maintaining reliability after 5000 deformation cycles. Different sensors were prepared by controlling the concentration of silver nanowires so that motion signals could be collected from various parts of the human body. Additionally, we explored potential applications of these sensors in fields such as health monitoring and virtual reality. In summary, this work integrated the acquisition of different human motion signals, demonstrating great potential for future multifunctional wearable electronic devices.
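The abstract reports sensitivity without giving a formula; for reference, the sensitivity of resistive strain sensors of this type is conventionally expressed as a gauge factor (standard definition, not a value or equation taken from the paper):

```latex
% Gauge factor: relative resistance change per unit applied strain.
GF = \frac{\Delta R / R_0}{\varepsilon},
\qquad \Delta R = R - R_0,
\qquad \varepsilon = \frac{\Delta L}{L_0}
```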
Subject(s)
Nanowires, Silver, Wearable Electronic Devices, Nanowires/chemistry, Humans, Silver/chemistry, Movement/physiology, Electric Conductivity, Biosensing Techniques/instrumentation, Biosensing Techniques/methods, Physiological Monitoring/instrumentation, Physiological Monitoring/methods
ABSTRACT
The paradigm of Industry 5.0 pushes the transition from the traditional to a novel, smart, digital, and connected industry, where well-being is key to enhancing productivity, optimizing man-machine interaction, and guaranteeing workers' safety. This work aims to conduct a systematic review of current methodologies for monitoring and analyzing physical and cognitive ergonomics. Three research questions are addressed: (1) which technologies are used to assess the physical and cognitive well-being of workers in the workplace, (2) how the acquired data are processed, and (3) for what purpose this well-being is evaluated. This way, individual factors within the holistic assessment of worker well-being are highlighted, and the information is presented concisely. The analysis was conducted following the PRISMA 2020 statement guidelines. From the sixty-five articles collected, the most adopted (1) technological solutions, (2) parameters, and (3) data analysis and processing methods were identified. Wearable inertial measurement units and RGB-D cameras are the most prevalent devices for physical monitoring; in cognitive ergonomics, cardiac activity is the most adopted physiological parameter. Furthermore, insights on practical issues and future developments are provided. Future research should focus on developing multi-modal systems that combine these aspects, with particular emphasis on their practical application in real industrial settings.
Subject(s)
Ergonomics, Workplace, Humans, Cognition/physiology, Ergonomics/instrumentation, Industries, Occupational Health, Wearable Electronic Devices, Workplace/psychology
ABSTRACT
The emergence of the Metaverse is raising important questions in the field of human-machine interaction that must be addressed for a successful implementation of the new paradigm. The exploration and integration of both technology and human interaction within this new framework are therefore needed. This paper describes an innovative and technically viable proposal for virtual shopping in the fashion field. Virtual hands scanned directly from the real world have been integrated, after a retopology process, into a virtual environment created for the Metaverse and combined with digital nails. Human interaction with the Metaverse is achieved by acquiring the real posture of the user's hands with an infrared-based sensor and mapping it onto the virtualized version, achieving natural identification. The technique was successfully tested in an immersive shopping experience with the Meta Quest 2 headset as a pilot project, where a transaction mechanism based on blockchain technology (non-fungible tokens, NFTs) allowed for the development of a feasible solution for massive audiences. The consumers' reactions were extremely positive, with a total of 250 in-person participants and 120 remote accesses to the Metaverse. This project raises interesting technical questions, the resolution of which may be useful for future implementations.
Subject(s)
Blockchain, Hand, Humans, Pilot Projects, Upper Extremity, Posture
ABSTRACT
Surface electromyography (sEMG) offers a novel method for human-machine interactions (HMIs), since it is a distinct physiological electrical signal that conveys human movement intention and muscle information. Unfortunately, the nonlinear and non-smooth features of sEMG signals often make joint angle estimation difficult. This paper proposes a joint angle prediction model for the continuous estimation of wrist motion angle changes based on sEMG signals. The proposed model combines a temporal convolutional network (TCN) with a long short-term memory (LSTM) network: the TCN senses local information and mines deeper information in the sEMG signals, while the LSTM, with its excellent temporal memory capability, compensates for the TCN's limited ability to capture long-term dependencies in the sEMG signals, resulting in better predictions. We validated the proposed method on the publicly available Ninapro DB1 dataset, selecting the first eight subjects and three types of wrist movements: wrist flexion (WF), wrist ulnar deviation (WUD), and wrist extension with closed hand (WECH). Finally, the proposed TCN-LSTM model was compared with the TCN and LSTM models. The proposed TCN-LSTM outperformed the TCN and LSTM models in terms of root mean square error (RMSE) and average coefficient of determination (R²). The TCN-LSTM model achieved an average RMSE of 0.064, representing a 41% reduction compared to the TCN model and a 52% reduction compared to the LSTM model. The TCN-LSTM also achieved an average R² of 0.93, an 11% improvement over the TCN model and an 18% improvement over the LSTM model.
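The abstract does not include implementation details; the following PyTorch sketch illustrates the general TCN-LSTM shape described (layer widths, window length, and electrode count are assumptions, not the authors' configuration):

```python
# Illustrative TCN-LSTM regressor for continuous joint-angle estimation from
# sEMG windows: dilated 1-D convolutions extract local features, an LSTM
# models longer-range temporal dependencies, and a linear head outputs the
# predicted wrist angle.
import torch
import torch.nn as nn

class TCNLSTM(nn.Module):
    def __init__(self, n_channels=10, hidden=64):
        super().__init__()
        self.tcn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=3, padding="same", dilation=1),
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding="same", dilation=2),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # one continuous angle per window

    def forward(self, x):                 # x: (batch, channels, time)
        f = self.tcn(x).transpose(1, 2)   # LSTM expects (batch, time, features)
        out, _ = self.lstm(f)
        return self.head(out[:, -1])      # angle at the end of the window

emg = torch.randn(8, 10, 200)             # 8 windows, 10 electrodes, 200 samples
print(TCNLSTM()(emg).shape)               # -> torch.Size([8, 1])
```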
Subject(s)
Electromyography, Neural Networks (Computer), Wrist Joint, Humans, Electromyography/methods, Wrist Joint/physiology, Range of Motion (Articular)/physiology, Movement/physiology, Computer-Assisted Signal Processing, Algorithms, Adult, Male, Wrist/physiology
ABSTRACT
BACKGROUND: There is a significant need to monitor human cognitive performance in complex environments, one example being pilot performance. However, existing assessments largely focus on subjective experiences (e.g., questionnaires) and the evaluation of behavior (e.g., aircraft handling) as surrogates for cognition, or utilize brainwave measures that require artificial setups (e.g., simultaneous auditory stimuli) that intrude on the primary tasks. Blink-related oscillations (BROs) are a recently discovered neural phenomenon associated with spontaneous blinking that can be captured without artificial setups and are also modulated by cognitive loading and the external sensory environment, making them ideal for brain function assessment within complex operational settings. METHODS: Electroencephalography (EEG) data were recorded from eight adult participants (five female, mean age 21.1 years) while they completed the Multi-Attribute Task Battery under three different cognitive loading conditions. BRO responses in the time and frequency domains were derived from the EEG data, and BRO responses were compared across cognitive loading conditions. Blink behavior was assessed simultaneously. RESULTS: Blink behavior assessments revealed decreasing blink rate with increasing cognitive load (p < 0.001). Prototypical BRO responses were successfully captured in all participants (p < 0.001). BRO responses reflected differences in task-induced cognitive loading in both the time and frequency domains (p < 0.05). Additionally, reduced pre-blink theta band desynchronization with increasing cognitive load was observed (p < 0.05). CONCLUSION: This study confirms the ability of BRO responses to capture cognitive loading effects as well as preparatory pre-blink cognitive processes in anticipation of the upcoming blink during a complex multitasking situation. These results suggest that blink-related neural processing could be a potential avenue for cognitive state evaluation in operational settings, both specialized environments (cockpits, space exploration, military units) and everyday situations (driving, athletics, human-machine interaction), where human cognition needs to be seamlessly monitored and optimized.
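As a generic illustration of the blink-locked averaging that BRO analyses build on (not the authors' pipeline; the sampling rate, channel count, and blink times below are placeholders):

```python
# Cut fixed EEG windows around detected blink onsets and average them to get
# a time-domain blink-related response; an FFT per epoch gives the
# frequency-domain view.
import numpy as np

fs = 256                                   # sampling rate (Hz), assumed
eeg = np.random.randn(32, 60 * fs)         # (channels, samples), placeholder
blink_onsets = np.array([5, 12, 20, 33, 47]) * fs   # blink onsets in samples

pre, post = int(0.5 * fs), int(1.0 * fs)   # window: -0.5 s .. +1.0 s
epochs = np.stack([eeg[:, b - pre:b + post] for b in blink_onsets])
bro_time = epochs.mean(axis=0)             # averaged time-domain response
bro_freq = np.abs(np.fft.rfft(epochs, axis=-1)).mean(axis=0)  # mean spectra
print(bro_time.shape, bro_freq.shape)
```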
Subject(s)
Blinking, Brain Waves, Adult, Humans, Cognition/physiology, Electroencephalography/methods, Brain Waves/physiology, Brain/physiology
ABSTRACT
This theoretical article examines the concept of social support in the context of human-automation interaction, outlining several critical issues. We identified several factors that we expect to influence the consequences of social support and the extent to which it is perceived as appropriate (e.g. provider possibilities, recipient expectations), notably regarding potential threats to self-esteem. We emphasise the importance of performance (including extra-role performance) as a potential outcome, whereas previous research has primarily concentrated on health and well-being. We discuss to what extent automation may provide different types of social support (e.g. emotional, instrumental) and how it differs from human support. Finally, we propose a taxonomy of automated support, arguing that the source of support is not a binary concept. We conclude that more empirical work is needed to examine the multiple effects of social support on core performance indicators and extra-role performance, and emphasise that there are ethical questions involved.
This theoretical article examines the role of automated social support given the increasing capabilities of automated systems. It concludes that automated systems are likely to be perceived as supportive if they conform to pertinent design criteria. However, empirical studies are needed to assess the impact of the complex interplay of humans and automation being involved together in the design and provision of social support.
Subject(s)
Social Support, Humans, Automation, Self Concept, Man-Machine Systems, Emotions
ABSTRACT
The present study investigates the statistics and spectral content of natural vestibular stimuli experienced by healthy human subjects during three unconstrained activities. More specifically, we assessed how the characteristics of vestibular inputs are altered during the operation of a complex human-machine interface (a flight in a helicopter simulator) compared with more ecological tasks, namely a walk in an office space and a seated visual exploration task. As previously reported, we found that the power spectra of vestibular stimuli experienced during self-navigation could be modeled by two power laws but noted a potential effect of task intensity on the transition frequency between the two fits. In contrast, both tasks that required a seated position had power spectra that were better described by an inverted U shape in all planes of motion. Taken together, our results suggest that 1) walking elicits stereotyped vestibular inputs whose power spectra can be modeled by two power laws that intersect at a task intensity-dependent frequency; 2) body posture induces changes in the frequency content of vestibular information; 3) pilots tend to operate their aircraft in a way that does not generate highly nonecological vestibular stimuli; and 4) nevertheless, human-machine interfaces used as a means of manual navigation still impose some unnatural, contextual constraints on their operators. NEW & NOTEWORTHY: Building upon previously published research, this study assesses and compares the vestibular stimuli experienced by healthy subjects in natural tasks and during the interaction with a complex machine: a helicopter simulator. Our results suggest the existence of an anatomical filter, meaning that body posture shapes vestibular spectral content. Our findings further indicate that operators control their machine within a constrained operating range such that they experience vestibular stimulations that are as ecological as possible.
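For reference, the two-power-law description of the locomotor spectra can be written piecewise, with continuity enforced at the transition frequency (a generic form consistent with the abstract; the exponents and transition frequency are task-dependent fit parameters, not values reported here):

```latex
P(f) =
\begin{cases}
  A\, f^{-\alpha_1}, & f \le f_t, \\
  A\, f_t^{\alpha_2 - \alpha_1}\, f^{-\alpha_2}, & f > f_t,
\end{cases}
```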
Subject(s)
Vestibule (Labyrinth), Humans, Posture, Motion (Physics), Aircraft, Spatial Orientation
ABSTRACT
Sensing the interaction between the pilot and the control inceptors can provide important information about the pilot's activity during flight, potentially enabling the objective measurement of pilot workload, the application of preventive actions against loss of situational awareness, and the identification of emerging adverse couplings with the vehicle dynamics. This work presents an innovative pressure-sensing device developed to be seamlessly integrated into the grips of conventional aircraft control inceptors. The sensor, based on frustrated total internal reflection of light, is composed of low-cost elements and can be easily manufactured for different hand pressure ranges. The characteristics of the sensor are first demonstrated in laboratory calibration tests. Subsequently, applications in flight simulator testing are presented, focusing on the objective representation of the pilot's instantaneous workload.
ABSTRACT
This article presents the Network Empower and Prototyping Platform (NEP+), a flexible framework purposefully crafted to simplify the process of interactive application development, catering to both technical and non-technical users. The name "NEP+" encapsulates the platform's dual mission: to empower the network-related capabilities of ZeroMQ and to provide software tools and interfaces for prototyping and integration. NEP+ accomplishes this through a comprehensive quality model and an integrated software ecosystem encompassing middleware, user-friendly graphical interfaces, a command-line tool, and an accessible end-user programming interface. This article primarily focuses on presenting the proposed quality model and software architecture, illustrating how they can empower developers to craft cross-platform, accessible, and user-friendly interfaces for various applications, with a particular emphasis on robotics and the Internet of Things (IoT). Additionally, we provide practical insights into the applicability of NEP+ by briefly presenting real-world user cases where human-centered projects have successfully utilized NEP+ to develop robotics systems. To further emphasize the suitability of NEP+ tools and interfaces for developer use, we conduct a pilot study that delves into usability and workload assessment. The outcomes of this study highlight the user-friendly features of NEP+ tools, along with their ease of adoption and cross-platform capabilities. The novelty of NEP+ fundamentally lies in its holistic approach, acting as a bridge across diverse user groups, fostering inclusivity, and promoting collaboration.
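NEP+ wraps ZeroMQ messaging; the abstract does not show NEP+ interfaces themselves, so the sketch below uses plain pyzmq to illustrate the underlying publish/subscribe pattern (the topic string and port are arbitrary choices for this sketch):

```python
# Minimal ZeroMQ publish/subscribe pattern of the kind middleware like NEP+
# builds on; run publisher and subscriber in separate processes.
import zmq

def publisher():
    ctx = zmq.Context()
    sock = ctx.socket(zmq.PUB)
    sock.bind("tcp://*:5555")
    sock.send_string('robot_status {"battery": 0.87}')  # topic + payload

def subscriber():
    ctx = zmq.Context()
    sock = ctx.socket(zmq.SUB)
    sock.connect("tcp://localhost:5555")
    sock.setsockopt_string(zmq.SUBSCRIBE, "robot_status")  # topic filter
    print(sock.recv_string())
```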
Subject(s)
Software, User-Computer Interface, Humans, Information Systems, Pilot Projects
ABSTRACT
Humans' performance varies due to the mental resources that are available to successfully pursue a task. To monitor users' current cognitive resources in naturalistic scenarios, it is essential not only to measure demands induced by the task itself but also to consider situational and environmental influences. We conducted a multimodal study with 18 participants (nine female, M = 25.9 years, SD = 3.8). In this study, we recorded respiratory, ocular, cardiac, and brain activity using functional near-infrared spectroscopy (fNIRS) while participants performed an adapted version of the warship commander task with concurrent emotional speech distraction. We tested the feasibility of decoding the experienced mental effort with a multimodal machine learning architecture. The architecture comprised feature engineering, model optimisation, and model selection to combine multimodal measurements in a cross-subject classification. Our approach reduces possible overfitting and reliably distinguishes two different levels of mental effort. These findings contribute to the prediction of different states of mental effort and pave the way toward generalised state monitoring across individuals in realistic applications.
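Cross-subject classification of this kind is commonly validated with participant-grouped folds, so the model is always tested on people it never saw in training. A generic scikit-learn sketch (features, labels, and the classifier are placeholders, not the study's optimised architecture):

```python
# Generic cross-subject (leave-participants-out) classification sketch.
import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(18 * 40, 12))      # 18 subjects x 40 trials, 12 features
y = rng.integers(0, 2, size=18 * 40)    # two levels of mental effort
groups = np.repeat(np.arange(18), 40)   # participant ID per trial

# Folds never mix trials of the same participant across train and test.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, groups=groups, cv=GroupKFold(n_splits=6))
print(scores.mean())
```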
Subject(s)
Cognitive Reserve, Female, Humans, Feasibility Studies, Male, Young Adult, Adult
ABSTRACT
Multimodal user interfaces promise natural and intuitive human-machine interactions. However, is the extra effort for the development of a complex multisensor system justified, or can users also be satisfied with only one input modality? This study investigates interactions in an industrial weld inspection workstation. Three unimodal interfaces, including spatial interaction with buttons augmented on a workpiece or a worktable, and speech commands, were tested individually and in a multimodal combination. Within the unimodal conditions, users preferred the augmented worktable; overall, however, the multimodal condition, in which the usage of the input technologies varied between individuals, was ranked best. Our findings indicate that the implementation and use of multiple input modalities are valuable and that it is difficult to predict the usability of individual input modalities for complex systems.
Subject(s)
Technology, User-Computer Interface, Humans, Speech
ABSTRACT
Among the billions of faces shaped by thousands of different cultures and ethnicities, one thing remains universal: the way emotions are expressed. To take the next step in human-machine interactions, a machine (e.g., a humanoid robot) must be able to classify facial emotions. Allowing systems to recognize micro-expressions gives the machine deeper insight into a person's true feelings, which it can take into account when making optimal decisions. For instance, such machines will be able to detect dangerous situations, alert caregivers to challenges, and provide appropriate responses. Micro-expressions are involuntary and transient facial expressions capable of revealing genuine emotions. We propose a new hybrid neural network (NN) model capable of micro-expression recognition in real-time applications. Several NN models are first compared in this study. Then, a hybrid NN model is created by combining a convolutional neural network (CNN), a recurrent neural network (RNN, e.g., long short-term memory (LSTM)), and a vision transformer. The CNN can extract spatial features (within a neighborhood of an image), whereas the LSTM can summarize temporal features. In addition, a transformer with an attention mechanism can capture sparse spatial relations residing in an image or between frames in a video clip. The inputs of the model are short facial videos, while the outputs are the micro-expressions recognized from the videos. The NN models are trained and tested with publicly available facial micro-expression datasets to recognize different micro-expressions (e.g., happiness, fear, anger, surprise, disgust, sadness). Score fusion and improvement metrics are also presented in our experiments. The results of our proposed models are compared with those of literature-reported methods tested on the same datasets. The proposed hybrid model performs best, and score fusion can dramatically increase recognition performance.
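A compact sketch of the CNN-plus-LSTM backbone of such a hybrid (PyTorch; frame size, channel widths, and the six output classes are illustrative assumptions, and the vision-transformer branch is omitted for brevity):

```python
# Illustrative CNN+LSTM video classifier: a small CNN encodes spatial features
# per frame, an LSTM summarizes them over time, and a linear head outputs
# micro-expression class scores.
import torch
import torch.nn as nn

class CnnLstm(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> 32-dim per frame
        )
        self.lstm = nn.LSTM(32, 64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, clips):                  # (batch, time, 1, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1))  # encode all frames at once
        out, _ = self.lstm(feats.view(b, t, -1))
        return self.head(out[:, -1])           # logits from last time step

clips = torch.randn(4, 16, 1, 64, 64)          # 4 clips of 16 grayscale frames
print(CnnLstm()(clips).shape)                  # -> torch.Size([4, 6])
```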
Subject(s)
Facial Recognition, Humans, Neural Networks (Computer), Long-Term Memory, Emotions, Fear, Facial Expression
ABSTRACT
The COVID-19 pandemic created the need for telerehabilitation development, while Industry 4.0 brought the key technology. As motor therapy often requires physical support of a patient's motion, combining robot-aided workouts with remote control is a promising solution. This may be realised through the device's digital twin, so as to enable immersive operation. This paper presents an extensive overview of this technology's applications within the fields of industry and health. It is followed by an in-depth analysis of rehabilitation needs based on questionnaire research and a literature review. As a result of these sections, an original concept of controlling a rehabilitation exoskeleton via its digital twin in virtual reality is presented. The idea is assessed in terms of benefits and significant challenges regarding its application in real life. The presented analysis shows that it could be used for manual remote kinesiotherapy, combined with safety systems that predict potentially harmful situations. The concept is universally applicable to rehabilitation robots.
Subject(s)
COVID-19, Exoskeleton Device, Robotics, Telerehabilitation, Humans, Pandemics
ABSTRACT
Designing human-machine interactive systems requires cooperation between different disciplines. In this work, we present a Dialogue Manager and a Language Generator, the core modules of a voice-based Spoken Dialogue System (SDS) capable of carrying out challenging, long, and complex coaching conversations. We also develop an efficient integration procedure for the whole system, which acts as an intelligent and robust Virtual Coach. The coaching task differs significantly from the classical applications of SDSs, resulting in a much higher degree of complexity and difficulty. The Virtual Coach has been successfully tested and validated in a user study with independently living elderly participants, in three different countries with three different languages and cultures: Spain, France, and Norway.
Subject(s)
Communication, Language, Humans, Aged, Man-Machine Systems, Motor Vehicles, France
ABSTRACT
Microsurgical techniques have been widely utilized in various surgical specialties, such as ophthalmology, neurosurgery, and otolaryngology, which require intricate and precise surgical tool manipulation on a small scale. In microsurgery, operations on delicate vessels or tissues demand a high level of skill from surgeons. This exceptionally high skill requirement leads to a steep learning curve and lengthy training before surgeons can perform microsurgical procedures with quality outcomes. The microsurgery robot (MSR), which can improve surgeons' operating skills through various functions, has received extensive research attention in the past three decades. There have been many review papers summarizing the research on MSR for specific surgical specialties; however, the literature lacks an in-depth review of the technologies used in MSR systems. This review details the technical challenges in microsurgery and systematically summarizes the key technologies in MSR from a developmental perspective: from basic structural mechanism design, to perception and human-machine interaction methods, and further to achieving a certain level of autonomy. By presenting and comparing the methods and technologies in this cutting-edge research, this paper aims to provide readers with a comprehensive understanding of the current state of MSR research and identify potential directions for future development.
Subject(s)
Neurosurgery, Robotics, Humans, Robotics/methods, Microsurgery/education, Neurosurgical Procedures, Clinical Competence
ABSTRACT
Adaptive human-computer systems require the recognition of human behavior states to provide real-time feedback to scaffold skill learning. These systems are being researched extensively for intervention and training in individuals with autism spectrum disorder (ASD). Autistic individuals are prone to social communication and behavioral differences that contribute to their high rate of unemployment. Teamwork training, which is beneficial for all people, can be a pivotal step in securing employment for these individuals. To broaden the reach of the training, virtual reality is a good option. However, adaptive virtual reality systems require real-time detection of behavior. Manual labeling of data is time-consuming and resource-intensive, making automated data annotation essential. In this paper, we propose a semi-supervised machine learning method to supplement manual data labeling of multimodal data in a collaborative virtual environment (CVE) used to train teamwork skills. With as little as 2.5% of the data manually labeled, the proposed semi-supervised learning model predicted labels for the remaining unlabeled data with an average accuracy of 81.3%, validating the use of semi-supervised learning to predict human behavior.
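The abstract does not specify the semi-supervised method; a generic self-training baseline in scikit-learn illustrates the idea of propagating a small labeled fraction (~2.5%) to the unlabeled remainder (synthetic placeholder data; -1 marks unlabeled samples):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y_true = make_classification(n_samples=2000, n_features=20, random_state=0)
rng = np.random.default_rng(0)
y = y_true.copy()
unlabeled = rng.random(y.size) > 0.025     # keep only ~2.5% of the labels
y[unlabeled] = -1                          # -1 marks unlabeled samples

# Iteratively pseudo-label high-confidence unlabeled points and refit.
model = SelfTrainingClassifier(SVC(probability=True), threshold=0.9)
model.fit(X, y)
acc = (model.predict(X[unlabeled]) == y_true[unlabeled]).mean()
print(f"accuracy on initially unlabeled data: {acc:.3f}")
```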
Subject(s)
Autism Spectrum Disorder, Autistic Disorder, Virtual Reality, Humans, Supervised Machine Learning, Communication
ABSTRACT
This article is devoted to the study of the correlation between a person's emotional state and his or her body posture in the sitting position. To carry out the study, we developed the first version of a hardware-software system based on a posturometric armchair, which allows the posture characteristics of a seated person to be evaluated using strain gauges. Using this system, we revealed a correlation between sensor readings and human emotional states. We showed that a characteristic group of sensor readings is formed for each emotional state of a person. We also found that the composition, number, and location of the groups of triggered sensors are specific to a particular person, which led to the need to build personalized digital pose models for each person. The intellectual component of our hardware-software complex is based on the concept of co-evolutionary hybrid intelligence. The system can be used during medical diagnostic procedures and rehabilitation, as well as for monitoring people whose professional activity involves increased psycho-emotional load, which can cause cognitive disorders, fatigue, and professional burnout and can lead to the development of diseases.
Subject(s)
Emotions, Posture, Humans, Male, Female, Sitting Position, Computers, Software
ABSTRACT
The article presents the novel idea of an Interaction Quality Sensor (IQS), introduced as part of the complete Hybrid INTelligence (HINT) architecture for intelligent control systems. The proposed system is designed to use and prioritize multiple information channels (speech, images, videos) in order to optimize the information flow efficiency of interaction in HMI systems. The proposed architecture is implemented and validated in a real-world application: training unskilled workers, i.e. new employees with lower competencies and/or a language barrier. With the help of the HINT system, the man-machine communication channels are deliberately chosen based on IQS readouts, enabling an untrained, inexperienced, foreign employee candidate to become a good worker without requiring the presence of either an interpreter or an expert during training. The proposed implementation is in line with labor market trends, which display significant fluctuations. The HINT system is designed to activate human resources and support organizations/enterprises in the effective assimilation of employees into the tasks performed on the production assembly line. The market need for solving this noticeable problem arose from the large migration of employees within (and between) enterprises. The research results presented in this work show significant benefits of the methods used, while supporting multilingualism and optimizing the preselection of information channels.