Results 1 - 20 of 2,944
1.
Sensors (Basel) ; 24(11)2024 May 22.
Article in English | MEDLINE | ID: mdl-38894102

ABSTRACT

This study develops a comprehensive robotic system, termed the robot cognitive system, for complex environments, integrating three models: the engagement model, the intention model, and the human-robot interaction (HRI) model. The system aims to enhance the naturalness and comfort of HRI by enabling robots to detect human behaviors, intentions, and emotions accurately. A novel dual-arm-hand mobile robot, Mobi, was designed to demonstrate the system's efficacy. The engagement model utilizes eye gaze, head pose, and action recognition to determine the suitable moment for interaction initiation, addressing potential eye contact anxiety. The intention model employs sentiment analysis and emotion classification to infer the interactor's intentions. The HRI model, integrated with Google Dialogflow, facilitates appropriate robot responses based on user feedback. The system's performance was validated in a retail environment scenario, demonstrating its potential to improve the user experience in HRIs.
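The abstract does not detail how the engagement model fuses its cues; the minimal sketch below, an assumption rather than the paper's method, illustrates one plausible scheme in which eye-gaze, head-pose, and action-recognition confidences are combined into a weighted score and interaction is initiated only above a threshold. All function names, weights, and the threshold are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class EngagementCues:
    gaze_on_robot: float       # 0..1 confidence that the person is looking at the robot
    head_toward_robot: float   # 0..1 confidence that the head is oriented toward the robot
    approach_action: float     # 0..1 confidence that the recognized action is an approach

def engagement_score(cues: EngagementCues, weights=(0.4, 0.3, 0.3)) -> float:
    """Weighted fusion of the three cues into a single engagement score."""
    w_gaze, w_head, w_action = weights
    return (w_gaze * cues.gaze_on_robot
            + w_head * cues.head_toward_robot
            + w_action * cues.approach_action)

def should_initiate_interaction(cues: EngagementCues, threshold: float = 0.6) -> bool:
    """Initiate interaction only when the fused score clears the threshold."""
    return engagement_score(cues) >= threshold

if __name__ == "__main__":
    cues = EngagementCues(gaze_on_robot=0.9, head_toward_robot=0.7, approach_action=0.5)
    print(should_initiate_interaction(cues))  # True under the assumed weights and threshold
```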


Subject(s)
Robotics, Humans, Robotics/methods, Emotions/physiology, User-Computer Interface, Man-Machine Systems
2.
Sci Rep ; 14(1): 13579, 2024 06 12.
Article in English | MEDLINE | ID: mdl-38866827

ABSTRACT

A concept for an innovative human-machine interface and interaction modes based on virtual and augmented reality technologies for airport control towers has been developed with the aim of increasing the human performance and situational awareness of air traffic control operators. By presenting digital information through see-through head-mounted displays superimposed over the out-of-the-tower view, the proposed interface should encourage controllers to operate in a head-up position and therefore reduce the number of switches between head-up and head-down positions, even in low-visibility conditions. This paper introduces the developed interface and describes the exercises conducted to validate the technical solutions, focusing on the simulation platform and the technologies employed, to demonstrate how virtual and augmented reality, along with additional features such as an adaptive human-machine interface, multimodal interaction, and attention guidance, enable a more natural and effective interaction in the control tower. The results of the human-in-the-loop real-time validation exercises show that the prototype concept is feasible from both an operational and a technical perspective: the solution supports air traffic controllers in working head-up rather than head-down, even in low-visibility operational scenarios, and lowers the time to react in critical or alerting situations, with a positive impact on the human performance of the user. While showcasing promising results, this study also identifies certain limitations and opportunities for refinement aimed at further optimising the efficacy and usability of the proposed interface.


Subject(s)
Airports, Augmented Reality, Man-Machine Systems, User-Computer Interface, Humans, Virtual Reality, Aviation
3.
ACS Appl Mater Interfaces ; 16(25): 32784-32793, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38862273

ABSTRACT

The key feature that enables soft sensors to shorten the performance gap between robots and biological structures is their deformability, coupled with their capability to measure physical changes. Robots equipped with these sensors can interact safely and proprioceptively with their environments. This has sparked interest in developing novel sensors with high stretchability for application in human-robot interactions. This study presents a novel ultrasoft optoelectronic segmented sensor design capable of measuring strains exceeding 500%. The sensor features an ultrastretchable segment physically joined with an asymmetrically configured soft proprioceptive segment. This configuration enables it to measure high strain and to detect both the magnitude and direction of bending. Although the sensor cannot decouple these types of deformations, it can sense prescribed motions that combine stretching and bending. The proposed sensor was applied to a highly deformable scissor mechanism and a human-robot interface (HRI) device for a robotic arm, capable of quantifying parameters in complex interactions. The results from the experiments also demonstrate the potential of the proposed segmented sensor concept when used in tandem with machine learning, affording new dimensions of proprioception to robots during multilayered interactions with humans.


Subject(s)
Robotics, Humans, Robotics/instrumentation, Man-Machine Systems, Equipment Design, Machine Learning
4.
Accid Anal Prev ; 205: 107687, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38943983

ABSTRACT

Autonomous driving technology has the potential to significantly reduce the number of traffic accidents. However, before achieving full automation, drivers still need to take control of the vehicle in complex and diverse scenarios that the autonomous driving system cannot handle. Therefore, appropriate takeover request (TOR) designs are necessary to enhance takeover performance and driving safety. This study focuses on takeover tasks in hazard scenarios with varied hazard visibility, which can be categorized as overt hazards and covert hazards. Through ergonomic experiments, the impact of TOR interface visual information, including takeover warning, hazard direction, and time to collision, on takeover performance is investigated, and specific analyses are conducted using eye-tracking data. The following conclusions are drawn from the experiments: (1) The visibility of hazards significantly affects takeover performance. (2) Providing more TOR visual information in hazards with different visibility has varying effects on drivers' visual attention allocation but can improve takeover performance. (3) More TOR visual information helps reduce takeover workload and increase human-machine trust. Based on these findings, this paper proposes the following TOR visual interface design strategies: (1) In overt hazard scenarios, only takeover warning is necessary, as additional visual information may distract drivers' attention. (2) In covert hazard scenarios, the TOR visual interface should better assist drivers in understanding the current hazard situation by providing information on hazard direction and time to collision to enhance takeover performance.


Subject(s)
Traffic Accidents, Attention, Automation, Automobile Driving, Humans, Male, Traffic Accidents/prevention & control, Adult, Female, Young Adult, Eye-Tracking Technology, Safety, Ergonomics, Man-Machine Systems, Eye Movements, Visual Perception, User-Computer Interface, Trust
5.
ISA Trans ; 150: 262-277, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38749885

ABSTRACT

Teleoperation under human guidance has become an effective solution for extending human reach into various environments. However, teleoperation systems still suffer from insufficient visual and haptic feedback from remote environments, which results in inadequate guidance for the operator. In this paper, a visual/haptic integrated perception and reconstruction system (VHI-PRS) is developed to provide the operator with 3D visual information and effective haptic guidance. Specifically, a visual telepresence augmentation method is proposed that combines virtual and real views: the real point cloud model is superimposed directly on the virtual manipulator, avoiding the time-consuming process of mesh model rendering. Building on this visual information, a haptic telepresence augmentation method provides the operator with comprehensive force feedback, including a virtual guiding force, a virtual repulsive force, and the real-time interaction force, which greatly reduces the operator's workload. Finally, a user study on a grab-and-place task is carried out to verify the effectiveness of the proposed system.
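The paper's exact force-rendering laws are not given in the abstract; the sketch below is only a generic potential-field illustration of the three force components it names: an attractive guiding force toward a target, a repulsive force near obstacles, and the measured interaction force passed through to the operator. The gains, influence radius, and all names are assumptions.

```python
import numpy as np

def guiding_force(tool_pos, goal_pos, k_att=1.0):
    """Virtual guiding (attractive) force pulling the tool toward the goal."""
    return k_att * (np.asarray(goal_pos, float) - np.asarray(tool_pos, float))

def repulsive_force(tool_pos, obstacle_pos, k_rep=0.5, influence=0.2):
    """Virtual repulsive force pushing the tool away from a nearby obstacle."""
    diff = np.asarray(tool_pos, float) - np.asarray(obstacle_pos, float)
    d = np.linalg.norm(diff)
    if d >= influence or d == 0.0:
        return np.zeros_like(diff)
    # Classic potential-field form: grows as the tool approaches the obstacle.
    return k_rep * (1.0 / d - 1.0 / influence) / d**2 * (diff / d)

def haptic_feedback(tool_pos, goal_pos, obstacles, interaction_force):
    """Sum of guiding, repulsive, and measured interaction forces rendered to the operator."""
    f = guiding_force(tool_pos, goal_pos) + np.asarray(interaction_force, float)
    for obs in obstacles:
        f += repulsive_force(tool_pos, obs)
    return f
```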


Subject(s)
User-Computer Interface, Virtual Reality, Humans, Algorithms, Industries, Touch, Male, Adult, Computer Simulation, Robotics, Man-Machine Systems, Feedback, Young Adult
6.
Ergonomics ; 67(6): 866-880, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38770836

ABSTRACT

Using a mixed-design experiment with simplified accident-handling tasks performed by two-person teams, this study examined the effects of automation function and condition (before, during, and after malfunction) on human performance. Five distinct, non-overlapping functions related to the human information processing model were considered, and their malfunctions were introduced as first failures. The results showed that while automation malfunction impaired task performance, the performance degradation for information analysis was more severe than for response planning. In contrast to the other functions, situation awareness for response planning and response implementation tended to increase during the malfunction and decrease afterwards. In addition, decreased task performance reduced trust in automation, and malfunctions at earlier stages of information processing resulted in lower trust. The suggestions provided for automation-related design and training emphasise the importance of high-level cognitive support and the benefit of including automation error handling in training.


The effects of automation function and malfunction on human performance are important for design and training. The experimental results in this study revealed the significance of high-level cognitive support. In addition, introducing automation error handling in training can help improve the situation awareness of teams.


Subject(s)
Automation, Task Performance and Analysis, Humans, Male, Female, Adult, Young Adult, Man-Machine Systems, Trust, Awareness
7.
Sci Robot ; 9(90): eadk5183, 2024 May 29.
Article in English | MEDLINE | ID: mdl-38809995

ABSTRACT

The advancement of motor augmentation and the broader domain of human-machine interaction rely on a seamless integration with users' physical and cognitive capabilities. These considerations may vary markedly among individuals on the basis of their age, form, and abilities. There is a need to develop a standard for considering these diversity needs and preferences to guide technological development, and large-scale testing can provide us with evidence for such considerations. Public engagement events provide an important opportunity to build a bidirectional discourse with potential users for the codevelopment of inclusive and accessible technologies. We exhibited the Third Thumb, a hand augmentation device, at a public engagement event and tested participants from the general public, who are often not involved in such early development of wearable robotic technology. We focused on wearability (fit and control), ability to successfully operate the device, and ability levels across diversity factors relevant for physical technologies (gender, handedness, and age). Our inclusive design was successful in 99.3% of our diverse sample of 596 individuals tested (age range from 3 to 96 years). Ninety-eight percent of participants were further able to successfully manipulate objects using the extra thumb during the first minute of use, with no significant influences of gender, handedness, or affinity for hobbies involving the hands. Performance was generally poorer among younger children (aged ≤11 years). Although older and younger adults performed the task comparably, we identified age-related costs among the older adults. Our findings offer a tangible demonstration of the initial usability of the Third Thumb for a broad demographic.


Subject(s)
Hand, Robotics, Humans, Female, Male, Adult, Aged, Adolescent, Middle Aged, Young Adult, Child, Hand/physiology, Aged 80 and over, Preschool Child, Robotics/instrumentation, Equipment Design, Man-Machine Systems, Wearable Electronic Devices, Thumb
8.
Accid Anal Prev ; 203: 107621, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38729056

ABSTRACT

Emerging connected vehicle (CV) technologies facilitate the development of integrated advanced driver assistance systems (ADASs), in which various functions are coordinated within a comprehensive framework. However, challenges arise in enabling drivers to perceive important information with minimal distraction when multiple messages are provided simultaneously by integrated ADASs. To this end, this study introduces three types of human-machine interfaces (HMIs) for an integrated ADAS: 1) three messages using a visual display only, 2) four messages using a visual display only, and 3) three messages using visual plus auditory displays. The differences in driving performance across the three HMI types are examined to investigate the impacts of information quantity and display format on driving behaviors, and variations in drivers' responses to the three HMI types are also examined. The driving behaviors of 51 drivers with respect to the three HMI types are investigated in eight field-testing scenarios. These scenarios include warnings for rear-end collision, lateral collision, forward collision, lane changes, and curve speed, as well as notifications for emergency events downstream, the specified speed limit, and car-following behaviors. Results indicate that, compared to a visual display only, presenting three messages through visual and auditory displays enhances driving performance in four typical scenarios. Compared to the presentation of three messages, a visual display offering four messages improves driving performance in rear-end collision warning scenarios but diminishes performance in lane-change scenarios. Additionally, the relationship between the information quantity and display format shown on HMIs and driving performance can be moderated by drivers' gender, occupation, driving experience, annual driving distance, and safety attitudes. The findings can guide designers in the automotive industry in developing HMIs for future CVs.


Subject(s)
Traffic Accidents, Automobile Driving, Humans, Automobile Driving/psychology, Male, Female, Adult, Traffic Accidents/prevention & control, Young Adult, User-Computer Interface, Man-Machine Systems, Automobiles, Middle Aged, Data Display
9.
Accid Anal Prev ; 203: 107606, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38733810

ABSTRACT

The effectiveness of the human-machine interface (HMI) in a driving automation system during takeover situations depends, in part, on its design. Past research has indicated that the modality, specificity, and timing of the HMI have an impact on driver behavior. The objective of this study was to examine the effectiveness of two HMIs, which varied in modality, specificity, and timing, on drivers' takeover time, performance, and eye glance behavior. Drivers' behavior was examined in a driving simulator study with different levels of automation and varying traffic conditions, while completing a non-driving related task. Results indicated that HMI type had a statistically significant effect on velocity and off-road eye glances, such that those who were exposed to an HMI that gave multimodal warnings with greater specificity exhibited better performance. There were no effects of HMI on acceleration, lane position, or other eye glance metrics (e.g., on-road glance duration). Future work should further disentangle HMI design to determine exactly which aspects of design influence safety-critical behavior.


Subject(s)
Automation, Automobile Driving, Man-Machine Systems, User-Computer Interface, Humans, Automobile Driving/psychology, Male, Adult, Female, Young Adult, Computer Simulation, Automobiles, Eye Movements, Time Factors, Adolescent, Task Performance and Analysis
10.
Sci Rep ; 14(1): 12410, 2024 05 30.
Article in English | MEDLINE | ID: mdl-38811749

ABSTRACT

As robots become increasingly integrated into social economic interactions, it becomes crucial to understand how people perceive a robot's mind. It has been argued that minds are perceived along two dimensions: experience, i.e., the ability to feel, and agency, i.e., the ability to act and take responsibility for one's actions. However, the influence of these perceived dimensions on human-machine interactions, particularly those involving altruism and trust, remains unknown. We hypothesize that the perception of experience influences altruism, while the perception of agency influences trust. To test these hypotheses, we pair participants with bot partners in a dictator game (to measure altruism) and a trust game (to measure trust) while varying the bots' perceived experience and agency, either by manipulating the degree to which the bot resembles humans, or by manipulating the description of the bots' ability to feel and exercise self-control. The results demonstrate that the money transferred in the dictator game is influenced by the perceived experience, while the money transferred in the trust game is influenced by the perceived agency, thereby confirming our hypotheses. More broadly, our findings support the specificity of the mind hypothesis: Perceptions of different dimensions of the mind lead to different kinds of social behavior.


Subject(s)
Altruism, Perception, Trust, Humans, Trust/psychology, Male, Female, Adult, Young Adult, Robotics, Experimental Games, Man-Machine Systems
11.
Appl Ergon ; 119: 104306, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38714102

ABSTRACT

Industry 5.0 promotes collaborative robots (cobots). This research studies the impacts of cobot collaboration using an experimental setup. 120 participants performed a simple and a complex assembly task; 50% collaborated with another human (H/H) and 50% with a cobot (H/C). The workload and the acceptability of the cobotic collaboration were measured. Working with a cobot decreases the effect of task complexity on the human workload and on the output quality. However, it increases the completion time and the number of gestures (while decreasing their frequency). The H/C pairs have a higher chance of success, but they take more time and more gestures to complete the task. The results of this research could help developers and stakeholders understand the impacts of introducing a cobot into production chains.


Subject(s)
Cooperative Behavior, Gestures, Robotics, Task Performance and Analysis, Workload, Humans, Workload/psychology, Male, Female, Adult, Young Adult, Man-Machine Systems, Time Factors
12.
Sensors (Basel) ; 24(9)2024 Apr 28.
Article in English | MEDLINE | ID: mdl-38732923

ABSTRACT

The transition to Industry 4.0 and 5.0 underscores the need to integrate humans into manufacturing processes, shifting the focus towards customization and personalization rather than traditional mass production. However, human performance during task execution may vary. To ensure high human-robot teaming (HRT) performance, it is crucial to predict performance without negatively affecting task execution. Performance can be predicted indirectly from significant factors that affect it, such as engagement and task load (i.e., the amount of cognitive, physical, and/or sensory resources required to perform a particular task). Hence, we propose a framework to predict and maximize HRT performance. For the prediction of task performance during the development phase, our methodology employs features extracted from physiological data as inputs. The labels for these predictions, categorized as accurate performance or inaccurate performance due to high/low task load, are constructed using a combination of the NASA TLX questionnaire, records of human performance in quality control tasks, and Q-learning applied to derive task-specific weights for the task load indices. This structured approach enables the deployed model to rely exclusively on physiological data for predicting performance, achieving an accuracy of 95.45% in forecasting HRT performance. To maintain optimized HRT performance, the study further introduces a method for dynamically adjusting the robot's speed when performance is low. This strategic adjustment is designed to balance the task load and thereby enhance the efficiency of human-robot collaboration.
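The abstract does not give the exact labeling rule or control law; the hypothetical sketch below only illustrates the general idea: weight the NASA-TLX indices with task-specific weights (which the paper derives via Q-learning), label a trial by combining the weighted load with recorded task accuracy, and slow the robot down when predicted performance is low. The weights, thresholds, speed limits, and names are assumptions.

```python
import numpy as np

# Six NASA-TLX indices (0-100): mental, physical, temporal demand, performance, effort, frustration.
def weighted_task_load(tlx_scores, weights):
    """Task-specific weighted TLX score; the paper derives such weights via Q-learning."""
    w = np.asarray(weights, dtype=float)
    return float(np.dot(np.asarray(tlx_scores, dtype=float), w / w.sum()))

def label_trial(tlx_scores, weights, correct, high_load=50.0):
    """Label a trial as accurate, or inaccurate due to high or low task load (illustrative rule)."""
    load = weighted_task_load(tlx_scores, weights)
    if correct:
        return "accurate"
    return "inaccurate_high_load" if load >= high_load else "inaccurate_low_load"

def adjust_robot_speed(current_speed, predicted_label, step=0.1, v_min=0.2, v_max=1.0):
    """Slow the cobot when predicted performance is poor; otherwise ramp back toward full speed."""
    if predicted_label.startswith("inaccurate"):
        return max(v_min, current_speed - step)
    return min(v_max, current_speed + step)
```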


Subject(s)
Robotics, Task Performance and Analysis, Humans, Robotics/methods, Female, Male, Data Analysis, Man-Machine Systems, Adult, Workload
13.
Article in English | MEDLINE | ID: mdl-38739518

ABSTRACT

The employment of surface electromyographic (sEMG) signals in the estimation of hand kinematics represents a promising non-invasive methodology for the advancement of human-machine interfaces. However, the limitations of existing subject-specific methods are obvious as they confine the application to individual models that are custom-tailored for specific subjects, thereby reducing the potential for broader applicability. In addition, current cross-subject methods are challenged in their ability to simultaneously cater to the needs of both new and existing users effectively. To overcome these challenges, we propose the Cross-Subject Lifelong Network (CSLN). CSLN incorporates a novel lifelong learning approach, maintaining the patterns of sEMG signals across a varied user population and across different temporal scales. Our method enhances the generalization of acquired patterns, making it applicable to various individuals and temporal contexts. Our experimental investigations, encompassing both joint and sequential training approaches, demonstrate that the CSLN model not only attains enhanced performance in cross-subject scenarios but also effectively addresses the issue of catastrophic forgetting, thereby augmenting training efficacy.
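The CSLN architecture is not described in the abstract; the sketch below shows only a generic rehearsal-style lifelong-learning loop of the kind such a cross-subject model might use, training on each new subject while replaying a small buffer of stored examples to limit catastrophic forgetting. The buffer policy, replay size, and all names are assumptions.

```python
import random

class ReplayBuffer:
    """Small episodic memory of (emg_window, kinematics) examples from earlier subjects."""
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.data = []

    def add(self, example):
        if len(self.data) >= self.capacity:
            self.data.pop(random.randrange(len(self.data)))  # evict a random old example
        self.data.append(example)

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

def lifelong_train(model, subjects, train_step, replay_size=32):
    """Fit each new subject in sequence while rehearsing examples stored from earlier ones."""
    buffer = ReplayBuffer()
    for subject_batches in subjects:           # each subject: an iterable of training batches
        for batch in subject_batches:          # batch: a list of (emg_window, kinematics) pairs
            rehearsal = buffer.sample(replay_size)
            train_step(model, list(batch) + rehearsal)   # user-supplied gradient update
            for example in batch:
                buffer.add(example)            # keep examples for future rehearsal
    return model
```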


Subject(s)
Algorithms, Electromyography, Hand, Humans, Electromyography/methods, Hand/physiology, Biomechanical Phenomena, Male, Adult, Learning/physiology, Female, Man-Machine Systems, Machine Learning, Young Adult, Neural Networks (Computer), Skeletal Muscle/physiology
14.
Nat Commun ; 15(1): 3588, 2024 Apr 27.
Article in English | MEDLINE | ID: mdl-38678013

ABSTRACT

Eye tracking techniques enable highly efficient, natural, and effortless human-machine interaction by detecting users' eye movements and decoding their attention and intentions. Here, a miniature, imperceptible, and biocompatible smart contact lens is proposed for in situ eye tracking and wireless eye-machine interaction. Employing a frequency-encoding strategy, the chip-free and battery-free lens succeeds in detecting eye movement and closure. Using a time-sequential eye tracking algorithm, the lens achieves an angular accuracy of <0.5°, which is smaller than the visual span of the central fovea. Multiple eye-machine interaction applications, such as eye-drawing, a Gluttonous Snake game, web interaction, pan-tilt-zoom camera control, and robot vehicle control, are demonstrated on an eye movement model and in vivo in a rabbit. Furthermore, comprehensive biocompatibility tests demonstrate low cytotoxicity and low eye irritation. Thus, the contact lens is expected to enrich the toolbox of eye tracking techniques and promote the development of human-machine interaction technology.


Subject(s)
Algorithms, Contact Lenses, Eye Movements, Eye-Tracking Technology, Eye Movements/physiology, Animals, Humans, Rabbits, Man-Machine Systems
15.
Appl Ergon ; 118: 104288, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38636348

ABSTRACT

Humans working in modern work systems are increasingly required to supervise task automation. We examined whether manual aircraft conflict detection skill predicted participants' ability to respond to conflict detection automation failures in simulated air traffic control. In a conflict discrimination task (to assess manual skill), participants determined whether pairs of aircraft were in conflict by judging their relative arrival times at common intersection points. Then, in a simulated air traffic control task, participants supervised automation which either partially or fully detected and resolved conflicts on their behalf. Automation supervision required participants to detect when the automation may have failed and to intervene effectively. When the automation failed, participants with better manual conflict detection skill were faster and more accurate in intervening. However, a substantial proportion of variance in failure intervention was not explained by manual conflict detection skill, suggesting that future research should consider other cognitive skills underlying automation supervision.
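As a simple illustration of the relative-arrival-time judgement underlying the discrimination task, the sketch below computes each aircraft's time to a shared intersection point from its position and ground speed and flags a conflict when the arrival times differ by less than a separation window. The coordinate units, speeds, and the two-minute window are assumptions, not values from the study.

```python
import math

def time_to_point(position, speed_kts, point):
    """Minutes for an aircraft at `position` to reach `point`, given its ground speed in knots."""
    dist_nm = math.dist(position, point)   # positions expressed in nautical-mile coordinates
    return 60.0 * dist_nm / speed_kts

def in_conflict(ac1_pos, ac1_speed, ac2_pos, ac2_speed, intersection, window_min=2.0):
    """Flag a conflict when the two relative arrival times differ by less than the window."""
    t1 = time_to_point(ac1_pos, ac1_speed, intersection)
    t2 = time_to_point(ac2_pos, ac2_speed, intersection)
    return abs(t1 - t2) < window_min

if __name__ == "__main__":
    # Two aircraft converging on the same waypoint at (0, 0); prints True for these values.
    print(in_conflict((0.0, 10.0), 300, (8.0, 0.0), 280, intersection=(0.0, 0.0)))
```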


Subject(s)
Automation, Aviation, Man-Machine Systems, Task Performance and Analysis, Humans, Male, Female, Adult, Young Adult, Aircraft, Personnel Selection/methods
16.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 41(2): 295-303, 2024 Apr 25.
Article in Chinese | MEDLINE | ID: mdl-38686410

ABSTRACT

To address the muscle and joint damage caused by surgeons maintaining surgical postures for long periods, this paper designs a medical multi-position auxiliary support exoskeleton with a multi-joint mechanism, based on an analysis of surgical postures and configuration studies of the individual joints. A human-machine static model is then established to obtain the joint torques and joint forces before and after the human body wears the exoskeleton, and the strength of the exoskeleton is verified with finite element analysis software. The results show that the maximum stress in the exoskeleton remains below the material strength limits, the overall deformation is small, and the structural strength of the exoskeleton meets the usage requirements. Finally, subjects were recruited to participate in plantar pressure tests and biomechanical simulations with the human-machine static model, and the results were analyzed in terms of plantar pressure, joint torque and joint force, muscle force, and overall muscle metabolism to assess the exoskeleton's support performance. The results show that the exoskeleton provides good whole-body support and can reduce the musculoskeletal burden. The exoskeleton mechanism in this study matches the actual working needs of surgeons well and provides a new paradigm for the design of medical support exoskeletons.
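The paper's human-machine static model is not reproduced in the abstract; the sketch below only illustrates the type of static joint-torque comparison it describes: the gravitational moment of the supported body segments about a joint, with and without an external support moment from the exoskeleton. The segment masses, moment arms, and support force are assumed values.

```python
G = 9.81  # gravitational acceleration, m/s^2

def gravitational_moment(segments):
    """Static moment (N*m) about a joint from segment weights acting at their moment arms."""
    # segments: iterable of (mass_kg, horizontal_moment_arm_m) pairs about the joint of interest
    return sum(m * G * r for m, r in segments)

def joint_torque_with_support(segments, support_force_n=0.0, support_arm_m=0.0):
    """Muscle torque still required after subtracting the moment supplied by the exoskeleton."""
    return gravitational_moment(segments) - support_force_n * support_arm_m

if __name__ == "__main__":
    trunk_and_arms = [(32.0, 0.18), (4.0, 0.30), (4.0, 0.30)]  # assumed masses and moment arms
    print(joint_torque_with_support(trunk_and_arms))            # without exoskeleton support
    print(joint_torque_with_support(trunk_and_arms, support_force_n=150.0, support_arm_m=0.25))
```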


Subject(s)
Equipment Design, Exoskeleton Device, Posture, Humans, Biomechanical Phenomena, Finite Element Analysis, Torque, Skeletal Muscle/physiology, Joints/physiology, Man-Machine Systems
18.
Accid Anal Prev ; 202: 107567, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38669901

ABSTRACT

How autonomous vehicles (AVs) communicate their intentions to vulnerable road users (e.g., pedestrians) is a concern given the rapid growth and adoption of this technology. At present, little is known about how children respond to external Human Machine Interface (eHMI) signals from AVs. The current study examined how adults and children respond to the combination of explicit (eHMI signals) and implicit information (vehicle deceleration) to guide their road-crossing decisions. Children (8- to 12-year-olds) and adults made decisions about when to cross in front of a driverless car in an immersive virtual environment. The car sometimes stopped, either abruptly or gradually (manipulated within subjects), to allow participants to cross. When yielding, the car communicated its intent via a dome light that changed from red to green and varied in its timing onset (manipulated between subjects): early eHMI onset, late eHMI onset, or control (no eHMI). As expected, we found that both children and adults waited longer to enter the roadway when vehicles decelerated abruptly than gradually. However, adults responded to the early eHMI signal by crossing sooner when the cars decelerated either gradually or abruptly compared to the control condition. Children were heavily influenced by the late eHMI signal, crossing later when the eHMI signal appeared late and the vehicle decelerated either gradually or abruptly compared to the control condition. Unlike adults, children in the control condition behaved similarly to children in the early eHMI condition by crossing before the yielding vehicle came to a stop. Together, these findings suggest that early eHMI onset may lead to riskier behavior (initiating crossing well before a gradually decelerating vehicle comes to a stop), whereas late eHMI onset may lead to safer behavior (waiting for the eHMI signal to appear before initiating crossing). Without an eHMI signal, children show a concerning overreliance on gradual vehicle deceleration to judge yielding intent.


Subject(s)
Automobiles, Decision Making, Pedestrians, Humans, Child, Male, Pedestrians/psychology, Female, Adult, Biomechanical Phenomena, Deceleration, Young Adult, Automobile Driving/psychology, Traffic Accidents/prevention & control, Time Factors, Virtual Reality, Man-Machine Systems
19.
IEEE J Biomed Health Inform ; 28(5): 2723-2732, 2024 May.
Article in English | MEDLINE | ID: mdl-38442056

ABSTRACT

Myoelectric prostheses are generally unable to accurately control the position and force simultaneously, prohibiting natural and intuitive human-machine interaction. This issue is attributed to the limitations of myoelectric interfaces in effectively decoding multi-degree-of-freedom (multi-DoF) kinematic and kinetic information. We thus propose a novel multi-task, spatial-temporal model driven by graphical high-density electromyography (HD-EMG) for simultaneous and proportional control of wrist angle and grasp force. Twelve subjects were recruited to perform three multi-DoF movements, including wrist pronation/supination, wrist flexion/extension, and wrist abduction/adduction while varying grasp force. Experimental results demonstrated that the proposed model outperformed five baseline models, with the normalized root mean square error of 13.2% and 9.7% and the correlation coefficient of 89.6% and 91.9% for wrist angle and grasp force estimation, respectively. In addition, the proposed model still maintained comparable accuracy even with a significant reduction in the number of HD-EMG electrodes. To the best of our knowledge, this is the first study to achieve simultaneous and proportional wrist angle and grasp force control via HD-EMG and has the potential to empower prostheses users to perform a broader range of tasks with greater precision and control, ultimately enhancing their independence and quality of life.
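The two figures of merit quoted in the abstract can be computed with the standard definitions sketched below; normalizing the RMSE by the range of the measured signal is one common convention, and the paper may normalize differently. The simulated signals in the usage example are purely illustrative.

```python
import numpy as np

def nrmse(y_true, y_pred):
    """Root-mean-square error normalized by the range of the measured signal, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100.0 * rmse / (y_true.max() - y_true.min())

def correlation(y_true, y_pred):
    """Pearson correlation coefficient between estimated and measured trajectories, in percent."""
    return 100.0 * np.corrcoef(y_true, y_pred)[0, 1]

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 500)
    angle_true = 30 * np.sin(t)                                # simulated wrist angle (degrees)
    angle_pred = angle_true + np.random.normal(0, 2, t.size)   # noisy estimate
    print(nrmse(angle_true, angle_pred), correlation(angle_true, angle_pred))
```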


Subject(s)
Computer Graphics, Electrodes, Electromyography, Hand Strength, Neural Networks (Computer), Prostheses and Implants, Wrist, Adult, Humans, Young Adult, Biomechanical Phenomena/physiology, Correlation of Data, Data Visualization, Electromyography/instrumentation, Electromyography/methods, Hand Strength/physiology, Man-Machine Systems, Wrist/physiology, Deep Learning, Data Analysis, Movement
20.
IISE Trans Occup Ergon Hum Factors ; 12(1-2): 123-134, 2024.
Article in English | MEDLINE | ID: mdl-38498062

ABSTRACT

OCCUPATIONAL APPLICATIONS: "Overassistive" robots can adversely impact long-term human-robot collaboration in the workplace, leading to risks of worker complacency, reduced workforce skill sets, and diminished situational awareness. Ergonomics practitioners should thus be cautious about solely targeting widely adopted metrics for improving human-robot collaboration, such as user trust and comfort. By contrast, introducing variability and adaptation into a collaborative robot's behavior could prove vital in preventing the negative consequences of overreliance and overtrust in an autonomous partner. The work reported here explored how instilling variability into physical human-robot collaboration can have a measurably positive effect on ergonomics in a repetitive task. A review of principles related to this notion of "stimulating" robot behavior is also provided to further inform ergonomics practitioners of existing human-robot collaboration frameworks.


Background: Collaborative robots, or cobots, are becoming ubiquitous in occupational settings due to benefits that include improved worker safety and increased productivity. Existing research on human-robot collaboration in industry has made progress in enhancing workers' psychophysical states by optimizing measures of ergonomics risk factors, such as human posture, comfort, and cognitive workload. However, short-term objectives for robotic assistance may conflict with the worker's long-term preferences, needs, and overall wellbeing.
Purpose: To investigate the ergonomic advantages and disadvantages of employing a collaborative robotics framework that intentionally imposes variability in the robot's behavior to stimulate the human partner's psychophysical state.
Methods: A review of "overassistance" within human-robot collaboration and methods of addressing this phenomenon via adaptive automation. In adaptive approaches, the robot assistance may even challenge the user to better achieve a long-term objective while partially conflicting with their short-term task goals. Common themes across these approaches were extracted to motivate and support the proposed idea of stimulating robot behavior in physical human-robot collaboration.
Results: Experimental evidence to justify stimulating robot behavior is presented through a human-robot handover study. A robot handover policy that regularly injects variability into the object transfer location led to significantly larger dynamics in the torso rotations and center of mass of human receivers compared to an "overassistive" policy that constrains receiver motion. Crucially, the stimulating handover policy also generated improvements in widely used ergonomics risk indicators of human posture.
Conclusions: Our findings underscore the potential ergonomic benefits of a cobot's actions imposing variability in a user's responsive behavior, rather than indirectly restricting human behavior by optimizing the immediate task objective. Therefore, a transition from cobot policies that optimize instantaneous measures of ergonomics to those that continuously engage users could hold promise for human-robot collaboration in occupational settings characterized by repeated interactions.


Subject(s)
Ergonomics, Robotics, Humans, Robotics/methods, Ergonomics/methods, Man-Machine Systems, Cooperative Behavior, Motion (Physics)