ABSTRACT
Cognitive processes deal with contradictory demands in social contexts. On the one hand, social interactions imply a demand for cooperation, which requires processing social signals; on the other, selective attention demands ignoring irrelevant signals to avoid overload. We created a task in which a humanoid robot displayed irrelevant social signals, imposing conflicting demands on selective attention. Participants interacted with the robot either as a teammate (high social demand; n = 23) or alongside it as a passive co-actor (low social demand; n = 19). We observed that theta oscillations indexed conflict processing of social signals. Subsequently, alpha oscillations were sensitive to both the conflicting social signals and the mode of interaction. These findings suggest that brains have distinct mechanisms for dealing with the complexity of social interaction and that these mechanisms are engaged differently depending on the mode of interaction. Thus, how we process environmental stimuli depends on the beliefs we hold about our social context.
Subjects
Attention, Psychological Conflict, Cooperative Behavior, Social Interaction, Humans, Attention/physiology, Male, Female, Young Adult, Adult, Theta Rhythm/physiology, Alpha Rhythm/physiology, Electroencephalography, Social Perception, Interpersonal Relations, Robotics
ABSTRACT
This study examined children's beliefs about a humanoid robot through their behavioral and verbal responses. We investigated whether 3- and 5-year-old children would treat a humanoid robot gently, alongside other objects and tools with and without faces, and whether they would attribute moral, perceptual, and psychological properties to these targets. Although 3-year-olds did not treat the targets either gently or rudely, they were likely to affirm that hitting the targets was acceptable even while granting them psychological and perceptual properties. Thus, 3-year-olds' perception of the targets was incongruent with their behavior toward them. Most 5-year-olds treated the robot gently and were likely to affirm the robot's psychological characteristics. Behaviors and perceptions of the robot therefore differed between 3- and 5-year-olds. Children may thus start believing that robots are not alive around age five, at which point they can distinguish robots from other objects even when the latter have faces. Developmental changes in children's animistic beliefs are also discussed.
Subjects
Robotics, Humans, Preschool Child, Male, Female, Child Development, Social Perception, Morals, Child Behavior/psychology, Age Factors, Culture
ABSTRACT
BACKGROUND: Numerous user-related psychological dimensions can significantly influence the dynamics between humans and robots. For developers and researchers, it is crucial to have a comprehensive understanding of the psychometric properties of the available instruments used to assess these dimensions as they indicate the reliability and validity of the assessment. OBJECTIVE: This study aims to provide a systematic review of the instruments available for assessing the psychological aspects of the relationship between people and social and domestic robots, offering a summary of their psychometric properties and the quality of the evidence. METHODS: A systematic review was conducted following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines across different databases: Scopus, PubMed, and IEEE Xplore. The search strategy encompassed studies meeting the following inclusion criteria: (1) the instrument could assess psychological dimensions related to social and domestic robots, including attitudes, beliefs, opinions, feelings, and perceptions; (2) the study focused on validating the instrument; (3) the study evaluated the psychometric properties of the instrument; (4) the study underwent peer review; and (5) the study was in English. Studies focusing on industrial robots, rescue robots, or robotic arms or those primarily concerned with technology validation or measuring anthropomorphism were excluded. Independent reviewers extracted instrument properties and the methodological quality of their evidence following the Consensus-Based Standards for the Selection of Health Measurement Instruments guidelines. RESULTS: From 3828 identified records, the search strategy yielded 34 (0.89%) articles that validated and examined the psychometric properties of 27 instruments designed to assess individuals' psychological dimensions in relation to social and domestic robots. 
These instruments encompass a broad spectrum of psychological dimensions. While most studies predominantly focused on structural validity (24/27, 89%) and internal consistency (26/27, 96%), consideration of other psychometric properties was frequently inconsistent or absent. No instrument evaluated measurement error or responsiveness, despite their significance in clinical contexts. Most of the instruments (17/27, 63%) were targeted at both adults and older adults (aged ≥18 years). There was a limited number of instruments specifically designed for children, older adults, and health care contexts. CONCLUSIONS: Given the strong interest in assessing psychological dimensions in the human-robot relationship, there is a need to develop new instruments using more rigorous methodologies and to consider a broader range of psychometric properties. This is essential to ensure the creation of reliable and valid measures for assessing people's psychological dimensions regarding social and domestic robots. Among its limitations, this review included instruments applicable to both social and domestic robots while excluding those for other specific types of robots (eg, industrial robots).
Subjects
Psychometrics, Robotics, Humans, Reproducibility of Results
ABSTRACT
This paper proposes a novel tour guide robot, "ASAHI ReBorn", which can lead a guest by hand one-on-one while maintaining a proper distance from the guest. The robot uses a stretchable arm interface to hold the guest's hand and adjusts its speed according to the guest's pace. It also follows a given guide path accurately using the Robot Side method, a navigation technique for tracking a pre-defined path quickly and precisely. In addition, a control method is introduced that limits the robot's angular velocity to avoid quick turns while guiding the guest. We evaluated the performance and usability of the proposed robot through experiments and user studies. The tour-guiding experiment revealed that keeping a distance between the robot and the guest via the stretchable arm allowed guests to look around the exhibits more than when the robot moved at a constant velocity.
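The abstract does not give the speed-adaptation law, so the following is a minimal sketch of one plausible scheme, assuming the arm's measured extension acts as a proxy for the guest lagging behind; the function name, gain, and lengths are all illustrative, not taken from the paper.

```python
def adapt_speed(v_nominal, arm_length, rest_length, gain=0.5, v_min=0.0):
    """Slow the robot as the stretchable arm extends beyond its rest
    length, i.e. as the guest falls behind.  Hypothetical gains and
    units (metres, m/s); the actual ASAHI ReBorn controller may differ."""
    stretch = max(0.0, arm_length - rest_length)   # arm extension
    return max(v_min, v_nominal - gain * stretch)  # clamped forward speed
```

A complementary clamp on the commanded turn rate would realize the quick-turn avoidance mentioned above in the same spirit.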
ABSTRACT
Many mobile robotics applications require robots to navigate around humans who may interpret the robot's motion in terms of social attitudes and intentions. It is essential to understand which aspects of the robot's motion are related to such perceptions so that we may design appropriate navigation algorithms. Current work in social navigation tends to strive towards a single ideal style of motion defined with respect to concepts such as comfort, naturalness, or legibility. These algorithms cannot be configured to alter trajectory features to control the social interpretations made by humans. In this work, we first present logistic regression models, based on perception experiments, that link human perceptions to a corpus of linear velocity profiles, establishing that various trajectory features impact human social perception of the robot. Second, we formulate a trajectory planning problem as a constrained optimization, using novel constraints that can be selectively applied to shape the trajectory so that it generates the desired social perception. We demonstrate the ability of the proposed algorithm to accurately change each feature of the generated trajectories based on the selected constraints, enabling subtle variations in the robot's motion to be applied consistently. By controlling the trajectories to induce different social perceptions, we provide a tool to better tailor the robot's actions to its role and deployment context and thereby enhance acceptability.
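The perception-model step above can be illustrated with a toy, hand-rolled logistic regression mapping one trajectory feature (peak velocity) to a binary perception label; the data, feature choice, and label name are invented for illustration and are not the paper's corpus.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Plain stochastic-gradient logistic regression estimating
    P(perception label | trajectory feature)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

# Illustrative data: faster profiles are more often judged "hurried".
peak_vel = [0.2, 0.3, 0.4, 0.9, 1.0, 1.1]   # m/s (hypothetical)
hurried  = [0,   0,   0,   1,   1,   1]
w, b = fit_logistic(peak_vel, hurried)
```

The fitted probability can then serve as the objective or constraint target when shaping a planned velocity profile.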
ABSTRACT
As robots become increasingly common in human-populated environments, they must behave socially and be perceived as social beings. People try to preserve their personal space during social interactions, and this space depends on a variety of factors, such as individual characteristics or age. Real-world social spaces contain many different types of people, and robots need to be more sensitive to this, especially when interacting with vulnerable subjects such as children. However, current navigation methods do not consider these differences and apply the same avoidance strategies to everyone. We therefore propose a new navigation framework that considers different social types and defines an appropriate personal space for each, allowing robots to respect them. To this end, the robot classifies the people in a real environment into social types and defines the personal space for each type as an asymmetric Gaussian function. The proposed framework is validated through simulations and real-world experiments, demonstrating that the robot can improve the quality of its interactions by providing each individual with an adaptive personal space. The proposed costmap layer is available on GitHub.
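An asymmetric Gaussian personal-space cost of the kind described can be sketched as follows; the amplitude and per-direction variances are illustrative placeholders, where the framework would tune them per social type (e.g. a wider frontal lobe for children).

```python
import math

def personal_space_cost(dx, dy, theta, amp=254.0,
                        var_front=0.45, var_rear=0.2, var_side=0.3):
    """Asymmetric 2-D Gaussian cost around a person facing `theta`.
    (dx, dy) is the robot's position relative to the person; the
    variance ahead of the person differs from the one behind,
    making the lobe asymmetric.  Parameter values are hypothetical."""
    # Rotate into the person's frame: x' points where the person faces.
    xr = math.cos(theta) * dx + math.sin(theta) * dy
    yr = -math.sin(theta) * dx + math.cos(theta) * dy
    var_x = var_front if xr >= 0 else var_rear
    return amp * math.exp(-(xr ** 2 / (2 * var_x) + yr ** 2 / (2 * var_side)))
```

A costmap layer would evaluate this at each cell, so the planner naturally keeps more clearance in front of a person than behind.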
Subjects
Robotics, Robotics/methods, Humans, Algorithms, Social Interaction
ABSTRACT
As gestures play an important role in human communication, a number of service robots have been equipped with a pair of human-like arms for gesture-based human-robot interaction. However, the arms of most human companion robots are limited to slow and simple gestures due to the low maximum velocity of the arm actuators. In this work, we present the JF-2 robot, a mobile home service robot equipped with a pair of torque-controlled anthropomorphic arms. Thanks to the low-inertia design of the arms, responsive Quasi-Direct Drive (QDD) actuators, and active compliant control of the joints, the robot can replicate fast human dance motions while remaining safe in its environment. In addition to the JF-2 robot, we also present the JF-mini robot, a scaled-down, low-cost version of the JF-2 mainly targeted at commercial use in kindergartens and childcare facilities. The suggested system is validated through three experiments: a safety test, teaching children how to dance along to music, and bringing a requested item to a human subject.
Subjects
Dance, Robotics, Robotics/methods, Humans, Dance/physiology, Gestures, Equipment Design
ABSTRACT
Robots are becoming an increasingly important part of our society and have started to be used in tasks that require communicating with humans. Communication can be decoupled into two dimensions: symbolic (information aimed at achieving a particular goal) and spontaneous (displaying the speaker's emotional and motivational state). Thus, to enhance human-robot interactions, the expressions used have to convey both dimensions. This paper presents a method for modelling a robot's expressiveness as a combination of these two dimensions, where each can be generated independently. This is the first contribution of our work. The second contribution is the development of an expressiveness architecture that uses predefined multimodal expressions to convey the symbolic dimension and integrates a series of modulation strategies for conveying the robot's mood and emotions. To validate the performance of the proposed architecture, the last contribution is a series of experiments studying the effect that adding the spontaneous dimension of communication, and fusing it with the symbolic dimension, has on how people perceive a social robot. Our results show that the modulation strategies improve the users' perception and can convey a recognizable affective state.
ABSTRACT
Interactions between mobile robots and human operators in common areas require a high level of safety, especially in terms of trajectory planning, obstacle avoidance and mutual cooperation. In this connection, the crossings of planned trajectories and their uncertainty based on model fluctuations, system noise and sensor noise play an outstanding role. This paper discusses the calculation of the expected areas of interactions during human-robot navigation with respect to fuzzy and noisy information. The expected crossing points of the possible trajectories are nonlinearly associated with the positions and orientations of the robots and humans. The nonlinear transformation of a noisy system input, such as the directions of the motion of humans and robots, to a system output, the expected area of intersection of their trajectories, is performed by two methods: statistical linearization and the sigma-point transformation. For both approaches, fuzzy approximations are presented and the inverse problem is discussed where the input distribution parameters are computed from the given output distribution parameters.
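The sigma-point transformation mentioned above can be sketched for a scalar input; this is the standard unscented transform (exact for linear maps, a second-order approximation otherwise), not the paper's specific fuzzy variant, and the choice of `kappa` is illustrative.

```python
import math

def unscented_1d(f, mean, var, kappa=2.0):
    """Sigma-point (unscented) transform of a scalar random input
    through a nonlinearity f, e.g. propagating a noisy heading into
    a trajectory-intersection coordinate.  Returns the approximated
    (mean, variance) of f(x)."""
    n = 1
    spread = math.sqrt((n + kappa) * var)
    pts = [mean, mean + spread, mean - spread]           # sigma points
    w = [kappa / (n + kappa)] + [1.0 / (2 * (n + kappa))] * 2
    ys = [f(p) for p in pts]
    y_mean = sum(wi * yi for wi, yi in zip(w, ys))
    y_var = sum(wi * (yi - y_mean) ** 2 for wi, yi in zip(w, ys))
    return y_mean, y_var
```

For the intersection-area problem, the same construction is applied per input dimension (positions and orientations of robot and human), with the crossing-point geometry as `f`.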
Subjects
Algorithms, Robotics, Robotics/methods, Humans, Fuzzy Logic
ABSTRACT
This study develops a comprehensive robotic system, termed the robot cognitive system, for complex environments, integrating three models: the engagement model, the intention model, and the human-robot interaction (HRI) model. The system aims to enhance the naturalness and comfort of HRI by enabling robots to detect human behaviors, intentions, and emotions accurately. A novel dual-arm-hand mobile robot, Mobi, was designed to demonstrate the system's efficacy. The engagement model utilizes eye gaze, head pose, and action recognition to determine the suitable moment for interaction initiation, addressing potential eye contact anxiety. The intention model employs sentiment analysis and emotion classification to infer the interactor's intentions. The HRI model, integrated with Google Dialogflow, facilitates appropriate robot responses based on user feedback. The system's performance was validated in a retail environment scenario, demonstrating its potential to improve the user experience in HRIs.
Subjects
Robotics, Humans, Robotics/methods, Emotions/physiology, User-Computer Interface, Man-Machine Systems
ABSTRACT
This paper presents a model for generating expressive robot motions based on human expressive movements. The proposed data-driven approach combines variational autoencoders with a generative adversarial network framework to extract the essential features of human expressive motion and generate expressive robot motion accordingly. The primary objective was to transfer the underlying expressive features from human to robot motion. The input to the model consists of the robot task, defined by the robot's linear and angular velocities, and the expressive data, defined by the movement of a human body part represented by its acceleration and angular velocity. The experimental results show that the model can effectively recognize and transfer expressive cues to the robot, producing new movements that incorporate the expressive qualities derived from the human input. Furthermore, the generated motions exhibited variability with different human inputs, highlighting the ability of the model to produce diverse outputs.
Subjects
Robotics, Humans, Motion, Acceleration, Movement, Cues
ABSTRACT
For successful human-robot collaboration, it is crucial to establish and sustain quality interaction between humans and robots, making it essential to facilitate human-robot interaction (HRI) effectively. The evolution of robot intelligence now enables robots to take a proactive role in initiating and sustaining HRI, thereby allowing humans to concentrate more on their primary tasks. In this paper, we introduce a system known as the Robot-Facilitated Interaction System (RFIS), where mobile robots are employed to perform identification, tracking, re-identification, and gesture recognition in an integrated framework to ensure anytime readiness for HRI. We implemented the RFIS on an autonomous mobile robot used for transporting a patient, to demonstrate proactive, real-time, and user-friendly interaction with a caretaker involved in monitoring and nursing the patient. In the implementation, we focused on the efficient and robust integration of various interaction facilitation modules within a real-time HRI system that operates in an edge computing environment. Experimental results show that the RFIS, as a comprehensive system integrating caretaker recognition, tracking, re-identification, and gesture recognition, can provide an overall high quality of interaction in HRI facilitation with average accuracies exceeding 90% during real-time operations at 5 FPS.
Assuntos
Gestos , Robótica , Robótica/métodos , Humanos , Reconhecimento Automatizado de Padrão/métodos , Algoritmos , Inteligência ArtificialRESUMO
Recently, research has been conducted on mixed reality (MR), which provides immersive visualization and interaction experiences, and on mapping human motions directly onto a robot in MR space to achieve a high level of immersion. However, even when the robot is mapped onto the MR space, its surrounding environment is often not mapped sufficiently; this makes it difficult to comfortably perform tasks that require precise manipulation of objects that are hard to see from the human perspective. We therefore propose a system that allows users to operate a robot in real space by mapping the task environment around the robot onto the MR space and performing operations within it.
ABSTRACT
This paper presents the results of an experiment designed to explore whether users assign an ethnic identity to the Misty II robot based on the robot's voice accent, place of origin, and given name. To explore this topic, a 2 × 3 within-subject study was run in which a humanoid robot spoke with a male or female voice in one of three accents (Chinese, American, Mexican). With participants who identified as American, the results indicated that users were able to identify the gender and ethnic identity of the Misty II robot with a high degree of accuracy from a minimal set of social cues. However, the version of Misty II presenting an American ethnic identity was identified more accurately than versions presenting cues signaling a Mexican or Chinese identity. Implications of the results for the design of human-robot interfaces are discussed.
Assuntos
Etnicidade , Robótica , Voz , Feminino , Humanos , Masculino , Adulto Jovem , Voz/fisiologiaRESUMO
During the learning of a new sensorimotor task, individuals are usually provided with instructional stimuli and relevant information about the target task. The inclusion of haptic devices in the study of this kind of learning has greatly helped in understanding how an individual can improve or acquire new skills. However, the way in which this information and these stimuli are delivered has not been extensively explored. We designed a challenging task with a nonintuitive visuomotor perturbation that allows us to apply and compare different motor strategies to study the teaching process while avoiding interference from prior knowledge in naïve subjects. Three subject groups participated in our experiment, in which learning by repetition without assistance, learning by repetition with assistance, and task segmentation learning were performed with a haptic robot. Our results show that all groups were able to complete the task successfully and that the subjects' performance during training and evaluation was not affected by modifying the teaching strategy. Nevertheless, our results indicate that the presented task design is useful for the study of sensorimotor teaching and that the presented metrics are suitable for exploring the evolution of accuracy and precision during learning.
Assuntos
Aprendizagem , Robótica , Humanos , Robótica/métodos , Algoritmos , Destreza MotoraRESUMO
The paradigm of Industry 5.0 pushes the transition from the traditional to a novel, smart, digital, and connected industry, where well-being is key to enhancing productivity, optimizing man-machine interaction, and guaranteeing workers' safety. This work conducts a systematic review of current methodologies for monitoring and analyzing physical and cognitive ergonomics. Three research questions are addressed: (1) which technologies are used to assess the physical and cognitive well-being of workers in the workplace, (2) how the acquired data are processed, and (3) for what purpose this well-being is evaluated. This way, individual factors within the holistic assessment of worker well-being are highlighted, and information is provided synthetically. The analysis was conducted following the PRISMA 2020 statement guidelines. From the sixty-five articles collected, the most adopted (1) technological solutions, (2) parameters, and (3) data analysis and processing methods were identified. Wearable inertial measurement units and RGB-D cameras are the most prevalent devices used for physical monitoring; for cognitive ergonomics, cardiac activity is the most adopted physiological parameter. Furthermore, insights on practical issues and future developments are provided. Future research should focus on developing multi-modal systems that combine these aspects, with particular emphasis on their practical application in real industrial settings.
Subjects
Ergonomics, Workplace, Humans, Cognition/physiology, Ergonomics/instrumentation, Industries, Occupational Health, Wearable Electronic Devices, Workplace/psychology
ABSTRACT
This study evaluates an innovative control approach to assistive robotics by integrating brain-computer interface (BCI) technology and eye tracking into a shared control system for a mobile augmented reality user interface. Aimed at enhancing the autonomy of individuals with physical disabilities, particularly those with impaired motor function due to conditions such as stroke, the system utilizes BCI to interpret user intentions from electroencephalography signals and eye tracking to identify the object of focus, thus refining control commands. This integration seeks to create a more intuitive and responsive assistive robot control strategy. The real-world usability was evaluated, demonstrating significant potential to improve autonomy for individuals with severe motor impairments. The control system was compared with an eye-tracking-based alternative to identify areas needing improvement. Although BCI achieved an acceptable success rate of 0.83 in the final phase, eye tracking was more effective with a perfect success rate and consistently lower completion times (p<0.001). The user experience responses favored eye tracking in 11 out of 26 questions, with no significant differences in the remaining questions, and subjective fatigue was higher with BCI use (p=0.04). While BCI performance lagged behind eye tracking, the user evaluation supports the validity of our control strategy, showing that it could be deployed in real-world conditions and suggesting a pathway for further advancements.
Subjects
Augmented Reality, Brain-Computer Interfaces, Electroencephalography, Eye-Tracking Technology, Robotics, User-Computer Interface, Humans, Robotics/methods, Robotics/instrumentation, Electroencephalography/methods, Male, Female, Adult, Middle Aged, Young Adult, Eye Movements/physiology
ABSTRACT
The current study investigated the effectiveness of social robots in facilitating stress management interventions for university students by evaluating their physiological responses. We collected electroencephalogram (EEG) brain activity and Galvanic Skin Responses (GSRs), together with self-reported questionnaires, from two groups of students who practiced a deep breathing exercise either with a social robot or with a laptop. From the GSR signals, we obtained the change in participants' arousal level throughout the intervention, and from the EEG signals, we extracted the change in their emotional valence using the neurometric of Frontal Alpha Asymmetry (FAA). While subjective perceptions of stress and user experience did not differ significantly between the two groups, the physiological signals revealed differences in their emotional responses as evaluated by the arousal-valence model. The Laptop group tended to show a decrease in arousal level which, in some cases, was accompanied by negative valence indicative of boredom or lack of interest. The Robot group, on the other hand, displayed two patterns: some participants demonstrated a decrease in arousal with positive valence, indicative of calmness and relaxation, while others showed an increase in arousal together with positive valence, interpreted as excitement. These findings provide interesting insights into the impact of social robots as mental well-being coaches on students' emotions, particularly in the presence of the novelty effect. Additionally, they provide evidence for the efficacy of physiological signals as an objective and reliable measure of user experience in HRI settings.
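The FAA neurometric referenced above is conventionally the log-power difference in the alpha band between homologous frontal electrodes; a minimal sketch follows, assuming the common F4/F3 pair (the study's exact electrode choice and band limits are not given in the abstract).

```python
import math

def frontal_alpha_asymmetry(alpha_power_right, alpha_power_left):
    """FAA = ln(right alpha power) - ln(left alpha power), e.g. F4 vs F3.
    Because alpha power is inversely related to cortical activation,
    FAA > 0 is conventionally read as relatively greater left-frontal
    activation, associated with positive/approach-related valence."""
    return math.log(alpha_power_right) - math.log(alpha_power_left)
```

In practice, alpha power would be obtained per electrode from the EEG power spectrum (typically 8-13 Hz) before applying this formula.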
Assuntos
Eletroencefalografia , Emoções , Resposta Galvânica da Pele , Saúde Mental , Robótica , Estresse Psicológico , Humanos , Robótica/métodos , Masculino , Feminino , Emoções/fisiologia , Eletroencefalografia/métodos , Estresse Psicológico/terapia , Estresse Psicológico/fisiopatologia , Resposta Galvânica da Pele/fisiologia , Adulto Jovem , Adulto , Inquéritos e Questionários , Nível de Alerta/fisiologia , Estudantes/psicologiaRESUMO
Using lower limb exoskeletons provides potential advantages in terms of productivity and safety associated with reduced stress. However, complex issues in human-robot interaction remain open, such as the physiological effects of exoskeletons and their impact on the user's subjective experience. In this work, an innovative exoskeleton, the Wearable Walker, is assessed using the EXPERIENCE benchmarking protocol from the EUROBENCH project. The Wearable Walker is a lower-limb exoskeleton that enhances human abilities, such as carrying loads. The device uses a unique control approach, called Blend Control, that provides smooth assistance torques. It operates two models simultaneously, one for when the left foot is grounded and another for when the right foot is grounded. These models generate assistive torques that are combined to provide continuous and smooth overall assistance, preventing abrupt torque changes due to model switching. The EXPERIENCE protocol consists of walking on flat ground while gathering physiological signals, such as heart rate and its variability, respiration rate, and galvanic skin response, and completing a questionnaire. The test was performed with five healthy subjects. The scope of the present study is twofold: to evaluate the specific exoskeleton and its current control system to gain insight into possible improvements, and to present a case study for a formal and replicable benchmarking of wearable robots.
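The idea behind blending two contact models can be sketched as a continuous convex mix of their torque outputs; the weighting signal and its name are hypothetical, since the abstract does not specify the Wearable Walker's actual blending law.

```python
def blended_torque(tau_left_model, tau_right_model, left_load_share):
    """Mix the assistive torques of the two contact models with the
    estimated load share on the left foot (0..1).  Because the weight
    varies continuously through the step cycle, the combined output
    never jumps when support switches feet.  Illustrative sketch only."""
    w = min(1.0, max(0.0, left_load_share))  # clamp to a valid share
    return w * tau_left_model + (1.0 - w) * tau_right_model
```

During double support the load share sits between 0 and 1, so the output smoothly interpolates between the two models rather than switching discretely.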
Subjects
Powered Exoskeleton, Lower Extremity, Walking, Wearable Electronic Devices, Humans, Lower Extremity/physiology, Walking/physiology, Male, Adult, Robotics/instrumentation, Female, Walkers, Equipment Design, Torque
ABSTRACT
OBJECTIVE: The purpose of this study is to identify the potential biomechanical and cognitive workload effects induced by a human-robot collaborative pollination task, how additional cues and the reliability of the robot influence these effects, and whether interacting with the robot influences participants' anxiety and attitudes towards robots. BACKGROUND: Human-Robot Collaboration (HRC) could be used to alleviate pollinator shortages and robot performance issues. However, the effects of HRC in this setting have not been investigated. METHODS: Sixteen participants were recruited. Four HRC modes were included: no cue, with cue, unreliable, and manual control. Three categories of dependent variables were measured: (1) spine kinematics (L5/S1, L1/T12, and T1/C7), (2) pupillary activation data, and (3) subjective measures such as perceived workload, robot-related anxiety, and negative attitudes towards robotics. RESULTS: HRC reduced anxiety towards the cobot, decreased joint angles and angular velocity for the L5/S1 and L1/T12 joints, and reduced pupil dilation, with the "with cue" mode producing the lowest values. However, unreliability was detrimental to these gains. In addition, HRC resulted in a higher flexion angle for the neck (i.e., T1/C7). CONCLUSION: HRC reduced the physical and mental workload during the simulated pollination task. Benefits of the additional cue were minimal compared to no cues. The increased neck joint angle, and the effect of unreliability on lower- and mid-back joint angles and workload, require further investigation. APPLICATION: These findings could be used to inform design decisions for HRC frameworks for agricultural applications that are cognizant of the different effects induced by HRC.