Results: 1 - 9 of 9
1.
Front Robot AI; 9: 815871, 2022.
Article in English | MEDLINE | ID: mdl-35592682

ABSTRACT

For robots that can provide physical assistance, maintaining synchronicity of robot and human movement is a precursor to interaction safety. Existing research on collaborative HRI does not consider how synchronicity can be affected when humans are subjected to cognitive overloading and distractions during close physical interaction. Cognitive neuroscience has shown that unexpected events during interactions affect not only action cognition but also human motor control (Gentsch et al., Cognition, 2016, 146, 81-89). If the robot is to safely adapt its trajectory to distracted human motion, quantitative changes in the human movement should be evaluated. The main contribution of this study is the analysis and quantification of disrupted human movement during a physical collaborative task that involves robot-assisted dressing. Quantifying disrupted movement is the first step in maintaining the synchronicity of the human-robot interaction. The human movement data, collected from a series of experiments in which participants were subjected to cognitive loading and distractions during the human-robot interaction, are projected into a 2-D latent space that efficiently represents the high dimensionality and non-linearity of the data. The quantitative data analysis is supported by a qualitative study of user experience, using the NASA Task Load Index to measure perceived workload and the PeRDITA questionnaire to represent the human psychological state during these interactions. In addition, we present an experimental methodology for collecting interaction data in this type of human-robot collaboration that provides realism, experimental rigour, and high fidelity of the human-robot interaction in the scenarios.
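The abstract does not name the embedding technique behind the 2-D latent space; the sketch below uses t-SNE from scikit-learn purely as a stand-in non-linear projection, with invented placeholder data rather than the study's recordings.

```python
# Hypothetical sketch: embedding high-dimensional movement trajectories into a
# 2-D latent space. t-SNE is used only as a stand-in non-linear method.
import numpy as np
from sklearn.manifold import TSNE

# Assumed shape: one row per time step, columns are joint/marker coordinates.
rng = np.random.default_rng(0)
movement = rng.normal(size=(500, 60))   # placeholder for recorded motion data

latent = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(movement)

# Disruption could then be quantified as displacement between the latent
# trajectories of distracted vs. undistracted trials (illustrative only).
print(latent.shape)   # (500, 2)
```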

2.
Front Robot AI; 8: 578596, 2021.
Article in English | MEDLINE | ID: mdl-34671646

ABSTRACT

A key challenge in achieving effective robot teleoperation is minimizing teleoperators' cognitive workload and fatigue. We set out to investigate the extent to which gaze-tracking data can reveal how teleoperators interact with a system. In this study, we present an analysis of gaze tracking captured as participants completed a multi-stage task: grasping and emptying the contents of a jar into a container. The task was repeated with different combinations of visual, haptic, and verbal feedback. Our aim was to determine whether teleoperation workload can be inferred by combining gaze duration, fixation count, task completion time, and complexity of robot motion (measured as the sum of robot joint steps) at different stages of the task. Visual information about the robot workspace was captured using four cameras positioned to view the workspace from different angles. These camera views (aerial, right, eye-level, and left) were displayed in the four quadrants (top-left, top-right, bottom-left, and bottom-right, respectively) of participants' video-feedback screen. We found that gaze duration and fixation count were highly dependent on the stage of the task and the feedback scenario used. The results revealed that combining feedback modalities reduced cognitive workload (inferred by examining the correlation between gaze duration, fixation count, task completion time, success or failure of task completion, and robot gripper trajectories), particularly in the task stages that require more precision. There was a significant positive correlation between gaze duration and the complexity of robot joint movements. Participants' gaze outside the areas of interest (distractions) was not influenced by the feedback scenarios. A learning effect in the use of the controller was observed for all participants as they repeated the task under different feedback combinations. For designing a teleoperation system applicable in healthcare, we found that analysis of teleoperators' gaze can help us understand how they interact with the system, making it possible to develop the system from the teleoperators' standpoint.
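As a rough illustration of the correlation analysis mentioned above (gaze duration versus robot motion complexity), the following sketch applies a rank correlation to invented values; the study's actual statistics, measures, and data are not reproduced here.

```python
# Illustrative correlation sketch: gaze duration vs. robot motion complexity
# (sum of joint steps). Variable names and values are placeholders.
import numpy as np
from scipy.stats import spearmanr

gaze_duration = np.array([4.2, 6.1, 5.5, 8.3, 7.0, 9.4])   # seconds per task stage
joint_steps   = np.array([120, 180, 150, 260, 210, 300])   # summed robot joint steps

rho, p_value = spearmanr(gaze_duration, joint_steps)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```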

3.
Front Robot AI; 8: 667316, 2021.
Article in English | MEDLINE | ID: mdl-34195231

ABSTRACT

Hazard analysis methods such as HAZOP and STPA have long proven effective for assuring system safety. However, the dimensionality and human-factors uncertainty of many assistive robotic applications challenge the capability of these methods to provide comprehensive coverage of safety issues from interdisciplinary perspectives in a timely and cost-effective manner. Physically assistive tasks in which a range of dynamic contexts require continuous human-robot physical interaction, such as robot-assisted dressing or sit-to-stand assistance, pose a new paradigm for safe design and safety analysis methodology. For these types of tasks, consideration has to be given to a range of dynamic contexts in which the robot assistance requires close and continuous physical contact with users. Current regulations mainly cover industrial collaborative robotics with respect to physical human-robot interaction (pHRI) but largely neglect direct and continuous physical human contact. In this paper, we explore the limitations of commonly used safety analysis techniques when applied to robot-assisted dressing scenarios. We provide a detailed analysis of the system requirements from the user perspective and consider user-bounded hazards that can compromise the safety of this complex pHRI.
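Purely as an illustration of how user-bounded hazards might be captured in an HAZOP/STPA-style worksheet for robot-assisted dressing, the sketch below defines a hypothetical record structure; the field names and example entry are assumptions, not a scheme taken from the paper.

```python
# Hypothetical hazard-record structure for a user-bounded, pHRI-focused
# safety analysis. Fields are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class HazardRecord:
    task_phase: str                 # e.g. "sleeve over forearm"
    unsafe_condition: str           # deviation or unsafe control action
    user_factor: str                # user-bounded contributor (mobility, distraction, ...)
    potential_harm: str
    mitigations: list[str] = field(default_factory=list)

example = HazardRecord(
    task_phase="sleeve over forearm",
    unsafe_condition="garment snags while the arm is raised",
    user_factor="limited shoulder range of motion",
    potential_harm="joint strain from continued pulling",
    mitigations=["force threshold on end-effector", "pause-and-confirm prompt"],
)
print(example.potential_harm)
```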

4.
Front Psychol; 11: 571961, 2020.
Article in English | MEDLINE | ID: mdl-33071906

ABSTRACT

Driving cessation can exacerbate physical, cognitive, and mental health challenges for some older adults owing to loss of independence and social isolation. Fully autonomous vehicles may offer an alternative transport solution, increasing social contact and encouraging independence. However, there are gaps in understanding the impact of older adults' passive role on safe human-vehicle interaction and on their well-being. Thirty-seven older adults (mean age ± SD = 68.35 ± 8.49 years) participated in an experiment in which they experienced fully autonomous journeys, each including a distinct stop (an unexpected versus an expected event). The autonomous behavior of the vehicle was achieved using a Wizard of Oz approach. Subjective ratings of trust and reliability, together with driver-state monitoring including visual attention strategies (fixation duration and count) and physiological arousal (skin conductance and heart rate), were captured during the journeys. Results revealed that subjective trust and reliability ratings were high after journeys for both types of events. During an unexpected stop, overt visual attention was allocated toward the event, whereas during an expected stop, visual attention was directed toward the human-machine interface (HMI) and distributed across the central and peripheral driving environment. An elevated skin conductance level, reflecting increased arousal, persisted only after the unexpected event. These results suggest that safety-critical events occurring during passive, fully automated driving may narrow visual attention and elevate arousal mechanisms. To improve the in-vehicle user experience for older adults, a driver-state monitoring system could examine such psychophysiological indices to evaluate functional state and well-being. This information could then be used to make informed decisions on vehicle behavior and to offer reassurance when arousal is elevated during unexpected events.
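A minimal sketch of the kind of event-locked arousal comparison described above (mean skin conductance level before versus after a stop event) follows; the sampling rate, window lengths, and data are placeholders, not the study's recordings.

```python
# Hypothetical event-locked skin conductance level (SCL) comparison.
import numpy as np

fs = 4                                                            # assumed sampling rate (Hz)
scl = np.random.default_rng(1).normal(6.0, 0.3, size=fs * 120)    # 2 min of SCL (microsiemens)
event_idx = fs * 60                                               # stop event at the 60 s mark

pre  = scl[event_idx - fs * 30 : event_idx]     # 30 s window before the event
post = scl[event_idx : event_idx + fs * 30]     # 30 s window after the event

delta = post.mean() - pre.mean()
print(f"Change in mean SCL after the event: {delta:.2f} uS")
```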

5.
Front Robot AI; 7: 1, 2020.
Article in English | MEDLINE | ID: mdl-33501170

ABSTRACT

Emotional deception and emotional attachment are regarded as ethical concerns in human-robot interaction. Considering these concerns is essential, particularly as little is known about the longitudinal effects of interactions with social robots. We ran a longitudinal user study with older adults in two retirement villages, where people interacted with a robot in a didactic setting for eight sessions over a period of four weeks. The robot showed either non-emotive or emotive behavior during these interactions in order to investigate emotional deception. Questionnaires were administered to investigate participants' acceptance of the robot, their perception of the social interactions with the robot, and their attachment to the robot. Results show that the robot's behavior did not appear to influence participants' acceptance of the robot, perception of the interaction, or attachment to the robot. Time did not appear to influence participants' level of attachment to the robot, which ranged from low to medium. The perceived ease of using the robot significantly increased over time. These findings indicate that a robot showing emotions (and perhaps thereby deceiving users) in a didactic setting may not, by default, negatively influence participants' acceptance and perception of the robot, and that older adults may not become distressed if the robot were to break or be taken away from them, as attachment to the robot in this didactic setting was not high. However, more research is required, as other factors may influence these ethical concerns, and support from measurements other than questionnaires is needed before conclusions can be drawn.

6.
Sensors (Basel); 19(22), 2019 Nov 07.
Article in English | MEDLINE | ID: mdl-31703476

ABSTRACT

A correction is presented to the section headings of Sections 5.1, 5.2, and 5.3 in [Sensors, 2017, 17, 1034].

7.
JMIR Ment Health; 5(4): e58, 2018 Sep 14.
Article in English | MEDLINE | ID: mdl-30287415

ABSTRACT

BACKGROUND: SAM (Self-help for Anxiety Management) is a mobile phone app that provides self-help for anxiety management. Launched in 2013, the app has achieved over one million downloads across the iOS and Android app stores. Key features of the app are anxiety monitoring, self-help techniques, and social support via a mobile forum ("the Social Cloud"). This paper presents unique insights into eMental health app usage patterns and explores user behaviors and usage of self-help techniques. OBJECTIVE: The objective of our study was to investigate behavioral engagement and to establish discernible usage patterns of the app linked to the features of anxiety monitoring, ratings of self-help techniques, and social participation. METHODS: We used data mining techniques on aggregate data obtained from 105,380 registered users of the app's cloud services. RESULTS: Engagement generally conformed to common mobile participation patterns, with an inverted pyramid or "funnel" of engagement of increasing intensity. We further identified four distinct groups of behavioral engagement, differentiated by levels of activity in anxiety monitoring and social feature usage. Anxiety levels among all monitoring users were markedly reduced in the first few days of usage, with some bounce-back effect thereafter. A small group of users demonstrated long-term anxiety reduction (using a robust measure), typically monitored for 12-110 days with 10-30 discrete updates, and showed low levels of social participation. CONCLUSIONS: The data supported our expectation of different usage patterns, given flexible user journeys and varying commitment in an unstructured mobile phone usage setting. We nevertheless show an aggregate trend of reduction in self-reported anxiety across all minimally engaged users, while noting that, because the dataset was anonymized, we did not have information on users also enrolled in therapy or other interventions while using the app. We find several commonalities between these app-based behavioral patterns and traditional therapy engagement.
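To illustrate the kind of engagement segmentation reported above, the sketch below clusters invented per-user engagement features into four groups with k-means; the paper's actual data-mining pipeline is not described here, so this is only a stand-in under assumed features.

```python
# Hedged sketch: segmenting users into behavioral engagement groups.
# Feature values are invented; k-means is a stand-in clustering method.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
features = np.column_stack([
    rng.poisson(8, 1000),         # number of anxiety-monitoring updates
    rng.poisson(2, 1000),         # number of social-forum posts
    rng.integers(1, 120, 1000),   # days active
])

X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))        # size of each engagement group
```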

8.
Sensors (Basel); 17(5), 2017 May 04.
Article in English | MEDLINE | ID: mdl-28471405

ABSTRACT

The goal of this study is to address two major issues that undermine the large-scale deployment of smart home sensing solutions in people's homes: the costs associated with installing and maintaining a large number of sensors, and the pragmatics of annotating numerous sensor data streams for activity classification. Our aim was therefore to propose a method to describe individual users' behavioural patterns starting from unannotated data from a minimal number of sensors and a "blind" approach to activity recognition. The methodology included processing and analysing sensor data from 17 older adults living in community-based housing to extract activity information at different times of the day. The findings illustrate that 55 days of data from a configuration of three sensors, combined with appropriate features including a "busyness" measure, are adequate to build robust models that can be used to cluster individuals based on their behaviour patterns with a high degree of accuracy (>85%). The obtained clusters can be used to describe individual behaviour at different times of the day. This approach suggests a scalable solution for optimising the personalisation of care by utilising low-cost sensing and analysis. It could be used to track a person's needs over time and fine-tune their care plan on an ongoing basis in a cost-effective manner.
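The sketch below illustrates one plausible version of the pipeline described above: deriving a simple "busyness" profile per home from sensor activation counts and clustering homes by profile. The exact feature definition and clustering method are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch: per-home "busyness" profiles and behaviour-based clustering.
# Event counts are simulated; the clustering choice is illustrative.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(3)
# Assumed input: events[home, day, hour] = count of sensor activations.
events = rng.poisson(3, size=(17, 55, 24))

# Busyness profile: mean activations per hour of day across the 55 days.
profiles = events.mean(axis=1)                      # shape (17, 24)

labels = AgglomerativeClustering(n_clusters=3).fit_predict(profiles)
print(labels)                                        # cluster assignment per home
```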


Subjects
Unsupervised Machine Learning, Activities of Daily Living, Cluster Analysis, Housing, Ambulatory Monitoring