Results 1 - 11 of 11

1.
Int J Hum Comput Interact ; 40(10): 2433-2452, 2024.
Article in English | MEDLINE | ID: mdl-38784821

ABSTRACT

While the needs of and care for children on the autism spectrum have been widely investigated, the interventions and services available to autistic adults have long been overlooked. This survey paper reviewed 32 articles describing assistive technologies that were developed and evaluated through a complete cycle of interactive product design, from ideation and prototyping to user evaluation. These assistive technologies aim to improve independence and quality of life for autistic adults. We extracted information on requirement gathering, technology design, and evaluation effectiveness across the design cycle. We found a general lack of requirements-driven design, and the evaluation process was not standardized either. The lack of requirement gathering results in designs based purely on existing literature rather than on actual user needs. Our synthesis of the included papers contributes to developing iterative design considerations for assistive technologies for autistic adults. We also suggest that assistive technologies for autistic adults shift some attention from assisting only those who require at least substantial support to also embracing those who live independently but have difficulties in social interaction. Assistive technologies have the potential to help this group consolidate and enhance their experience of independent living.

2.
Front Psychol ; 14: 1121622, 2023.
Article in English | MEDLINE | ID: mdl-37275735

ABSTRACT

Trust is critical for human-automation collaboration, especially in safety-critical tasks such as driving. Providing explainable information on how the automation system reaches decisions and predictions can improve system transparency, which is believed to further facilitate driver trust and user evaluation of automated vehicles. However, the optimal level of transparency, and how the system should communicate it to calibrate drivers' trust and improve their driving performance, remain uncertain. This uncertainty is compounded by the fact that system reliability remains dynamic due to current technological limitations. To address this issue in conditionally automated vehicles, 30 participants were recruited for a driving simulator study and assigned to either a low or a high system reliability condition. They experienced two driving scenarios accompanied by two types of in-vehicle agents delivering information with different transparency types: "what"-then-wait (on-demand) and "what + why" (proactive). The on-demand agent provided some information about the upcoming event and delivered more if prompted by the driver, whereas the proactive agent provided all information at once. Results indicated that the on-demand agent was more habitable, or naturalistic, to drivers and was perceived as having a faster system response than the proactive agent. Drivers under the high-reliability condition complied with the takeover request (TOR) more often (when the agent was on-demand) and had shorter takeover times (in both agent conditions) than those under the low-reliability condition. These findings inform how automation systems can deliver information to improve transparency while adapting to system reliability and user evaluation, which further contributes to driver trust calibration and performance improvement in future automated vehicles.
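For illustration, the two transparency policies described in this abstract ("what"-then-wait vs. "what + why") could be sketched as simple dialogue rules. The event fields and message wording below are assumptions made for the sketch, not the study's actual dialogue scripts.

```python
from dataclasses import dataclass

@dataclass
class DrivingEvent:
    what: str  # brief description of the upcoming event
    why: str   # rationale or system limitation behind it

def proactive_agent(event: DrivingEvent) -> str:
    """'what + why' policy: deliver all information at once."""
    return f"{event.what} {event.why}"

def on_demand_agent(event: DrivingEvent, driver_asked_why: bool = False) -> str:
    """'what'-then-wait policy: give the event first, add the rationale only if prompted."""
    return f"{event.what} {event.why}" if driver_asked_why else event.what

ev = DrivingEvent("Construction zone ahead, please take over.",
                  "Lane markings are obscured, so automation is limited.")
print(proactive_agent(ev))
print(on_demand_agent(ev))                         # initial "what" only
print(on_demand_agent(ev, driver_asked_why=True))  # more detail on request
```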

3.
Front Psychol ; 14: 1129294, 2023.
Article in English | MEDLINE | ID: mdl-36998376

ABSTRACT

The advancement of Conditionally Automated Vehicles (CAVs) requires research into the critical factors for achieving optimal interaction between drivers and vehicles. The present study investigated the impact of driver emotions and in-vehicle agent (IVA) reliability on drivers' perceptions, trust, perceived workload, situation awareness (SA), and driving performance with a Level 3 automated vehicle system. Two humanoid robots acted as the in-vehicle intelligent agents that guided and communicated with the drivers during the experiment. Forty-eight college students participated in the driving simulator study. Each participant completed a 12-min writing task to induce their designated emotion (happy, angry, or neutral) prior to the driving task. Their affective states were measured before the induction, after the induction, and after the experiment with an emotion assessment questionnaire. During the driving scenarios, the IVAs informed the participants about five upcoming driving events, three of which required the participants to take over control. Participants' SA and takeover driving performance were measured during driving; in addition, participants reported their subjective judgment ratings, trust, and perceived workload (NASA-TLX) toward the Level 3 automated vehicle system after each driving scenario. The results suggested an interaction between emotion and agent reliability on affective trust and on the jerk rate in takeover performance. Participants in the happy, high-reliability condition showed higher affective trust and a lower jerk rate than participants in the other emotion conditions under low reliability; however, no significant differences were found in cognitive trust or the other driving performance measures. We suggest that affective trust is achieved only when both conditions are met: a happy driver emotional state and high system reliability. Happy participants also perceived more physical demand than angry and neutral participants. Our results indicate that trust depends on driver emotional states interacting with the reliability of the system, suggesting that future research and design should consider the impact of driver emotions and system reliability on automated vehicles.
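As a side note, the jerk-rate measure used as a takeover-performance metric can be illustrated with a short sketch. The assumption below is that jerk is computed as the rate of change of acceleration from logged speed samples, which is one common operationalization rather than the paper's exact procedure.

```python
import numpy as np

def mean_abs_jerk(velocity: np.ndarray, dt: float) -> float:
    """Return the mean absolute jerk (m/s^3) over a takeover window.

    velocity: longitudinal speed samples in m/s, evenly spaced at dt seconds.
    """
    acceleration = np.gradient(velocity, dt)  # m/s^2
    jerk = np.gradient(acceleration, dt)      # m/s^3
    return float(np.mean(np.abs(jerk)))

# Example: a 3-second takeover window sampled at 60 Hz (hypothetical speed trace)
dt = 1 / 60
t = np.arange(0, 3, dt)
v = 25 - 5 * np.sin(0.8 * t)
print(mean_abs_jerk(v, dt))
```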

4.
Appl Ergon ; 106: 103912, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36179543

ABSTRACT

Even though the rail industry has made great strides in reducing accidents at crossings, train-vehicle collisions at Highway-Rail Grade Crossings (HRGCs) continue to be a major issue in the US and across the world. In this research, we conducted a driving simulator study (N = 35) to evaluate a hybrid in-vehicle auditory alert (IVAA), composed of both speech and non-speech components, that was selected after two rounds of subjective evaluation studies. Participants drove through a simulated scenario and reacted to HRGCs with and without the IVAA present, under different background-music conditions and crossing warning devices. Driving simulator results showed that the hybrid IVAA significantly improved driving behavior near HRGCs in terms of gaze behavior, braking reaction, and approach speed to the crossing. The study also showed effects of background music and warning device type on driving performance. This work contributes to the large-scale implementation of IVAAs at HRGCs, as well as to the development of guidelines toward a more standardized approach for IVAAs at HRGCs.


Subject(s)
Automobile Driving, Railroads, Humans, Traffic Accidents/prevention & control
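A hybrid IVAA of the kind evaluated above pairs a non-speech component with a speech message. The sketch below illustrates one plausible sequencing; the trigger distance, component timings, and play_* helpers are assumptions for illustration, not the paper's alert design.

```python
import time

def play_tone(duration_s: float) -> None:
    # Placeholder for the non-speech component (e.g., an attention-capturing chime).
    print(f"[tone for {duration_s:.1f} s]")
    time.sleep(duration_s)

def play_speech(message: str) -> None:
    # Placeholder for the speech component delivered by text-to-speech.
    print(f"[speech] {message}")

def hybrid_ivaa(distance_to_crossing_m: float) -> None:
    """Trigger the hybrid alert when the vehicle nears a highway-rail grade crossing."""
    if distance_to_crossing_m < 300:  # hypothetical trigger distance
        play_tone(0.5)
        play_speech("Railroad crossing ahead. Prepare to stop.")

hybrid_ivaa(distance_to_crossing_m=250)
```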
6.
Int J Hum Comput Interact ; 37(3): 249-266, 2021.
Article in English | MEDLINE | ID: mdl-33767571

ABSTRACT

Using robots in therapy for children on the autism spectrum is a promising avenue for child-robot interaction, and one that has garnered significant interest from the research community. After preliminary interviews with stakeholders and an evaluation of music selections, twelve typically developing (TD) children and three children with Autism Spectrum Disorder (ASD) participated in an experiment in which they played the dance freeze game to four songs in partnership with either a NAO robot or a human partner. Overall, there were significant differences between TD children and children with ASD (e.g., in mimicry, dance quality, and game play). Results for TD children were mixed, but they tended to show greater engagement with the researcher. Objective results for children with ASD, however, showed greater attention and engagement while dancing with the robot. There was little difference in game performance between partners or songs for either group, although upbeat music did encourage greater movement than calm music. Using a robot in a musical dance game for children with ASD thus appears to offer advantages and potential similar to those reported in previous research. Implications and directions for future research are discussed alongside the results.

7.
Article in English | MEDLINE | ID: mdl-33829148

ABSTRACT

The diagnosis of Autism Spectrum Disorder (ASD) in children is commonly accompanied by a diagnosis of sensory processing disorders. Abnormalities are usually reported in multiple sensory processing domains, showing a higher prevalence of unusual responses, particularly to tactile, auditory and visual stimuli. This paper discusses a novel robot-based framework designed to target sensory difficulties faced by children with ASD in a controlled setting. The setup consists of a number of sensory stations, together with two different robotic agents that navigate the stations and interact with the stimuli. These stimuli are designed to resemble real world scenarios that form a common part of one's everyday experiences. Given the strong interest of children with ASD in technology in general and robots in particular, we attempt to utilize our robotic platform to demonstrate socially acceptable responses to the stimuli in an interactive, pedagogical setting that encourages the child's social, motor and vocal skills, while providing a diverse sensory experience. A preliminary user study was conducted to evaluate the efficacy of the proposed framework, with a total of 18 participants (5 with ASD and 13 typically developing) between the ages of 4 and 12 years. We derive a measure of social engagement, based on which we evaluate the effectiveness of the robots and sensory stations in order to identify key design features that can improve social engagement in children.
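The abstract above derives a measure of social engagement without stating its formula here. One plausible operationalization, shown below purely as an assumption, scores engagement as the fraction of session time spent in coded engagement-related behaviors.

```python
def engagement_score(coded_intervals, session_length_s: float) -> float:
    """Fraction of session time spent in engagement-related behaviors.

    coded_intervals: list of (start_s, end_s, behavior) tuples from video coding,
    e.g., gaze toward the robot, touching a station, or responding vocally.
    """
    engaged = sum(end - start for start, end, _ in coded_intervals)
    return engaged / session_length_s

intervals = [(10.0, 25.0, "gaze_at_robot"), (40.0, 52.0, "vocal_response")]
print(engagement_score(intervals, session_length_s=300.0))  # -> 0.09
```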

8.
Appl Sci (Basel) ; 8(2), 2018 Feb.
Article in English | MEDLINE | ID: mdl-35582004

ABSTRACT

Imitation is a powerful component of communication between people, and it has important implications for improving the quality of interaction in the field of human-robot interaction (HRI). This paper discusses a novel framework designed to improve human-robot interaction through robotic imitation of a participant's gestures. In our experiment, a humanoid robotic agent socializes with and plays games with a participant. For the experimental group, the robot additionally imitates one of the participant's novel gestures during a play session. We hypothesize that the robot's use of imitation will increase the participant's openness toward engaging with the robot. Experimental results from a user study of 12 subjects show that, post-imitation, experimental subjects displayed a more positive emotional state, showed more instances of mood contagion toward the robot, and perceived the robot as having a higher level of autonomy than their control-group counterparts did. These results point to increased participant interest in engagement fueled by personalized imitation during interaction.

9.
Proc Hum Factors Ergon Soc Annu Meet ; 61(1): 808-812, 2017 Sep.
Article in English | MEDLINE | ID: mdl-34880592

ABSTRACT

Experimenters need robots that are easier to control for experimental purposes. In this paper, we conducted interviews to elicit interaction requirements for human-robot interaction scenarios. User input was then incorporated into an Android application for remotely controlling an Aldebaran Nao robot in Wizard-of-Oz experiments and demos. The app was used in a usability study comparing it with an existing Nao remote control app. Results were positive, highlighting the app's ease of use and organization. Future work includes a more complete usability trial evaluating the unique functionality of the app, as well as a case study of the app in a real Wizard-of-Oz experiment.
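The study above evaluated an Android app for Wizard-of-Oz control of a Nao robot. As a rough illustration of the same idea from the Python side (not the app itself), the sketch below dispatches a wizard's button presses to robot behaviors through Aldebaran's naoqi SDK; the IP address and command set are assumptions for the sketch.

```python
from naoqi import ALProxy

ROBOT_IP, PORT = "192.168.1.10", 9559  # hypothetical robot address

tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)
posture = ALProxy("ALRobotPosture", ROBOT_IP, PORT)

def wizard_command(command: str) -> None:
    """Dispatch a wizard's button press to a robot behavior."""
    if command == "greet":
        tts.say("Hello! Let's play a game together.")
    elif command == "stand":
        posture.goToPosture("StandInit", 0.5)

wizard_command("greet")
```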

10.
Appl Ergon ; 50: 185-99, 2015 Sep.
Article in English | MEDLINE | ID: mdl-25959334

ABSTRACT

Research has suggested that interaction with an in-vehicle software agent can improve a driver's psychological state and increase road safety. The present study explored the possibility of using an in-vehicle software agent to mitigate the effects of driver anger on driving behavior. After either anger or neutral mood induction, 60 undergraduates drove in a simulator with two types of agent intervention. Results showed that both speech-based agents not only enhanced driver situation awareness and driving performance but also reduced drivers' anger level and perceived workload. Regression models showed that a driver's anger influenced driving performance measures, with the effect mediated by situation awareness. The practical implications include guidelines for the design of social interaction with in-vehicle software agents.


Subject(s)
Anger, Automobile Driving/psychology, Interpersonal Relations, Workload/psychology, Automobile Driving/standards, Automobiles, Awareness, Computer Simulation, Female, Humans, Male, Young Adult
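The mediation described above (anger affecting driving performance through situation awareness) can be illustrated with a simple regression-based sketch. The synthetic data and column names below are assumptions for illustration, not the study's data or exact models.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 60
anger = rng.integers(0, 2, n)                 # 0 = neutral induction, 1 = anger induction
sa = 70 - 8 * anger + rng.normal(0, 5, n)     # situation awareness score
performance = 0.5 * sa + rng.normal(0, 3, n)  # hypothetical driving-performance measure
df = pd.DataFrame({"anger": anger, "sa": sa, "performance": performance})

model_a = smf.ols("sa ~ anger", data=df).fit()                 # path a: anger -> SA
model_b = smf.ols("performance ~ sa + anger", data=df).fit()   # paths b and c': SA -> performance, controlling for anger
print(model_a.params, model_b.params, sep="\n")
```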
11.
Hum Factors ; 55(1): 157-82, 2013 Feb.
Article in English | MEDLINE | ID: mdl-23516800

ABSTRACT

OBJECTIVE: The goal of this project was to evaluate a new auditory cue, which the authors call spearcons, in comparison with other auditory cues, with the aim of improving auditory menu navigation. BACKGROUND: With the shrinking displays of mobile devices and increasing technology use by visually impaired users, it becomes important to improve the usability of non-graphical user interfaces, such as auditory menus. Using nonspeech sounds called auditory icons (i.e., representative real sounds of objects or events) or earcons (i.e., brief musical melody patterns) has been proposed to enhance menu navigation. To compensate for the weaknesses of traditional nonspeech auditory cues, the authors developed spearcons by speeding up a spoken phrase, even to the point where it is no longer recognized as speech. METHOD: The authors conducted five empirical experiments. In Experiments 1 and 2, they measured menu navigation efficiency and accuracy across cues. In Experiments 3 and 4, they evaluated the learning rate of the cues and of speech itself. In Experiment 5, they assessed spearcon enhancements compared with plain TTS (text-to-speech rendering of written menu items) in a two-dimensional auditory menu. RESULTS: Spearcons outperformed traditional and newer hybrid auditory cues in navigation efficiency, accuracy, and learning rate. Moreover, spearcons showed learnability comparable to normal speech and led to better performance than speech-only auditory cues in two-dimensional menu navigation. CONCLUSION: These results show that spearcons can be more effective than previous auditory cues in menu-based interfaces. APPLICATION: Spearcons have broadened the taxonomy of nonspeech auditory cues, and users can benefit from the application of spearcons in real devices.


Subject(s)
Acoustic Stimulation/methods, Auditory Perception, Cell Phones/trends, Handheld Computers/trends, User Interface, Adolescent, Analysis of Variance, Cell Phones/instrumentation, Cues, Data Display, Female, Humans, Male, Sound, Speech, Young Adult
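Spearcon creation as described above amounts to time-compressing a text-to-speech rendering of a menu item until it is no longer heard as speech. The sketch below shows one way to do this with librosa; the file names and the 2.5x rate are assumptions rather than the paper's exact parameters.

```python
import librosa
import soundfile as sf

def make_spearcon(tts_wav_path: str, out_path: str, rate: float = 2.5) -> None:
    """Time-compress a spoken menu item to produce a spearcon."""
    y, sr = librosa.load(tts_wav_path, sr=None)             # original TTS audio at native sample rate
    spearcon = librosa.effects.time_stretch(y, rate=rate)   # rate > 1 shortens duration without shifting pitch
    sf.write(out_path, spearcon, sr)

make_spearcon("menu_item_settings.wav", "spearcon_settings.wav")
```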