1.
Appl Ergon ; 96: 103510, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34161853

ABSTRACT

While researchers have explored the benefits of adding augmented reality graphics to vehicle displays, the impact of graphic characteristics has not been well researched. In this paper, we consider the impact of augmented reality graphic spatial location and motion, as well as turn direction, traffic presence, and gender, on participant driving and glance behavior and preferences. Twenty-two participants navigated through a simulated environment while using four different graphics. We employed a novel glance allocation analysis to differentiate, with more granularity, the information likely gathered with each glance. Fixed graphics generally resulted in less visual attention and more time scanning for hazards than animated graphics. Finally, the screen-fixed graphic was preferred by participants over all world-relative graphics, suggesting that spatial integration of graphics into the world may not always be necessary in visually complex urban environments like those considered in this study.


Subject(s)
Augmented Reality , Automobile Driving , Humans , Motion
2.
Appl Ergon ; 82: 102969, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31600714

ABSTRACT

Partially automated vehicles present interface design challenges in ensuring the driver remains alert should the vehicle need to hand back control at short notice, but without exposing the driver to cognitive overload. To date, little is known about driver expectations of partial driving automation and whether these affect the information drivers require inside the vehicle. Twenty-five participants were presented with five partially automated driving events in a driving simulator. After each event, a semi-structured interview was conducted. The interview data were coded and analysed using grounded theory. From the results, two groupings of driver expectations were identified: High Information Preference (HIP) and Low Information Preference (LIP) drivers; the information preferences of these two groups differed. LIP drivers did not want detailed information about the vehicle presented to them, but the definition of partial automation means that this kind of information is required for safe use. Hence, the results suggest that careful thought as to how information is presented is required if LIP drivers are to use partial driving automation safely. Conversely, HIP drivers wanted detailed information about the system's status and driving, and were found to be more willing to work with the partial automation and its current limitations. It was evident that the drivers' expectations of the partial automation capability differed, and this affected their information preferences. Hence, this study suggests that HMI designers must account for these differing expectations and preferences to create a safe, usable system that works for everyone.


Subject(s)
Automation , Automobile Driving/psychology , Automobiles , Spatial Navigation , Adolescent , Adult , Attention , Computer Simulation , Equipment Design , Female , Humans , Male , Reaction Time
3.
Appl Ergon ; 78: 184-196, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31046950

ABSTRACT

Touchscreen Human-Machine Interfaces (HMIs) are a well-established and popular choice to provide the primary control interface between driver and vehicle, yet inherently demand some visual attention. Employing a secondary device with the touchscreen may reduce the demand but there is some debate about which device is most suitable, with current manufacturers favouring different solutions and applying these internationally. We present an empirical driving simulator study, conducted in the UK and China, in which 48 participants undertook typical in-vehicle tasks utilising either a touchscreen, rotary-controller, steering-wheel-controls or touchpad. In both the UK and China, the touchscreen was the most preferred/least demanding to use, and the touchpad least preferred/most demanding, whereas the rotary-controller was generally favoured by UK drivers and steering-wheel-controls were more popular in China. Chinese drivers were more excited by the novelty of the technology, and spent more time attending to the devices while driving, leading to an increase in off-road glance time and a corresponding detriment to vehicle control. Even so, Chinese drivers rated devices as easier-to-use while driving, and felt that they interfered less with their driving performance, compared to their UK counterparts. Results suggest that the most effective solution (to maximise performance/acceptance, while minimising visual demand) is to maintain the touchscreen as the primary control interface (e.g. for top-level tasks), and supplement this with a secondary device that is only enabled for certain actions; moreover, different devices may be employed in different cultural markets. Further work is required to explore these recommendations in greater depth (e.g. during extended or real-world testing), and to validate the findings and approach in other cultural contexts.


Subject(s)
Automobile Driving , Automobiles , Consumer Behavior , User-Computer Interface , Adult , Arousal , China , Computer Simulation , Cross-Cultural Comparison , Equipment Design , Female , Humans , Male , Man-Machine Systems , Middle Aged , Pleasure , Surveys and Questionnaires , Task Performance and Analysis , United Kingdom , Workload , Young Adult
4.
Sensors (Basel) ; 17(11)2017 Nov 22.
Article in English | MEDLINE | ID: mdl-29165331

ABSTRACT

Although at present legislation does not allow drivers in a Level 3 autonomous vehicle to engage in a secondary task, there may come a time when it does. Monitoring the behaviour of drivers engaging in various non-driving activities (NDAs) is crucial to deciding how well the driver will be able to take over control of the vehicle. One limitation of the commonly used face-based head tracking system, using cameras, is that sufficient features of the face must be visible, which limits the detectable angle of head movement and thereby the measurable NDAs, unless multiple cameras are used. This paper proposes a novel orientation-sensor-based head tracking system that includes twin devices, one of which measures the movement of the vehicle while the other measures the absolute movement of the head. Measurement errors in the shaking and nodding axes were less than 0.4°, while the error in the rolling axis was less than 2°. Comparison with a camera-based system, through in-house tests and on-road tests, showed that the main advantage of the proposed system is the ability to detect angles larger than 20° in the shaking and nodding axes. Finally, a case study demonstrated that the measurements of the shaking and nodding angles produced by the proposed system can effectively characterise drivers' behaviour while engaged in the NDAs of chatting to a passenger and playing on a smartphone.


Subject(s)
Automobile Driving , Accidents, Traffic , Attention , Face , Head Movements , Humans , Smartphone
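The twin-device idea in the abstract above (one sensor on the vehicle, one on the head) amounts to subtracting the vehicle's measured orientation from the head sensor's absolute orientation, axis by axis. A minimal sketch of that subtraction follows; the Euler-angle (shaking/nodding/rolling) representation, the function name, and the wrapping convention are illustrative assumptions, not the paper's implementation, which involves fuller sensor fusion.

```python
import numpy as np

def relative_head_angles(head_abs_deg, vehicle_abs_deg):
    """Head orientation relative to the vehicle, per axis
    (shaking/yaw, nodding/pitch, rolling), in degrees.

    Subtracts the vehicle-mounted sensor's reading from the
    head-mounted sensor's absolute reading, then wraps each
    axis into [-180, 180) so crossings of 0°/360° behave.
    Illustrative sketch only.
    """
    rel = np.asarray(head_abs_deg, dtype=float) - np.asarray(vehicle_abs_deg, dtype=float)
    return (rel + 180.0) % 360.0 - 180.0
```

For example, an absolute head yaw of 350° with the vehicle at 10° yields a relative shaking angle of -20°, i.e. the head turned 20° left of the vehicle's heading.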
5.
Appl Ergon ; 63: 53-61, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28502406

ABSTRACT

Given the proliferation of 'intelligent' and 'socially-aware' digital assistants embodying everyday mobile technology - and the undeniable logic that utilising voice-activated controls and interfaces in cars reduces the visual and manual distraction of interacting with in-vehicle devices - it appears inevitable that next generation vehicles will be embodied by digital assistants and utilise spoken language as a method of interaction. From a design perspective, defining the language and interaction style that a digital driving assistant should adopt is contingent on the role that they play within the social fabric and context in which they are situated. We therefore conducted a qualitative, Wizard-of-Oz study to explore how drivers might interact linguistically with a natural language digital driving assistant. Twenty-five participants drove for 10 min in a medium-fidelity driving simulator while interacting with a state-of-the-art, high-functioning, conversational digital driving assistant. All exchanges were transcribed and analysed using recognised linguistic techniques, such as discourse and conversation analysis, normally reserved for interpersonal investigation. Language usage patterns demonstrate that interactions with the digital assistant were fundamentally social in nature, with participants affording the assistant equal social status and high-level cognitive processing capability. For example, participants were polite, actively controlled turn-taking during the conversation, and used back-channelling, fillers and hesitation, as they might in human communication. Furthermore, participants expected the digital assistant to understand and process complex requests mitigated with hedging words and expressions, and peppered with vague language and deictic references requiring shared contextual information and mutual understanding. 
Findings are presented in six themes that emerged during the analysis: formulating responses; turn-taking; back-channelling, fillers and hesitation; vague language; mitigating requests; and politeness and praise. The results can be used to inform the design of future in-vehicle natural language systems, in particular to help manage the tension between designing for an engaging dialogue (important for technology acceptance) and designing for an effective dialogue (important to minimise distraction in a driving context).


Subject(s)
Automobile Driving/psychology , Language , Linguistics , Man-Machine Systems , User-Computer Interface , Adult , Communication , Computer Simulation , Female , Humans , Male , Middle Aged
6.
IEEE Trans Cybern ; 46(4): 878-89, 2016 Apr.
Article in English | MEDLINE | ID: mdl-25935053

ABSTRACT

Using interactive displays, such as a touchscreen, in vehicles typically requires dedicating a considerable amount of visual as well as cognitive capacity and undertaking a hand pointing gesture to select the intended item on the interface. This can act as a distractor from the primary task of driving and consequently can have serious safety implications. Due to road and driving conditions, the user input can also be highly perturbed resulting in erroneous selections compromising the system usability. In this paper, we propose intent-aware displays that utilize a pointing gesture tracker in conjunction with suitable Bayesian destination inference algorithms to determine the item the user intends to select, which can be achieved with high confidence remarkably early in the pointing gesture. This can drastically reduce the time and effort required to successfully complete an in-vehicle selection task. In the proposed probabilistic inference framework, the likelihood of all the nominal destinations is sequentially calculated by modeling the hand pointing gesture movements as a destination-reverting process. This leads to a Kalman filter-type implementation of the prediction routine that requires minimal parameter training and has low computational burden; it is also amenable to parallelization. The substantial gains obtained using an intent-aware display are demonstrated using data collected in an instrumented vehicle driven under various road conditions.
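The destination-reverting idea described above can be sketched as a bank of linear-Gaussian (Kalman-type) filters, one per nominal on-screen target, each scoring how well the observed pointing trajectory fits dynamics that revert toward that target; the most likely destination is the one with the highest accumulated log-likelihood. The 1-D state, the reversion rate, and the noise values below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def destination_likelihoods(obs, destinations, alpha=0.3, q=1e-4, r=1e-3):
    """Log-likelihood of a pointing trajectory under each candidate
    destination, using a mean-reverting model x' = x + alpha*(d - x)
    with process noise q and observation noise r (scalar Kalman filter
    per destination). Illustrative sketch; parameters are assumptions."""
    obs = np.asarray(obs, dtype=float)
    logls = np.zeros(len(destinations))
    for i, d in enumerate(destinations):
        m, P = obs[0], r                      # initialise at first observation
        for y in obs[1:]:
            # predict: state reverts toward destination d
            m_pred = m + alpha * (d - m)
            P_pred = (1.0 - alpha) ** 2 * P + q
            # update with the new observation (identity observation model)
            S = P_pred + r                    # innovation variance
            K = P_pred / S                    # Kalman gain
            logls[i] += -0.5 * (np.log(2.0 * np.pi * S) + (y - m_pred) ** 2 / S)
            m = m_pred + K * (y - m_pred)
            P = (1.0 - K) * P_pred
        # a prior over destinations could be added to logls here
    return logls
```

Because the likelihoods are accumulated sequentially, the best-scoring destination can be read off partway through the gesture, which is the source of the early-selection gains the abstract describes; the per-destination filters are also independent, hence the noted amenability to parallelisation.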
