ABSTRACT
OBJECTIVE: The present study investigated how pupil size and heart rate variability (HRV) can contribute to the prediction of operator performance. We illustrate how focusing on mental effort as the conceptual link between physiological measures and task performance can align relevant empirical findings across research domains. BACKGROUND: Physiological measures are often treated as indicators of operators' mental state. As such, they could enable a continuous and unobtrusive assessment of operators' current ability to perform the task. METHOD: Fifty participants performed a process monitoring task consisting of ten 9-minute task blocks. Blocks alternated between low and high task demands, and the last two blocks introduced a task reward manipulation. We measured response times as the primary performance indicator, pupil size and HRV as physiological measures, and mental fatigue, task engagement, and perceived effort as subjective ratings. RESULTS: Both increased pupil size and increased HRV significantly predicted better task performance. However, the underlying associations between physiological measures and performance were influenced by task demands and time on task. Pupil size results, but not HRV results, were consistent with subjective ratings. CONCLUSION: The empirical findings suggest that, by capturing variance in operators' mental effort, physiological measures, specifically pupil size, can contribute to the prediction of task performance. Their predictive value is limited by confounding effects that alter the amount of effort required to achieve a given level of performance. APPLICATION: The outlined conceptual approach and empirical results can guide study designs and performance prediction models that examine physiological measures as the basis for dynamic operator assistance.
ABSTRACT
Training costs for operators of robotic arms in forestry and construction are high. A systematic analysis of skill development can help to make training more efficient. This research focuses on motor skill development by investigating the bimanual control of a four-DoF robotic arm. The two-time scale power law of learning was used to identify difficulties in control learning. Ten participants acquired the control of the robotic arm in a simulator over ten sessions within seven weeks. Eight movement targets were presented in each of six blocks per session, comprising 432 robotic arm movements. The results suggest that learning varies for each joystick axis, with control of the elbow joint showing the highest learning gain. The base and shoulder joints showed similar learning gains. The wrist joint showed mixed results in terms of use or disuse. Performance increased with retention, suggesting that a longer period of consolidation aided skill acquisition. Practitioner summary: A shortage of skilled operators and the costly, extensive training of heavy machine operators in robotic arm control call for revisiting how control skills are learned. This study showed that the focus of training ought to be shifted to specific joints and that training should emphasise longer resting periods between sessions.
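As a toy illustration of fitting a power law of learning to performance data, the sketch below fits a single-timescale power function to a synthetic learning curve. The function name, parameter values, and data are illustrative assumptions, not the two-time scale model or data from this study:

```python
import numpy as np

def fit_power_law(trials, times):
    # Fit T = a * trials**(-b) via linear regression in log-log space.
    log_n, log_t = np.log(trials), np.log(times)
    slope, intercept = np.polyfit(log_n, log_t, 1)
    return np.exp(intercept), -slope  # (a, learning exponent b)

# Synthetic noiseless learning curve: movement time shrinks with practice.
trials = np.arange(1, 433)  # e.g., 432 robotic arm movements
times = 20.0 * trials ** (-0.3)

a_hat, b_hat = fit_power_law(trials, times)
```

A two-time scale variant would instead fit separate fast (within-session) and slow (between-session) components rather than a single exponent.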
ABSTRACT
In this paper, we present a review of how the various aspects of any study using an eye tracker (such as the instrument, methodology, environment, participant, etc.) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines of any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that reporting guidelines vary substantially and do not match actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research (Section "An empirically based minimal reporting guideline").
Subjects
Eye Movements, Eye-Tracking Technology, Humans, Empirical Research
ABSTRACT
Objective This study investigates the neural basis of inattentional deafness, which could result from task irrelevance in the auditory modality. Background Humans can fail to respond to auditory alarms under high workload situations. This failure, termed inattentional deafness, is often attributed to high workload in the visual modality, which reduces one's capacity for information processing. Besides this, our capacity for processing auditory information could also be selectively diminished if there is no obvious task relevance in the auditory channel. This could be another contributing factor given the rarity of auditory warnings. Method Forty-eight participants performed a visuomotor tracking task while auditory stimuli were presented: a frequent pure tone, an infrequent pure tone, and infrequent environmental sounds. Participants were required either to respond to the presentation of the infrequent pure tone (auditory task-relevant) or not (auditory task-irrelevant). We recorded and compared the event-related potentials (ERPs) that were generated by environmental sounds, which were always task-irrelevant for both groups. These ERPs served as an index for our participants' awareness of the task-irrelevant auditory scene. Results Manipulation of auditory task relevance influenced the brain's response to task-irrelevant environmental sounds. Specifically, the late novelty-P3 to irrelevant environmental sounds, which underlies working memory updating, was found to be selectively enhanced by auditory task relevance independent of visuomotor workload. Conclusion Task irrelevance in the auditory modality selectively reduces our brain's responses to unexpected and irrelevant sounds regardless of visuomotor workload. Application Presenting relevant auditory information more often could mitigate the risk of inattentional deafness.
Subjects
Auditory Perception/physiology, Cerebral Cortex/physiology, Evoked Potentials/physiology, Motor Activity/physiology, Psychomotor Performance/physiology, Visual Perception/physiology, Adult, Electroencephalography, Event-Related Potentials, P300/physiology, Humans
ABSTRACT
Before initiating a saccade to a moving target, the brain must take into account the target's eccentricity as well as its movement direction and speed. We tested how the kinematic characteristics of the target influence the time course of this oculomotor response. Participants performed a step-ramp task in which the target object stepped from a central to an eccentric position and moved at constant velocity either to the fixation position (foveopetal) or further to the periphery (foveofugal). The step size and target speed were varied. Of particular interest were trials that exhibited an initial saccade prior to a smooth pursuit eye movement. Measured saccade reaction times were longer in the foveopetal than in the foveofugal condition. In the foveopetal (but not the foveofugal) condition, the occurrence of an initial saccade, its reaction time as well as the strength of the pre-saccadic pursuit response depended on both the target's speed and the step size. A common explanation for these results may be found in the neural mechanisms that select between oculomotor response alternatives, i.e., a saccadic or smooth response.
Subjects
Brain/physiology, Functional Laterality/physiology, Pursuit, Smooth/physiology, Reaction Time/physiology, Saccades/physiology, Adult, Analysis of Variance, Biomechanical Phenomena, Female, Humans, Male, Motion Perception/physiology, Physical Stimulation, Young Adult
ABSTRACT
We investigate how smooth pursuit eye movements affect the latencies of task-switching saccades. Participants had to alternate their foveal vision between a continuous pursuit task in the display center and a discrete object discrimination task in the periphery. The pursuit task was either carried out by following the target with the eyes only (ocular) or by steering an on-screen cursor with a joystick (oculomanual). We measured participants' saccadic reaction times (SRTs) when foveal vision was shifted from the pursuit task to the discrimination task and back to the pursuit task. Our results show asymmetries in SRTs depending on the movement direction of the pursuit target: SRTs were generally shorter in the direction of pursuit. Specifically, SRTs from the pursuit target were shorter when the discrimination object appeared in the motion direction. SRTs to pursuit were shorter when the pursuit target moved away from the current fixation location. This result was independent of the type of smooth pursuit behavior that was performed by participants (ocular/oculomanual). The effects are discussed in regard to asymmetries in attention and processes that suppress saccades at the onset of pursuit.
Subjects
Psychomotor Performance/physiology, Reaction Time/physiology, Saccades/physiology, Adult, Analysis of Variance, Attention/physiology, Data Interpretation, Statistical, Discrimination, Psychological/physiology, Female, Fixation, Ocular, Humans, Male, Photic Stimulation, Young Adult
ABSTRACT
By orienting attention, auditory cues can improve the discrimination of spatially congruent visual targets. Looming sounds that increase in intensity are processed preferentially by the brain. Thus, we investigated whether auditory looming cues can orient visuo-spatial attention more effectively than static and receding sounds. Specifically, different auditory cues could redirect attention away from a continuous central visuo-motor tracking task to peripheral visual targets that appeared occasionally. To investigate the time course of crossmodal cuing, Experiment 1 presented visual targets at different time-points across a 500 ms auditory cue's presentation. No benefits were found for simultaneous audio-visual cue-target presentation. The largest crossmodal benefit occurred at an early cue-target onset asynchrony (CTOA = 250 ms), regardless of auditory cue type, and it diminished at CTOA = 500 ms for static and receding cues. However, auditory looming cues showed a late crossmodal cuing benefit at CTOA = 500 ms. Experiment 2 showed that this late auditory looming cue benefit was independent of the cue's intensity when the visual target appeared. Thus, we conclude that the late crossmodal benefit throughout an auditory looming cue's presentation is due to its increasing intensity profile. The neural basis for this benefit and its ecological implications are discussed.
Subjects
Attention/physiology, Auditory Perception/physiology, Reaction Time/physiology, Space Perception/physiology, Acoustic Stimulation, Female, Humans, Male, Orientation, Spatial/physiology, Photic Stimulation
ABSTRACT
There is a growing interest in eye tracking technologies applied to support traditional visualization techniques like diagrams, charts, maps, or plots, whether static, animated, or interactive. More complex data analyses are required to derive knowledge and meaning from the data. Eye tracking systems serve that purpose in combination with biological and computer vision, cognition, perception, visualization, human-computer interaction, as well as usability and user experience research. The 10 articles collected in this thematic special issue provide interesting examples of how sophisticated methods of data analysis and representation enable researchers to discover and describe fundamental spatio-temporal regularities in the data. The human visual system, supported by appropriate visualization tools, enables the human operator to solve complex tasks, like understanding and interpreting three-dimensional medical images, controlling air traffic by radar displays, supporting instrument flight tasks, or interacting with virtual realities. The development and application of new visualization techniques is of major importance for future technological progress.
ABSTRACT
The current study investigates the demands that steering places on mental resources. Instead of a conventional dual-task paradigm, participants of this study were only required to perform a steering task while task-irrelevant auditory distractor probes (environmental sounds and beep tones) were intermittently presented. The event-related potentials (ERPs), which were generated by these probes, were analyzed for their sensitivity to the steering task's demands. The steering task required participants to counteract unpredictable roll disturbances, and difficulty was manipulated either by adjusting the bandwidth of the roll disturbance or by varying the complexity of the control dynamics. A mass univariate analysis revealed that steering selectively diminishes the amplitudes of early P3, late P3, and the re-orientation negativity (RON) to task-irrelevant environmental sounds but not to beep tones. Our findings are in line with a three-stage distraction model, which interprets these ERPs to reflect the post-sensory detection of the task-irrelevant stimulus, engagement, and re-orientation back to the steering task. This interpretation is consistent with our manipulations of steering difficulty. More participants showed diminished amplitudes for these ERPs in the "hard" steering condition relative to the "easy" condition. To sum up, the current work identifies the spatiotemporal ERP components of task-irrelevant auditory probes that are sensitive to steering demands on mental resources. This provides a non-intrusive method for evaluating mental workload in novel steering environments.
ABSTRACT
Video-based gaze-tracking systems are typically restricted in terms of their effective tracking space. This constraint limits the use of eyetrackers in studying mobile human behavior. Here, we compare two possible approaches for estimating the gaze of participants who are free to walk in a large space whilst looking at different regions of a large display. Geometrically, we linearly combined eye-in-head rotations and head-in-world coordinates to derive a gaze vector and its intersection with a planar display, relying on a head-mounted eyetracker and a body-motion tracker. Alternatively, we employed Gaussian process regression to estimate the gaze intersection directly from the input data itself. Our evaluation of both methods indicates that a regression approach can deliver comparable results to a geometric approach. The regression approach is favored, given that it has the potential for further optimization, provides confidence bounds for its gaze estimates, and offers greater flexibility in its implementation. Open-source software for the methods reported here is also provided for user implementation.
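The geometric approach described above (rotating the eye-in-head gaze direction into world coordinates and intersecting the resulting ray with the display plane) reduces to a standard ray-plane intersection. The following minimal sketch is an illustrative assumption about that computation, not the authors' released software; all names and the toy geometry are hypothetical:

```python
import numpy as np

def gaze_on_display(eye_dir_head, head_rot, eye_pos, plane_point, plane_normal):
    """Intersect the world-frame gaze ray with a planar display.

    eye_dir_head : unit gaze direction in the head frame (head-mounted eyetracker)
    head_rot     : 3x3 head-to-world rotation matrix (body-motion tracker)
    eye_pos      : eye position in world coordinates
    """
    d = head_rot @ eye_dir_head                  # gaze direction in world frame
    denom = d @ plane_normal
    if abs(denom) < 1e-9:
        return None                              # gaze ray parallel to the display
    t = ((plane_point - eye_pos) @ plane_normal) / denom
    return eye_pos + t * d if t > 0 else None    # point of regard on the display

# Observer 2 m in front of a display lying in the z = 0 plane, looking straight ahead.
poi = gaze_on_display(np.array([0.0, 0.0, -1.0]), np.eye(3),
                      np.array([0.0, 1.5, 2.0]),
                      np.zeros(3), np.array([0.0, 0.0, 1.0]))
```

The regression alternative would instead learn the mapping from (eye, head) inputs to on-screen coordinates directly, e.g. with a Gaussian process regressor.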
ABSTRACT
In this paper, we investigate the effect of haptic cueing on a human operator's performance in the field of bilateral teleoperation of multiple mobile robots, particularly multiple unmanned aerial vehicles (UAVs). Two aspects of human performance are deemed important in this area, namely, the maneuverability of mobile robots and the perceptual sensitivity of the remote environment. We introduce metrics that allow us to address these aspects in two psychophysical studies, which are reported here. Three fundamental haptic cue types were evaluated. The Force cue conveys information on the proximity of the commanded trajectory to obstacles in the remote environment. The Velocity cue represents the mismatch between the commanded and actual velocities of the UAVs and can implicitly provide a rich amount of information regarding the actual behavior of the UAVs. Finally, the Velocity+Force cue is a linear combination of the two. Our experimental results show that, while maneuverability is best supported by the Force cue feedback, perceptual sensitivity is best served by the Velocity cue feedback. In addition, we show that large gains in the haptic feedback do not always guarantee an enhancement in the teleoperator's performance.
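The three cue types lend themselves to a simple formulation: a force term driven by obstacle proximity, a velocity term driven by the mismatch between commanded and actual velocity, and their weighted sum. The sketch below is a hypothetical illustration; the gains, the activation distance, and the functional forms are assumptions, not the controllers used in the study:

```python
import numpy as np

def velocity_cue(v_cmd, v_actual, gain=1.0):
    # Cue proportional to the mismatch between commanded and actual UAV velocity.
    return gain * (np.asarray(v_cmd, float) - np.asarray(v_actual, float))

def force_cue(obstacle_vec, d0=2.0, gain=1.0):
    # Repulsive cue that grows as the commanded trajectory nears an obstacle.
    # obstacle_vec points from the UAV toward the nearest obstacle (world frame).
    d = np.linalg.norm(obstacle_vec)
    if d >= d0 or d < 1e-9:
        return np.zeros(3)                           # outside activation range
    return -gain * (d0 - d) / d0 * obstacle_vec / d  # push away from the obstacle

def combined_cue(obstacle_vec, v_cmd, v_actual, kf=1.0, kv=1.0):
    # Velocity+Force cue: a linear combination of the two feedback terms.
    return kf * force_cue(obstacle_vec) + kv * velocity_cue(v_cmd, v_actual)
```

The study's finding that large gains do not always help would correspond here to choosing `kf` and `kv` carefully rather than maximizing them.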
Subjects
Cybernetics/methods, Robotics/instrumentation, Robotics/methods, Telecommunications, Touch/physiology, Adult, Equipment Design, Female, Humans, Male, Psychophysics, Young Adult
ABSTRACT
There is evidence that observers use learned object motion to recognize objects. For instance, studies have shown that reversing the learned direction in which a rigid object rotated in depth impaired recognition accuracy. This motion reversal can be achieved by playing animation sequences of moving objects in reverse frame order. In the current study, we used this sequence-reversal manipulation to investigate whether observers encode the motion of dynamic objects in visual memory, and whether such dynamic representations are encoded in a way that is dependent on the viewing conditions. Participants first learned dynamic novel objects, presented as animation sequences. Following learning, they were then tested on their ability to recognize these learned objects when their animation sequence was shown in the same sequence order as during learning or in the reverse sequence order. In Experiment 1, we found that non-rigid motion contributed to recognition performance; that is, sequence-reversal decreased sensitivity across different tasks. In subsequent experiments, we tested the recognition of non-rigidly deforming (Experiment 2) and rigidly rotating (Experiment 3) objects across novel viewpoints. Recognition performance was affected by viewpoint changes for both experiments. Learned non-rigid motion continued to contribute to recognition performance and this benefit was the same across all viewpoint changes. By comparison, learned rigid motion did not contribute to recognition performance. These results suggest that non-rigid motion provides a source of information for recognizing dynamic objects, which is not affected by changes to viewpoint.
ABSTRACT
Recent studies provide evidence for task-specific influences on saccadic eye movements. For instance, saccades exhibit higher peak velocity when the task requires coordinating eye and hand movements. The current study shows that the need to process task-relevant visual information at the saccade endpoint can be, in itself, sufficient to cause such effects. In this study, participants performed a visual discrimination task which required a saccade for successful completion. We compared the characteristics of these task-related saccades to those of classical target-elicited saccades, which required participants to fixate a visual target without performing a discrimination task. The results show that task-related saccades are faster and initiated earlier than target-elicited saccades. Differences between both saccade types are also noted in their saccade reaction time distributions and their main sequences, i.e., the relationship between saccade velocity, duration, and amplitude.