Results 1 - 20 of 55
1.
Article in English | MEDLINE | ID: mdl-39289308

ABSTRACT

PURPOSE: To determine the diagnostic performance and reliability of two pupil perimetry (PP) methods in homonymous hemianopia. METHODS: This cross-sectional monocenter cohort study performed gaze-contingent flicker PP (gcFPP) and a virtual reality version of gcFPP (VRgcFPP) twice on separate occasions in all patients suffering from homonymous hemianopia due to neurological impairment. The main outcomes were (1) test accuracy and (2) test-retest reliability: (1) was measured through calculation of the area under the receiver operating characteristic curve (AUC) of (VR)gcFPP results, with standard automated perimetry (SAP) and healthy controls as the respective comparators; (2) was evaluated by comparing tests 1 and 2 of both methods within patients. RESULTS: Both gcFPP and VRgcFPP were performed in 15 patients (12 males, M_age = 57, SD_age = 15) and 17 controls (6 males, M_age = 53, SD_age = 12). Mean test accuracy was good in separating damaged from intact visual field regions (gcFPP: M_AUC = 0.83, SD_AUC = 0.09; VRgcFPP: M_AUC = 0.69, SD_AUC = 0.13) and in separating patients from controls (gcFPP: M_AUC = 0.92, SD_AUC = 0.13; VRgcFPP: M_AUC = 0.96, SD_AUC = 0.15). A high test-retest reliability was found for the proportion of intact versus damaged visual field (gcFPP: r = 0.95, P < .001; VRgcFPP: r = 1.00, P < .001). CONCLUSIONS: These results can be summarized as follows: (1) the comparison of pupil response amplitudes between intact and damaged regions per patient indicates that gcFPP allows for cleaner imaging of intact versus damaged visual field regions than VRgcFPP, (2) the comparisons of average differences in intact versus damaged amplitudes between patients and controls demonstrate high diagnostic performance of both gcFPP and VRgcFPP, and (3) the test-retest reliabilities confirm that both gcFPP and VRgcFPP reliably and consistently measure defects in homonymous hemianopia.
KEY MESSAGES: What is known: Standard automated perimetry is the current gold standard for visual field examination, but it is not always suited for evaluating the visual field in neurologically impaired patients. Pupil perimetry consists of the measurement of pupillary responses to light stimuli as a measure of visual sensitivity. What is new: This study reports the highest diagnostic accuracy of pupil perimetry so far in patients with homonymous hemianopia. Gaze-contingent flicker pupil perimetry reliably and consistently measures defects in homonymous hemianopia under both standard and virtual reality viewing conditions.
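As an aside on the AUC outcome measure used in this entry: with per-region pupil response amplitudes, the area under the ROC curve reduces to the rank-based Mann-Whitney statistic. A minimal sketch with invented amplitude values (not the study's data or code):

```python
import numpy as np

def roc_auc(pos, neg):
    """Mann-Whitney AUC: probability that a random positive sample
    outscores a random negative one; ties count half."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)

# Invented pupil response amplitudes for intact vs. damaged field regions.
intact = [0.31, 0.28, 0.35, 0.22, 0.30]
damaged = [0.12, 0.18, 0.25, 0.15, 0.20]
auc = roc_auc(intact, damaged)   # 0.96: intact amplitudes nearly always larger
```

An AUC of 0.5 means chance-level separation; 1.0 means the two amplitude distributions do not overlap at all.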

2.
Behav Res Methods ; 56(7): 6904-6914, 2024 10.
Article in English | MEDLINE | ID: mdl-38632165

ABSTRACT

Remote photoplethysmography (rPPG) is a low-cost technique for measuring physiological parameters such as heart rate by analyzing videos of a person. This technique has attracted growing attention due to the increased possibilities and demand for running psychological experiments on online platforms. Technological advancements in commercially available cameras and video processing algorithms have led to significant progress in this field. However, despite these advancements, past research indicates that suboptimal video recording conditions can severely compromise the accuracy of rPPG. In this study, we aimed to develop an open-source rPPG methodology and test its performance on videos collected via an online platform, without control over participants' hardware or contextual variables such as illumination, distance, and motion. Across two experiments, we compared the results of the rPPG extraction methodology to a validated dataset used for rPPG testing. We then collected 231 online video recordings and compared the results of the rPPG extraction to finger pulse oximeter data acquired with a validated mobile heart rate application. Results indicated that the rPPG algorithm was highly accurate, showing a significant degree of convergence with both datasets, thus providing an improved tool for recording and analyzing heart rate in online experiments.
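The core rPPG principle described above can be sketched in a few lines: average the green channel over a face region per frame, then read the heart rate off the dominant frequency in the plausible cardiac band. This is a generic illustration of the technique, not the authors' published pipeline; the synthetic trace stands in for real video data:

```python
import numpy as np

def estimate_bpm(green_means, fps, lo=0.7, hi=3.0):
    """Estimate heart rate (BPM) from a per-frame mean green-channel trace.

    lo/hi bound the plausible cardiac band in Hz (42-180 BPM).
    """
    sig = np.asarray(green_means, float)
    sig = sig - sig.mean()                       # remove the DC component
    spectrum = np.abs(np.fft.rfft(sig))          # magnitude spectrum
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)         # restrict to the cardiac band
    return freqs[band][np.argmax(spectrum[band])] * 60.0

# Synthetic stand-in for a face video: a 1.2 Hz (72 BPM) pulse buried in noise.
fps, secs = 30, 20
t = np.arange(fps * secs) / fps
rng = np.random.default_rng(0)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0.0, 1.0, t.size)
bpm = estimate_bpm(trace, fps)                   # recovers roughly 72 BPM
```

In practice the hard part is exactly what the abstract highlights: face tracking, illumination changes, and motion artifacts degrade the trace long before this spectral step.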


Subject(s)
Algorithms , Heart Rate , Photoplethysmography , Video Recording , Photoplethysmography/methods , Humans , Heart Rate/physiology , Video Recording/methods , Adult , Male , Female , Young Adult , Internet
3.
Psychol Sci ; 34(8): 887-898, 2023 08.
Article in English | MEDLINE | ID: mdl-37314425

ABSTRACT

Attention can be shifted with or without an accompanying saccade (i.e., overtly or covertly, respectively). Thus far, it is unknown how cognitively costly these shifts are, yet such quantification is necessary to understand how and when attention is deployed overtly or covertly. In our first experiment (N = 24 adults), we used pupillometry to show that shifting attention overtly is more costly than shifting attention covertly, likely because planning saccades is more complex. We pose that these differential costs will, in part, determine whether attention is shifted overtly or covertly in a given context. A subsequent experiment (N = 24 adults) showed that relatively complex oblique saccades are more costly than relatively simple saccades in horizontal or vertical directions. This provides a possible explanation for the cardinal-direction bias of saccades. The utility of a cost perspective as presented here is vital to furthering our understanding of the multitude of decisions involved in processing and interacting with the external world efficiently.


Subject(s)
Attention , Saccades , Humans , Adult , Salaries and Fringe Benefits
4.
J Vis ; 23(6): 9, 2023 Jun 01.
Article in English | MEDLINE | ID: mdl-37318440

ABSTRACT

What determines how much one encodes into visual working memory? Traditionally, encoding depth is considered to be indexed by spatiotemporal properties of gaze, such as gaze position and dwell time. Although these properties indicate where and for how long one looks, they do not necessarily reveal the current arousal state or how strongly attention is deployed to facilitate encoding. Here, we found that two types of pupillary dynamics predict how much information is encoded during a copy task. The task involved encoding a spatial pattern of multiple items for later reproduction. Results showed that smaller baseline pupil sizes preceding encoding and stronger pupil orienting responses during encoding predicted that more information was encoded into visual working memory. Additionally, we show that pupil size reflects not only how much but also how precisely material is encoded. We argue that a smaller pupil size preceding encoding is related to increased exploitation, whereas larger pupil constrictions signal stronger attentional (re)orienting to the to-be-encoded pattern. Our findings support the notion that the depth of visual working memory encoding is the integrative outcome of differential aspects of attention: how alert one is, how much attention one deploys, and how long it is deployed. Together, these factors determine how much information is encoded into visual working memory.


Subject(s)
Attention , Memory, Short-Term , Humans , Memory, Short-Term/physiology , Attention/physiology , Pupil/physiology
5.
Cogn Emot ; 37(6): 1105-1115, 2023.
Article in English | MEDLINE | ID: mdl-37395739

ABSTRACT

For human interaction, it is important to understand the emotional states of others. The observation of faces in particular helps us put behaviour into context and gives insight into others' emotions and mental states. Detecting whether someone is nervous, a form of state anxiety, is one such example, as it reveals a person's familiarity and contentment with the circumstances. Using recent developments in computer vision, we developed behavioural nervousness models to show which time-varying facial cues reveal whether someone is nervous in an interview setting. The facial changes reflecting a state of anxiety led to more visual exposure and less chemosensory (taste and olfaction) exposure. However, experienced observers had difficulty picking up these changes and therefore failed to detect nervousness levels accurately. This study highlights humans' limited capacity for determining complex emotional states, but at the same time provides an automated model that can assist us in achieving fair assessments of so-far unexplored emotional states.


Subject(s)
Anxiety , Emotions , Humans , Anxiety/psychology , Emotions/physiology , Anxiety Disorders , Smell , Facial Expression , Computers
6.
Behav Res Methods ; 2023 Dec 11.
Article in English | MEDLINE | ID: mdl-38082113

ABSTRACT

Pupil size change is a widely adopted, sensitive indicator of sensory and cognitive processes. However, the interpretation of these changes is complicated by the influence of multiple low-level effects, such as brightness or contrast changes, posing challenges to applying pupillometry outside of tightly controlled settings. Building on and extending previous models, we here introduce Open Dynamic Pupil Size Modeling (Open-DPSM), an open-source toolkit to model pupil size changes to dynamically changing visual inputs using a convolution approach. Open-DPSM incorporates three key steps: (1) modeling pupillary responses to both luminance and contrast changes; (2) weighting the distinct contributions of visual events across the visual field on pupil size change; and (3) incorporating gaze-contingent visual event extraction and modeling. These steps improve the prediction of pupil size changes beyond the benchmarks evaluated here. Open-DPSM provides Python functions as well as a graphical user interface (GUI), enabling its application to versatile scenarios and adaptation to individual needs. By obtaining a predicted pupil trace using video and eye-tracking data, users can mitigate the effects of low-level features by subtracting the predicted trace, or assess the efficacy of low-level feature manipulations a priori by comparing estimated traces across conditions.
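The convolution approach at the core of such models can be illustrated as follows: visual events are represented as an impulse train (signed by whether they should constrict or dilate the pupil) and convolved with a pupil response function. The Erlang-gamma kernel and its parameters (n = 10.1, t_max = 0.93 s) follow Hoeks and Levelt's classic parameterization, a common default in this literature; Open-DPSM's actual kernel, event weighting, and fitting procedure differ in detail:

```python
import numpy as np

def pupil_kernel(fs, n=10.1, t_max=0.93, dur=4.0):
    """Erlang-gamma pupil response function, peak-normalized to 1."""
    t = np.arange(0.0, dur, 1.0 / fs)
    h = t ** n * np.exp(-n * t / t_max)   # peaks at t = t_max
    return h / h.max()

def predict_pupil(events, fs):
    """Convolve a per-sample event train with the kernel.

    Convention here: luminance increases are negative impulses
    (constriction), luminance decreases positive (dilation).
    """
    full = np.convolve(events, pupil_kernel(fs))
    return full[: len(events)]

fs = 50                       # samples per second
events = np.zeros(fs * 10)    # a 10-second trace
events[fs * 2] = -1.0         # brightening event at t = 2 s
events[fs * 6] = 0.5          # dimming event at t = 6 s
trace = predict_pupil(events, fs)   # constriction peaks near t = 2.93 s
```

Subtracting such a predicted trace from measured pupil size is one way to remove low-level visual confounds before interpreting the residual cognitively.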

7.
J Vis ; 22(9): 7, 2022 08 01.
Article in English | MEDLINE | ID: mdl-35998063

ABSTRACT

To this day, the most popular method of choice for testing visual field defects (VFDs) is subjective standard automated perimetry. However, a need has arisen for an objective and less time-consuming method. Pupil perimetry (PP), which uses pupil responses to onsets of bright stimuli as indications of visual sensitivity, fulfills these requirements. It is currently unclear which PP method most accurately detects VFDs. Hence, the purpose of this study was to compare three PP methods for measuring pupil responsiveness. Unifocal (UPP), flicker (FPP), and multifocal PP (MPP) were compared by monocularly testing the inner 60 degrees of vision at 44 wedge-shaped locations. The visual field (VF) sensitivity of 18 healthy adult participants (mean age ± SD: 23.7 ± 3.0 years) was assessed under three artificially simulated scotoma conditions, in which the stimulus was absent or only partially present (quadrantanopia, and scotomas of 20- and 10-degree diameter), for approximately 4.5 minutes each. In all methods, stimuli that were fully present on the screen evoked the strongest pupil responses, partially present stimuli evoked weaker responses, and absent stimuli evoked the weakest responses. However, the pupil responses in FPP showed stronger discriminative power for present versus absent trials (median d-prime = 6.26 ± 2.49, area under the curve [AUC] = 1.0 ± 0), whereas MPP performed better for fully present versus partially present trials (median d-prime = 1.19 ± 0.62, AUC = 0.80 ± 0.11). We conducted the first in-depth comparison of three PP methods. Gaze-contingent FPP had the best discriminative power for large (absolute) scotomas, whereas MPP performed slightly better with small (relative) scotomas.
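For reference, the d-prime index reported above is the difference between the z-transformed hit and false-alarm rates. A sketch with invented trial counts (the log-linear correction avoids infinite z-scores at rates of 0 or 1):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)          # log-linear correction
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return z(hit_rate) - z(fa_rate)

# Invented counts for one observer detecting present vs. absent stimuli.
dp = d_prime(hits=38, misses=2, false_alarms=3, correct_rejections=37)  # ~2.9
```

A d-prime of 0 means the observer cannot distinguish present from absent trials at all; values above about 4 indicate near-perfect separation.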


Subject(s)
Pupil , Visual Field Tests , Adult , Humans , Pupil/physiology , Scotoma , Vision Disorders/diagnosis , Visual Field Tests/methods , Visual Fields
8.
Conscious Cogn ; 87: 103057, 2021 01.
Article in English | MEDLINE | ID: mdl-33307426

ABSTRACT

The content of visual working memory (VWM) influences access to visual awareness. Thus far, research has focused on the retention of a single feature, whereas memoranda in real life typically contain multiple features. Here, we intermixed a delayed match-to-sample task, to manipulate VWM content, with a breaking continuous flash suppression (b-CFS) task, to measure prioritization for visual awareness. Observers memorized either the color (Exp. 1), the shape (Exp. 2), or both features (Exp. 3) of an item and indicated the location of a suppressed target. We observed that color-matching targets broke suppression faster than color-mismatching targets, regardless of whether color was memory-relevant or irrelevant. Shape only impacted priority for visual awareness through an interaction with color. We conclude that (1) VWM can regulate the priority of visual information to access visual awareness along a single feature dimension; (2) different features of a memorandum vary in their potency to impact access to visual awareness, and the more dominant feature may even suppress the effect of the less dominant feature; and (3) even stimuli that match an irrelevant feature dimension of the memorandum can be prioritized for visual awareness.


Subject(s)
Memory, Short-Term , Visual Perception , Humans
9.
J Vis ; 20(10): 16, 2020 10 01.
Article in English | MEDLINE | ID: mdl-33057622

ABSTRACT

Before looking at or reaching for an object, the focus of attention is first allocated to the movement target. Here we investigated whether the strength of these pre-motor shifts of attention accumulates when an object is targeted by multiple effectors (eyes and hands). A total of 29 participants were tested on a visuomotor task. They were cued to move their gaze, left hand, right hand, or a combination thereof (one to three effectors) to a common object or to different peripheral objects. Before the movements, eight possible objects briefly changed form, of which one was a distinct probe. Results showed that average recognition of the probe's identity change increased as more effectors targeted this object. For example, performance was higher when two hands rather than one hand were moved to the probe. This effect remained evident despite the detrimental effect on performance of the increased motor task complexity of moving two hands as compared to one. The accumulation of recognition improvements as a function of the number of effectors that successfully target the probe points to parallel and presumably independent mechanisms for hand and eye coordination that evoke pre-motor shifts of attention.


Subject(s)
Eye Movements/physiology , Hand/physiology , Visual Perception/physiology , Attention/physiology , Female , Functional Laterality/physiology , Humans , Male , Psychomotor Performance/physiology , Young Adult
10.
J Vis ; 19(11): 9, 2019 09 03.
Article in English | MEDLINE | ID: mdl-31532471

ABSTRACT

Both visual working memory (VWM) and visual saliency influence sensory processing, as is evident from research on visual attention and visual awareness. It is generally observed that items that are memorized or salient receive priority in visual search and in the access to awareness. Here we investigated whether these two factors interact and together boost access to visual awareness more than each factor does independently. In the present experiment, we manipulated the VWM relevance and saliency of an item through a color memorization task and color uniqueness, respectively. We applied continuous flash suppression (CFS) to suppress items from visual awareness. The color of the suppressed items could be either congruent or incongruent with the memorized color, and the items either stood out from the surrounding distractors (salient pop-out) or not. The item's priority for visual awareness was quantified as the time it took the item to "break" into awareness. We first show that VWM relevance and visual saliency each shortened the time needed for an item to access awareness. More interestingly, the combined effect of VWM and visual saliency was additive; that is, items that were both congruent and salient broke into visual awareness even faster. A race model further suggests that the interaction between these two mechanisms can be explained by statistical facilitation. Thus, VWM and saliency influence the priority to access visual awareness independently.
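The statistical-facilitation (race model) account mentioned above is easy to demonstrate: if two independent processes race to trigger breakthrough, the winner's finishing time is on average faster than either process alone, without any interaction between them. The breakthrough-time distributions below are arbitrary illustrations, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical breakthrough times (s) for each route racing independently.
t_vwm = rng.normal(2.0, 0.4, n)      # VWM-congruence route alone
t_sal = rng.normal(2.0, 0.4, n)      # saliency route alone
t_race = np.minimum(t_vwm, t_sal)    # whichever route finishes first wins

# The race mean is reliably faster than either route alone
# (analytically, by sigma / sqrt(pi) ~ 0.23 s for two i.i.d. normals).
```

Race-model tests in this literature compare the observed redundant-condition distribution against such a minimum-of-two bound to decide whether facilitation exceeds what independent races can produce.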


Subject(s)
Awareness/physiology , Memory, Short-Term/physiology , Visual Perception/physiology , Adult , Analysis of Variance , Attention/physiology , Biometry , Color , Female , Humans , Male , Photic Stimulation , Sensation/physiology , Young Adult
11.
Behav Res Methods ; 51(5): 2106-2119, 2019 10.
Article in English | MEDLINE | ID: mdl-31152386

ABSTRACT

Recent developments in computer science and digital image processing have enabled the extraction of an individual's heart pulsations from pixel changes in recorded video images of human skin surfaces. This method is termed remote photoplethysmography (rPPG) and can be achieved with consumer-level cameras (e.g., a webcam or mobile camera). The goal of the present publication is twofold. First, we aim to organize future rPPG software developments in a tractable and nontechnical manner, such that the public gains access to basic open-source rPPG code, comes to understand its utility, and can follow its most recent progressions. The second goal is to investigate rPPG's accuracy in detecting heart rates from the skin surfaces of several body parts after physical exercise and under ambient lighting conditions with a consumer-level camera. We report that rPPG is highly accurate when the camera is aimed at facial skin tissue, but that heart rate recordings from wrist regions are less reliable, and recordings from the calves are unreliable. Facial rPPG remained accurate despite the high heart rates after exercise. The proposed research procedures and the experimental findings provide guidelines for future studies on rPPG.


Subject(s)
Heart Rate , Adolescent , Adult , Algorithms , Animals , Cattle , Face , Female , Humans , Male , Photoplethysmography , Video Recording , Young Adult
12.
J Vis ; 18(6): 6, 2018 06 01.
Article in English | MEDLINE | ID: mdl-30029217

ABSTRACT

It is commonly assumed that one eye is dominant over the other eye. Eye dominance is most frequently determined using the hole-in-the-card test. However, it is currently unclear whether eye dominance as determined by the hole-in-the-card test (so-called sighting eye dominance) generalizes to tasks involving interocular conflict (engaging sensory eye dominance). We therefore investigated whether sighting eye dominance is linked to sensory eye dominance in several frequently used paradigms that involve interocular conflict. Eye dominance was measured with the hole-in-the-card test, binocular rivalry, and breaking continuous flash suppression (b-CFS). Relationships between differences in eye dominance were assessed using Bayesian statistics. Strikingly, none of the three interocular conflict tasks yielded a difference in perceptual report between eyes when comparing the dominant eye with the nondominant eye as determined by the hole-in-the-card test. From this, we conclude that sighting eye dominance is different from sensory eye dominance. Interestingly, eye dominance of onset rivalry correlated with that of ongoing rivalry, but not with that of b-CFS. Hence, we conclude that b-CFS reflects a different form of eye dominance than onset and ongoing rivalry. In sum, eye dominance seems to be a multifaceted phenomenon that is expressed differently across interocular conflict paradigms. Finally, we strongly discourage using tests of sighting eye dominance to determine the dominant eye in a subsequent experiment involving interocular conflict. Rather, we recommend that whenever experimental manipulations require a priori knowledge of eye dominance, eye dominance should be determined using pretrials of the same task that will be used in the main experiment.


Subject(s)
Dominance, Ocular/physiology , Generalization, Response/physiology , Photic Stimulation , Vision Disparity/physiology , Vision, Binocular/physiology , Adolescent , Adult , Bayes Theorem , Biometry , Decision Making , Female , Humans , Male , Visual Perception/physiology , Young Adult
13.
Behav Res Methods ; 49(4): 1303-1309, 2017 08.
Article in English | MEDLINE | ID: mdl-27631989

ABSTRACT

In several research contexts it is important to obtain eye-tracking measures while presenting visual stimuli independently to each of the two eyes (dichoptic stimulation). However, the hardware that allows dichoptic viewing, such as mirrors, often interferes with high-quality eye tracking, especially when using a video-based eye tracker. Here we detail an approach to combining mirror-based dichoptic stimulation with video-based eye tracking, centered on the fact that some mirrors, although they reflect visible light, are selectively transparent to the infrared wavelength range in which eye trackers record their signal. Although the method we propose is straightforward, affordable (on the order of US$1,000) and easy to implement, for many purposes it makes for an improvement over existing methods, which tend to require specialized equipment and often compromise on the quality of the visual stimulus and/or the eye tracking signal. The proposed method is compatible with standard display screens and eye trackers, and poses no additional limitations on the quality or nature of the stimulus presented or the data obtained. We include an evaluation of the quality of eye tracking data obtained using our method, and a practical guide to building a specific version of the setup used in our laboratories.


Subject(s)
Eye Movements/physiology , Photic Stimulation/methods , Humans
14.
Proc Natl Acad Sci U S A ; 115(50): E11565, 2018 12 11.
Article in English | MEDLINE | ID: mdl-30487227

Subject(s)
Pupil , Trust , Biomimetics
15.
Proc Natl Acad Sci U S A ; 110(50): 20046-50, 2013 Dec 10.
Article in English | MEDLINE | ID: mdl-24277821

ABSTRACT

Imitation typically occurs in social contexts where people interact and have common goals. Here, we show that people are also highly susceptible to imitate each other in a competitive context. Pairs of players performed a competitive and fast-reaching task (a variant of the arcade whac-a-mole game) in which money could be earned if players hit brief-appearing visual targets on a large touchscreen before their opponents. In three separate experiments, we demonstrate that reaction times and movements were highly correlated within pairs of players. Players affected their success by imitating each other, and imitation depended on the visibility of the opponent's behavior. Imitation persisted, despite the competitive and demanding nature of the game, even if this resulted in lower scores and payoffs and even when there was no need to counteract the opponent's actions.


Subject(s)
Competitive Behavior/physiology , Imitative Behavior/physiology , Games, Experimental , Humans , Models, Psychological , Reaction Time/physiology
16.
J Neurosci ; 34(5): 1738-47, 2014 Jan 29.
Article in English | MEDLINE | ID: mdl-24478356

ABSTRACT

When two dissimilar stimuli are presented to the eyes, perception alternates between multiple interpretations, a phenomenon dubbed binocular rivalry. Numerous recent imaging studies have attempted to unveil neural substrates underlying multistable perception. However, these studies had a conceptual constraint: access to observers' perceptual state relied on their introspection and active report. Here, we investigated to what extent neural correlates of binocular rivalry in healthy humans are confounded by this subjective measure and by action. We used the optokinetic nystagmus and pupil size to objectively and continuously map perceptual alternations for binocular-rivalry stimuli. Combining these two measures with fMRI allowed us to assess the neural correlates of binocular rivalry time locked to the perceptual alternations in the absence of active report. When observers were asked to actively report their percept, our objective measures matched the report. In this active condition, objective measures and subjective reporting revealed that occipital, parietal, and frontal areas underlie the processing of binocular rivalry, replicating earlier findings. Furthermore, objective measures provided additional statistical power due to their continuous nature. Importantly, when observers passively experienced rivalry without reporting perceptual alternations, a different picture emerged: differential neural activity in frontal areas was absent, whereas activation in occipital and parietal regions persisted. Our results question the popular view of a driving role of frontal areas in the initiation of perceptual alternations during binocular rivalry. Instead, we conclude that frontal areas are associated with active report and introspection rather than with rivalry per se.


Subject(s)
Frontal Lobe/physiology , Adolescent , Adult , Female , Frontal Lobe/blood supply , Humans , Male , Young Adult
17.
Wiley Interdiscip Rev Cogn Sci ; 15(2): e1668, 2024.
Article in English | MEDLINE | ID: mdl-37933423

ABSTRACT

Pupillary dynamics reflect the effects of distinct and important operations of visual working memory: encoding, maintenance, and prioritization. Here, we review how pupil size predicts memory performance and how it provides novel insights into the mechanisms of each operation. Visual information must first be encoded into working memory with sufficient precision. The depth of this encoding process couples to arousal-linked baseline pupil size as well as to a pupil constriction response, before and after stimulus onset, respectively. Subsequently, the encoded information is maintained over time to ensure it is not lost. Pupil dilation reflects the effortful maintenance of information, wherein storing more items is accompanied by larger dilations. Lastly, the most task-relevant information is prioritized to guide upcoming behavior, which is reflected in yet another dilatory component. Moreover, activated content in memory can be probed pupillometrically by tagging visual information with distinct luminance levels. Through this luminance-tagging mechanism, pupil light responses reveal whether dark or bright items receive more attention during encoding and prioritization. Together, conceptualizing pupil responses as a sum of distinct components over time reveals insights into the operations of visual working memory. From this viewpoint, pupillometry is a promising avenue for studying the core operations of visual working memory. This article is categorized under: Psychology > Attention; Psychology > Memory; Psychology > Theory and Methods.


Subject(s)
Memory, Short-Term , Pupil , Humans , Memory, Short-Term/physiology , Pupil/physiology , Cognition
18.
J Cogn ; 7(1): 8, 2024.
Article in English | MEDLINE | ID: mdl-38223232

ABSTRACT

Not only is visual attention shifted to objects in the external world; attention can also be directed to objects in memory. We have recently shown that pupil size indexes how strongly items are attended externally, which was reflected in more precise encoding into visual working memory. Using a retro-cuing paradigm, we here replicated this finding by showing that stronger pupil constrictions during encoding reflected the depth of encoding. Importantly, we extend this previous work by showing that pupil size also revealed the intensity of internal attention toward content stored in visual working memory. Specifically, pupil dilation during the prioritization of one among multiple internally stored representations predicted the precision of the prioritized item. Furthermore, the dynamics of the pupillary responses revealed that the intensities of internal and external attention independently determined the precision of internalized visual representations. Our results show that both internal and external attention are not all-or-none processes, but should rather be thought of as continuous resources that can be deployed at varying intensities. The employed pupillometric approach allows us to unravel the intricate interplay between internal and external attention and their effects on visual working memory.

19.
Front Neuroergon ; 5: 1338243, 2024.
Article in English | MEDLINE | ID: mdl-38559665

ABSTRACT

Automatically detecting mental states such as stress from video images of the face could support evaluating stress responses in applicants for high-risk jobs or contribute to timely stress detection in challenging operational settings (e.g., aircrew, command center operators). Challenges in automatically estimating mental state include the generalization of models across contexts and across participants. We here aim to create robust models by training them using data from different contexts and including physiological features. Fifty-one participants were exposed to different types of stressors (cognitive, social evaluative, and startle) and baseline variants of the stressors. Video, electrocardiogram (ECG), electrodermal activity (EDA), and self-reports (arousal and valence) were recorded. Logistic regression models aimed to classify between high and low arousal and valence across participants, where "high" and "low" were defined relative to the center of the rating scale. Accuracy scores of different models were evaluated: models trained and tested within a specific context (either a baseline or stressor variant of a task), an intermediate context (baseline and stressor variants of a task), or a general context (all conditions together). Furthermore, for each of these model variants, either only the video data was included, only the physiological data, or both video and physiological data. We found that all (video, physiological, and video-physio) models could successfully distinguish between high- and low-rated arousal and valence, though performance tended to be better for (1) arousal than valence, (2) specific contexts than intermediate and general contexts, and (3) video-physio data than video or physiological data alone. Automatic feature selection resulted in the inclusion of 3-20 features, where models based on video-physio data usually included features from video, ECG, and EDA. Still, the performance of video-only models approached that of video-physio models. Arousal and valence ratings by three experienced human observers, based on part of the video data, did not match the self-reports. In sum, we showed that it is possible to automatically monitor arousal and valence even in relatively general contexts, and better than humans can (in the given circumstances), and that non-contact video images of faces capture an important part of the information, which has practical advantages.
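The classification step described above can be sketched with a bare-bones logistic regression; the two synthetic features below are hypothetical stand-ins for, e.g., heart rate and EDA level, not the study's actual feature set or preprocessing:

```python
import numpy as np

def train_logistic(X, y, lr=0.1, epochs=500):
    """Gradient-descent logistic regression; bias folded into the weights."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))     # predicted P(high arousal)
        w -= lr * Xb.T @ (p - y) / len(y)     # gradient of the log-loss
    return w

def predict(X, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (1.0 / (1.0 + np.exp(-Xb @ w)) > 0.5).astype(int)

# Synthetic trials: low-arousal and high-arousal feature clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)),    # low-arousal trials
               rng.normal(1.0, 1.0, (100, 2))])    # high-arousal trials
y = np.r_[np.zeros(100), np.ones(100)]

w = train_logistic(X, y)
accuracy = (predict(X, w) == y).mean()   # well above chance on these clusters
```

The cross-context generalization problem the abstract raises corresponds to training this model on trials from one task context and testing on another, rather than evaluating on the training trials as done here.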

20.
J Vis ; 13(6)2013 May 17.
Article in English | MEDLINE | ID: mdl-23685390

ABSTRACT

The link between arousal and pupil dilation is well studied, but it is less well known that other cognitive processes can also trigger pupil responses. Here we present evidence that pupil responses can be induced by high-level scene processing, independent of changes in low-level features or arousal. In Experiment 1, we recorded changes in the pupil diameter of observers while they viewed a variety of natural scenes, with or without a sun, that were presented either upright or inverted. Image inversion had the strongest effect on the pupil responses: the pupil constricted more to the onset of upright images than to inverted images. Furthermore, the amplitudes of pupil constrictions to images containing a sun were larger relative to control images. In Experiment 2, we presented cartoon versions of upright and inverted pictures that included either a sun or a moon. The image backgrounds were kept identical across conditions. As in Experiment 1, upright images triggered pupil constrictions with larger amplitudes than inverted images, and images of the sun evoked greater pupil constriction than images of the moon. We suggest that these modulations of pupil responses were due to higher-level interpretations of image content.


Subject(s)
Arousal/physiology , Reflex, Pupillary/physiology , Visual Perception/physiology , Adult , Humans , Lighting , Photic Stimulation/methods