1.
Front Psychol ; 12: 748539, 2021.
Article in English | MEDLINE | ID: mdl-34992563

ABSTRACT

Pupil size is influenced by cognitive and non-cognitive factors. One of the strongest modulators of pupil size is scene luminance, which complicates studies of cognitive pupillometry in environments with complex patterns of visual stimulation. To help understand how dynamic visual scene statistics influence pupil size during an active visual search task in a visually rich 3D virtual environment (VE), we analyzed the correlation between pupil size and intensity changes of image pixels in the red, green, and blue (RGB) channels within a large window (~14 degrees) surrounding the gaze position over time. Overall, the blue and green channels had a stronger influence on pupil size than the red channel. The correlation maps were not consistent with the hypothesis of a foveal bias for luminance, instead revealing a significant contextual effect, whereby pixels above the gaze point in the green/blue channels had a disproportionate impact on pupil size. We interpreted this differential sensitivity of pupil responsiveness to blue light from above as a "blue sky effect," and confirmed this finding in a follow-on experiment with a controlled laboratory task. Pupillary constrictions were significantly stronger when blue was presented above fixation (paired with luminance-matched gray on the bottom) than below fixation. This effect was specific to the blue color channel and to this stimulus orientation. These results highlight the differential sensitivity of pupillary responses to scene statistics, which should be accounted for in studies or applications that involve complex visual environments, and suggest blue light from above as a predominant factor influencing pupil size.
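The gaze-contingent correlation analysis described in this abstract can be sketched in a few lines. The toy version below is not the authors' pipeline: the window size, the single-channel handling, and the simulated "pixels above gaze drive constriction" relationship are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in data (shapes and names are assumptions, not the authors'):
T, H, W = 500, 8, 8                       # time samples; gaze-centered window
pixels = rng.standard_normal((T, H, W))   # one color channel over time
# Toy "blue sky effect": pupil constricts when pixels ABOVE gaze brighten
pupil = -pixels[:, :H // 2, :].mean(axis=(1, 2)) + 0.1 * rng.standard_normal(T)

def correlation_map(pupil, pixels):
    """Pearson r between the pupil trace and each pixel's intensity over time."""
    p = (pupil - pupil.mean()) / pupil.std()
    x = (pixels - pixels.mean(axis=0)) / pixels.std(axis=0)
    return (p[:, None, None] * x).mean(axis=0)

rmap = correlation_map(pupil, pixels)
upper = rmap[:H // 2].mean()   # mean correlation above the gaze point
lower = rmap[H // 2:].mean()   # mean correlation below the gaze point
```

In the real analysis each RGB channel would be processed separately and the resulting maps inspected for spatial asymmetries such as the upper-field bias the abstract reports.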

2.
Netw Neurosci ; 4(3): 611-636, 2020.
Article in English | MEDLINE | ID: mdl-32885118

ABSTRACT

An overarching goal of neuroscience research is to understand how heterogeneous neuronal ensembles cohere into networks of coordinated activity to support cognition. To investigate how local activity harmonizes with global signals, we measured electroencephalography (EEG) while single pulses of transcranial magnetic stimulation (TMS) perturbed occipital and parietal cortices. We estimated the rapid reconfigurations of dynamic network communities within specific frequency bands of the EEG, and characterized two distinct features of network reconfiguration, flexibility and allegiance, among spatially distributed neural sources following TMS. Using distance from the stimulation site to infer local and global effects, we found that alpha activity (8-12 Hz) reflects concurrent local and global effects on network dynamics. Pairwise allegiance of brain regions to communities on average increased near the stimulation site, whereas TMS-induced changes to flexibility were generally invariant to distance and stimulation site. In contrast, communities within the beta (13-20 Hz) band demonstrated a high level of spatial specificity, particularly within a cluster comprising paracentral areas. Together, these results suggest that focal magnetic neurostimulation to distinct cortical sites can help identify both local and global effects on brain network dynamics, and highlight fundamental differences in the manifestation of network reconfigurations within the alpha and beta frequency bands.

3.
PLoS One ; 15(3): e0230517, 2020.
Article in English | MEDLINE | ID: mdl-32203562

ABSTRACT

Pupil size modulations have been used for decades as a window into the mind, and several pupillary features have been implicated in a variety of cognitive processes. Thus, a general challenge facing the field of pupillometry has been understanding which pupil features should be most relevant for explaining behavior in a given task domain. In the present study, a longitudinal design was employed in which participants completed 8 biweekly sessions of a classic mental arithmetic task, allowing us to tease apart the relationships between tonic/phasic pupil features (baseline, peak amplitude, peak latency) and two task-related cognitive processes: mental processing load (indexed by math question difficulty) and decision making (indexed by response times). We used multi-level modeling to account for individual variation while identifying pupil-to-behavior relationships at the single-trial and between-session levels. We show a dissociation between phasic and tonic features, with peak amplitude and latency (but not baseline) driven by ongoing task-related processing, whereas baseline was driven by state-level effects that changed over a longer time period (i.e., weeks). Finally, we report a dissociation between peak amplitude and latency whereby amplitude reflected surprise and processing load, and latency reflected decision making times.
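A simplified stand-in for the multi-level analysis this abstract describes is a two-stage "summary statistics" approach: fit the pupil-to-behavior slope within each subject, then test the slopes at the group level. The effect sizes, trial counts, and variable names below are simulated assumptions, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated single-trial data: does peak latency track response time (RT)
# within subjects?
n_subj, n_trials = 20, 120
slopes = []
for s in range(n_subj):
    rt = rng.uniform(0.5, 3.0, n_trials)              # response time (s)
    subj_slope = 0.25 + 0.05 * rng.standard_normal()  # per-subject random slope
    latency = 0.9 + subj_slope * rt + 0.1 * rng.standard_normal(n_trials)
    slope, intercept = np.polyfit(rt, latency, 1)     # level 1: within-subject
    slopes.append(slope)

slopes = np.asarray(slopes)
group_slope = slopes.mean()                            # level 2: group effect
t_stat = group_slope / (slopes.std(ddof=1) / np.sqrt(n_subj))
```

A full multi-level model (e.g., random intercepts and slopes fit jointly) pools information across levels more efficiently, but the two-stage version captures the same logic of separating single-trial from between-subject variation.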


Subject(s)
Cognition; Pupil/physiology; Thinking; Attention; Decision Making; Female; Humans; Longitudinal Studies; Male; Reaction Time
4.
Neuroimage ; 186: 647-666, 2019 02 01.
Article in English | MEDLINE | ID: mdl-30500424

ABSTRACT

Existing data indicate that cortical speech processing is hierarchically organized. Numerous studies have shown that early auditory areas encode fine acoustic details while later areas encode abstracted speech patterns. However, it remains unclear precisely what speech information is encoded across these hierarchical levels. Estimation of speech-driven spectrotemporal receptive fields (STRFs) provides a means to explore cortical speech processing in terms of acoustic or linguistic information associated with characteristic spectrotemporal patterns. Here, we estimate STRFs from cortical responses to continuous speech in fMRI. Using a novel approach based on filtering randomly-selected spectrotemporal modulations (STMs) from aurally-presented sentences, STRFs were estimated for a group of listeners and categorized using a data-driven clustering algorithm. 'Behavioral STRFs' highlighting STMs crucial for speech recognition were derived from intelligibility judgments. Clustering revealed that STRFs in the supratemporal plane represented a broad range of STMs, while STRFs in the lateral temporal lobe represented circumscribed STM patterns important to intelligibility. Detailed analysis recovered a bilateral organization with posterior-lateral regions preferentially processing STMs associated with phonological information and anterior-lateral regions preferentially processing STMs associated with word- and phrase-level information. Regions in lateral Heschl's gyrus preferentially processed STMs associated with vocalic information (pitch).
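The logic of estimating a receptive field from randomly filtered stimuli can be illustrated with a toy reverse-correlation sketch. The real study worked with fMRI responses and a clustering step; everything below (grid size, trial counts, the hidden receptive field) is simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy reverse correlation over a grid of spectrotemporal modulations (STMs).
# On each "sentence" a random subset of STMs survives filtering; the response
# is driven by a hidden receptive field over the STM grid.
n_trials, n_stm = 2000, 25               # e.g., 5 spectral x 5 temporal bins
strf_true = np.zeros(n_stm)
strf_true[12] = 1.0                      # the unit prefers one STM bin
masks = rng.random((n_trials, n_stm)) < 0.5          # retained STMs per trial
resp = masks.astype(float) @ strf_true + 0.5 * rng.standard_normal(n_trials)

# Estimated STRF: mean response when an STM is present minus when absent
strf_est = np.array([resp[masks[:, k]].mean() - resp[~masks[:, k]].mean()
                     for k in range(n_stm)])
peak = int(strf_est.argmax())
```

The same present-minus-absent contrast, computed per voxel and then clustered, corresponds to the data-driven categorization of STRFs described in the abstract.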


Subject(s)
Auditory Cortex/physiology; Brain Mapping/methods; Language; Speech Intelligibility/physiology; Speech Perception/physiology; Adult; Auditory Cortex/diagnostic imaging; Female; Humans; Magnetic Resonance Imaging; Male; Young Adult
5.
Transl Vis Sci Technol ; 7(5): 28, 2018 Sep.
Article in English | MEDLINE | ID: mdl-30356944

ABSTRACT

PURPOSE: To monitor visual defects associated with macular degeneration (MD), we present a new psychophysical assessment called multiline adaptive perimetry (MAP) that measures visual field integrity by simultaneously estimating regions associated with perceptual distortions (metamorphopsia) and visual sensitivity loss (scotoma). METHODS: We first ran simulations of MAP with a computerized model of a human observer to determine optimal test design characteristics. In experiment 1, predictions of the model were assessed by simulating metamorphopsia with an eye-tracking device in 20 participants with healthy vision. In experiment 2, eight patients (16 eyes) with macular disease completed two MAP assessments separated by about 12 weeks, while a subset (10 eyes) also completed repeated Macular Integrity Assessment (MAIA) microperimetry and Amsler grid exams. RESULTS: MAP showed strong repeatability and high accuracy, sensitivity, and specificity (0.89, 0.81, and 0.90, respectively) in classifying patient eyes with severe visual impairment. We also found a significant relationship between the spatial patterns of performance across visual field loci derived from MAP and MAIA microperimetry. However, there was a lack of correspondence between MAP and subjective Amsler grid reports in isolating perceptually distorted regions. CONCLUSIONS: These results highlight the validity and efficacy of MAP in producing quantitative maps of visual field disturbances, including simultaneous mapping of metamorphopsia and sensitivity impairment. TRANSLATIONAL RELEVANCE: Future work will be needed to assess the applicability of this examination for potential early detection of MD symptoms and/or portable assessment on a home device or computer.

6.
Front Psychol ; 9: 899, 2018.
Article in English | MEDLINE | ID: mdl-29962982

ABSTRACT

Contrast sensitivity (CS), the ability to detect small spatial changes of luminance, is a fundamental aspect of vision. However, while visual acuity is commonly measured in eye clinics, CS is often not assessed. At issue is that tests of CS are not highly standardized in the field and that, in many cases, the optotypes used are not sensitive enough to measure gradations of performance and visual abilities within the normal range. Here, to develop more sensitive measures of CS, we examined how CS is affected by different combinations of glare and ambient lighting in young healthy participants. We found that low levels of glare have a relatively small impact on vision under both photopic and mesopic conditions, while higher levels had significantly greater consequences for CS under mesopic conditions. Importantly, we found that the amount of glare induced by a standard built-in system (69 lux) was insufficient to reduce CS, but increasing it to 125 lux with a custom system did cause a significant reduction and shift of CS in healthy individuals. This research provides data that can guide the use of more sensitive CS measures for characterizing visual processing abilities in a variety of populations, with ecological validity for non-ideal viewing conditions such as nighttime driving.

7.
PLoS One ; 13(1): e0191883, 2018.
Article in English | MEDLINE | ID: mdl-29377925

ABSTRACT

There is extensive laboratory research studying the effects of acute sleep deprivation on biological and cognitive functions, yet much less is known about naturalistic patterns of sleep loss and their potential impact on the daily or weekly functioning of an individual. Longitudinal studies are needed to advance our understanding of relationships between naturalistic sleep and fluctuations in human health and performance, but it is first necessary to understand the efficacy of current tools for long-term sleep monitoring. The present study used wrist actigraphy and sleep log diaries to obtain daily measurements of sleep from 30 healthy adults for up to 16 consecutive weeks. We used non-parametric Bland-Altman analysis and correlation coefficients to calculate agreement between subjectively and objectively measured variables including sleep onset time, sleep offset time, sleep onset latency, number of awakenings, the amount of wake time after sleep onset, and total sleep time. We also examined compliance data on the submission of daily sleep logs according to the experimental protocol. Overall, we found strong agreement for sleep onset and sleep offset times, but relatively poor agreement for variables related to wakefulness including sleep onset latency, awakenings, and wake after sleep onset. Compliance tended to decrease significantly over time according to a linear function, but there were substantial individual differences in overall compliance rates. There were also individual differences in agreement that could be explained, in part, by differences in compliance. Individuals who were consistently more compliant over time also tended to show the best agreement and lower scores on the behavioral inhibition scale (BIS).
Our results provide evidence for convergent validity in measuring sleep onset and sleep offset with wrist actigraphy and sleep logs, and we conclude by proposing an analysis method to mitigate the impact of non-compliance and measurement errors when the two methods provide discrepant estimates.
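The non-parametric Bland-Altman agreement analysis used in this study can be sketched as follows: instead of mean bias and ±1.96 SD limits, it uses the median difference and percentile limits of agreement. The sleep durations, bias, and noise levels below are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated nightly total sleep time (minutes) from two methods
n = 300
truth = rng.normal(420, 45, n)
actigraphy = truth + rng.normal(0, 12, n)
diary = truth + 15 + rng.normal(0, 20, n)   # diaries overestimate slightly

# Non-parametric Bland-Altman: median bias and percentile limits of agreement
diff = diary - actigraphy
bias = np.median(diff)
loa_low, loa_high = np.percentile(diff, [2.5, 97.5])
r = np.corrcoef(actigraphy, diary)[0, 1]    # complementary correlation check
```

A wide percentile interval with a small median bias is the signature of methods that agree on average but disagree night to night, which is roughly the pattern the abstract reports for wakefulness-related variables.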


Subject(s)
Actigraphy/methods; Documentation; Guideline Adherence; Sleep; Wrist; Adolescent; Adult; Female; Healthy Volunteers; Humans; Longitudinal Studies; Male; Personality; Young Adult
8.
J Vis ; 16(15): 15, 2016 12 01.
Article in English | MEDLINE | ID: mdl-28006065

ABSTRACT

Contrast sensitivity (CS) is widely used as a measure of visual function in both basic research and clinical evaluation. There is conflicting evidence on the extent to which measuring the full contrast sensitivity function (CSF) offers more functionally relevant information than a single measurement from an optotype CS test, such as the Pelli-Robson chart. Here we examine the relationship between functional CSF parameters and other measures of visual function, and establish a framework for predicting individual CSFs with effectively a zero-parameter model that shifts a standard-shaped template CSF horizontally and vertically according to independent measurements of high contrast acuity and letter CS, respectively. This method was evaluated for three different CSF tests: a chart test (CSV-1000), a computerized sine-wave test (M&S Sine Test), and a recently developed adaptive test (quick CSF). Subjects were 43 individuals with healthy vision or impairment too mild to be considered low vision (acuity range of -0.3 to 0.34 logMAR). While each test demands a slightly different normative template, results show that individual subject CSFs can be predicted with roughly the same precision as test-retest repeatability, confirming that individuals predominantly differ in terms of peak CS and peak spatial frequency. In fact, these parameters were sufficiently related to empirical measurements of acuity and letter CS to permit accurate estimation of the entire CSF of any individual with a deterministic model (zero free parameters). These results demonstrate that in many cases, measuring the full CSF may provide little additional information beyond letter acuity and contrast sensitivity.
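The zero-parameter template idea from this abstract can be sketched directly: a fixed-shape CSF template is shifted vertically by a letter-CS measurement and horizontally by an acuity-derived frequency. The log-parabola shape and the acuity-to-peak-frequency ratio below are illustrative assumptions, not the paper's normative fits.

```python
import numpy as np

# Fixed-shape log-parabola CSF template
def template_csf(freqs, peak_cs, peak_f, bandwidth=1.2):
    """log10 contrast sensitivity as a function of spatial frequency (cpd)."""
    return peak_cs - (np.log10(freqs / peak_f) / bandwidth) ** 2

# Two independent measurements for a hypothetical observer:
letter_cs = 1.8       # Pelli-Robson log CS  -> vertical shift of the template
acuity_cpd = 24.0     # cutoff from high-contrast acuity -> horizontal shift

# Assumed anchoring rule: peak frequency a fixed ratio below the acuity cutoff
peak_f = acuity_cpd / 6.0
freqs = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
predicted = template_csf(freqs, peak_cs=letter_cs, peak_f=peak_f)
```

Because both shifts come from independent measurements, the predicted curve has zero free parameters, which is exactly why its accuracy approaching test-retest repeatability is a strong result.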


Subject(s)
Contrast Sensitivity/physiology; Vision Tests/methods; Visual Acuity/physiology; Adult; Aged; Aged, 80 and over; Female; Humans; Male; Middle Aged; Vision, Low/physiopathology; Young Adult
9.
Neuroimage ; 136: 149-61, 2016 Aug 01.
Article in English | MEDLINE | ID: mdl-27164327

ABSTRACT

The adaptive nature of biological motion perception has been documented in behavioral studies, with research showing that prolonged viewing of an action can bias judgments of subsequent actions towards the opposite of its attributes. However, the neural mechanisms underlying action adaptation aftereffects remain unknown. We examined adaptation-induced changes in brain responses to an ambiguous action after adapting to walking or running actions within two bilateral regions of interest: 1) human middle temporal area (hMT+), a lower-level motion-sensitive region of cortex, and 2) posterior superior temporal sulcus (pSTS), a higher-level action-selective area. We found a significant correlation between neural adaptation strength in right pSTS and perceptual aftereffects to biological motion measured behaviorally, but not in hMT+. The magnitude of neural adaptation in right pSTS was also strongly correlated with individual differences in the degree of autistic traits. Participants with more autistic traits exhibited less adaptation-induced modulations of brain responses in right pSTS and correspondingly weaker perceptual aftereffects. These results suggest a direct link between perceptual aftereffects and adaptation of neural populations in right pSTS after prolonged viewing of a biological motion stimulus, and highlight the potential importance of this brain region for understanding differences in social-cognitive processing along the autistic spectrum.


Subject(s)
Adaptation, Physiological/physiology; Autistic Disorder/physiopathology; Locomotion/physiology; Motion Perception/physiology; Nerve Net/physiology; Neuronal Plasticity/physiology; Wernicke Area/physiopathology; Brain Mapping; Female; Humans; Male; Young Adult
10.
J Vis ; 16(1): 1, 2016.
Article in English | MEDLINE | ID: mdl-26746875

ABSTRACT

Although there is evidence for specialization in the human brain for processing biological motion per se, few studies have directly examined the specialization of form processing in biological motion perception. The current study was designed to systematically compare form processing in perception of biological (human walkers) to nonbiological (rotating squares) stimuli. Dynamic form-based stimuli were constructed with conflicting form cues (position and orientation), such that the objects were perceived to be moving ambiguously in two directions at once. In Experiment 1, we used the classification image technique to examine how local form cues are integrated across space and time in a bottom-up manner. By comparing with a Bayesian observer model that embodies generic principles of form analysis (e.g., template matching) and integrates form information according to cue reliability, we found that human observers employ domain-general processes to recognize both human actions and nonbiological object movements. Experiments 2 and 3 found differential top-down effects of spatial context on perception of biological and nonbiological forms. When a background does not involve social information, observers are biased to perceive foreground object movements in the direction opposite to surrounding motion. However, when a background involves social cues, such as a crowd of similar objects, perception is biased toward the same direction as the crowd for biological walking stimuli, but not for rotating nonbiological stimuli. The model provided an accurate account of top-down modulations by adjusting the prior probabilities associated with the internal templates, demonstrating the power and flexibility of the Bayesian approach for visual form perception.


Subject(s)
Form Perception/physiology; Motion Perception/physiology; Bayes Theorem; Cues; Female; Humans; Male; Orientation; Reproducibility of Results; Vision, Ocular; Young Adult
11.
Atten Percept Psychophys ; 78(2): 583-601, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26669309

ABSTRACT

Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (~35% identification of /apa/ compared to ~5% in the absence of the masker) served as the basis for classification analysis. The outcome was a high-resolution spatiotemporal map of perceptually relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (~130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content.
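The temporal classification analysis in this abstract can be reduced to a toy sketch: correlate per-frame mask visibility with the observer's report across trials. The frame counts, the response model, and the "driving window" of frames are all simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy classification analysis: each trial reveals the mouth region with a
# random per-frame visibility; comparing visibility between response classes
# yields a temporal classification profile.
n_trials, n_frames = 10000, 30
visibility = rng.random((n_trials, n_frames))
drive = visibility[:, 10:13].mean(axis=1)   # frames 10-12 drive /apa/ reports
p_apa = 0.05 + 0.30 * drive                 # ~5% floor, ~35% when fully visible
resp = rng.random(n_trials) < p_apa         # True = reported /apa/

# Classification profile: mean visibility on /apa/ trials minus the rest
profile = visibility[resp].mean(axis=0) - visibility[~resp].mean(axis=0)
peak_frame = int(profile.argmax())
```

Frames whose visibility systematically differs between response classes are the perceptually relevant ones; applied per pixel region as well as per frame, this yields the spatiotemporal maps the study reports.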


Subject(s)
Acoustic Stimulation/methods; Auditory Perception/physiology; Photic Stimulation/methods; Speech Perception/physiology; Statistics as Topic/methods; Visual Perception/physiology; Adult; Female; Humans; Male; Phonetics; Psychophysics; Speech/physiology; Time Factors
12.
Atten Percept Psychophys ; 78(1): 30-6, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26603043

ABSTRACT

Human actions are complex dynamic stimuli comprised of two principal motion components: 1) common body motion, which represents the translation of the body when a person moves through space, and 2) relative limb movements, resulting from articulation of limbs after factoring out common body motion. Historically, most research in biological motion has focused primarily on relative limb movements while discounting the role of common body motion in human action perception. The current study examined the relative contribution of posture change resulting from relative limb movements and translation of body position resulting from common body motion in discriminating human walking versus running actions. We found that faster translation speeds of common body motion evoked significantly more responses consistent with running when discriminating ambiguous actions morphed between walking and running. Furthermore, this influence was systematically modulated by the uncertainty associated with intrinsic cues as determined by the degree of limited-lifetime spatial sampling. The contribution of common body motion increased monotonically as the reliability of inferring posture changes on the basis of intrinsic cues decreased. These results highlight the importance of translational body movements and their interaction with posture change as a result of relative limb movements in discriminating human actions when visual input information is sparse and noisy.


Subject(s)
Cues; Motion Perception/physiology; Photic Stimulation/methods; Walking/physiology; Walking/psychology; Female; Humans; Male; Motion; Movement/physiology; Posture/physiology; Reproducibility of Results; Young Adult
13.
PLoS One ; 9(11): e112539, 2014.
Article in English | MEDLINE | ID: mdl-25406075

ABSTRACT

It is vitally important for humans to detect living creatures in the environment and to analyze their behavior to facilitate action understanding and high-level social inference. The current study employed naturalistic point-light animations to examine the ability of human observers to spontaneously identify and discriminate socially interactive behaviors between two human agents. Specifically, we investigated the importance of global body form, intrinsic joint movements, extrinsic whole-body movements, and critically, the congruency between intrinsic and extrinsic motions. Motion congruency is hypothesized to be particularly important because of the constraint it imposes on naturalistic action due to the inherent causal relationship between limb movements and whole body motion. Using a free response paradigm in Experiment 1, we discovered that many naïve observers (55%) spontaneously attributed animate and/or social traits to spatially-scrambled displays of interpersonal interaction. Total stimulus motion energy was strongly correlated with the likelihood that an observer would attribute animate/social traits, as opposed to physical/mechanical traits, to the scrambled dot stimuli. In Experiment 2, we found that participants could identify interactions between spatially-scrambled displays of human dance as long as congruency was maintained between intrinsic/extrinsic movements. Violating the motion congruency constraint resulted in chance discrimination performance for the spatially-scrambled displays. Finally, Experiment 3 showed that scrambled point-light dancing animations violating this constraint were also rated as significantly less interactive than animations with congruent intrinsic/extrinsic motion. 
These results demonstrate the importance of intrinsic/extrinsic motion congruency for biological motion analysis, and support a theoretical framework in which early visual filters help to detect animate agents in the environment based on several fundamental constraints. Only after satisfying these basic constraints could stimuli be evaluated for high-level social content. In this way, we posit that perceptual animacy may serve as a gateway to higher-level processes that support action understanding and social inference.


Subject(s)
Interpersonal Relations; Movement; Social Perception; Female; Humans; Male; Young Adult
14.
Front Hum Neurosci ; 8: 91, 2014.
Article in English | MEDLINE | ID: mdl-24605096

ABSTRACT

Visual form analysis is fundamental to shape perception and likely plays a central role in perception of more complex dynamic shapes, such as moving objects or biological motion. Two primary form-based cues serve to represent the overall shape of an object: the spatial position and the orientation of locations along the boundary of the object. However, it is unclear how the visual system integrates these two sources of information in dynamic form analysis, and in particular how the brain resolves ambiguities due to sensory uncertainty and/or cue conflict. In the current study, we created animations of sparsely-sampled dynamic objects (human walkers or rotating squares) comprised of oriented Gabor patches in which orientation could either coincide or conflict with information provided by position cues. When the cues were incongruent, we found a characteristic trade-off between position and orientation information whereby position cues increasingly dominated perception as the relative uncertainty of orientation increased and vice versa. Furthermore, we found no evidence for differences in the visual processing of biological and non-biological objects, casting doubt on the claim that biological motion may be specialized in the human brain, at least in terms of form analysis. To explain these behavioral results quantitatively, we adopt a probabilistic template-matching model that uses Bayesian inference within local modules to estimate object shape separately from either spatial position or orientation signals. The outputs of the two modules are combined with weights that reflect individual estimates of subjective cue reliability, and accumulated over time to produce a decision about the perceived dynamics of the input data. Results of this model provided a close fit to the behavioral data, suggesting a mechanism in the human visual system that approximates rational Bayesian inference to integrate position and orientation signals in dynamic form analysis.
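For Gaussian cues, the reliability-weighted integration at the heart of this kind of model reduces to inverse-variance weighting. The sketch below is the standard textbook form, with illustrative numbers rather than fitted values from the paper.

```python
# Inverse-variance ("reliability-weighted") combination of two shape cues
def integrate(mu_pos, sigma_pos, mu_ori, sigma_ori):
    w_pos, w_ori = 1 / sigma_pos ** 2, 1 / sigma_ori ** 2
    mu = (w_pos * mu_pos + w_ori * mu_ori) / (w_pos + w_ori)
    sigma = (w_pos + w_ori) ** -0.5      # combined estimate is more reliable
    return mu, sigma

# Position cue says "leftward" (-1), orientation cue says "rightward" (+1):
mu_equal, sd_equal = integrate(-1.0, 0.5, +1.0, 0.5)   # equal reliability
mu_noisy, sd_noisy = integrate(-1.0, 0.5, +1.0, 2.0)   # noisy orientation
```

With equal reliability the conflicting cues cancel; as orientation becomes noisier the combined estimate is pulled toward the position cue, reproducing the trade-off described in the abstract.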

15.
Psychol Sci ; 24(7): 1133-41, 2013 Jul 01.
Article in English | MEDLINE | ID: mdl-23670885

ABSTRACT

Point-light animations of biological motion are perceived quickly and spontaneously, giving rise to an irresistible sensation of animacy. However, the mechanisms that support judgments of animacy based on biological motion remain unclear. The current study demonstrates that animacy ratings increase when a spatially scrambled animation of human walking maintains consistency with two fundamental constraints: the direction of gravity and congruency between the directions of intrinsic and extrinsic motion. Furthermore, using a reverse-correlation method, we show that observers employ structural templates, or form-based "priors," reflecting the prototypical mammalian body plan when attributing animacy to scrambled human forms. These findings reveal that perception of animacy in scrambled biological motion involves not only analysis of local intrinsic motion, but also its congruency with global extrinsic motion and global spatial structure. Thus, they suggest a strong influence of prior knowledge about characteristic features of creatures in the natural environment.


Subject(s)
Form Perception/physiology; Motion Perception/physiology; Pattern Recognition, Visual/physiology; Social Perception; Adolescent; Adult; Discrimination, Psychological; Female; Humans; Male; Photic Stimulation; Visual Perception; Walking; Young Adult
16.
J Vis ; 13(2): 8, 2013 Feb 06.
Article in English | MEDLINE | ID: mdl-23390322

ABSTRACT

Human observers are adept at perceiving complex actions in point-light biological motion displays that represent the human form with a sparse array of moving points. However, the neural computations supporting action perception remain unclear, particularly with regard to central versus peripheral vision. We created novel action stimuli comprised of Gabor patches to examine the contributions of various competing visual cues to action perception across the visual field. The Gabor action stimulus made it possible to pin down form processing at two levels: (a) local information about limb angle represented by Gabor orientations and (b) global body structure signaled by the spatial arrangement of Gabor patches. This stimulus also introduced two types of motion signals: (a) local velocity represented by Gabor drifting motion and (b) joint motion trajectories signaled by position changes of Gabor disks over time. In central vision, the computational analysis of global cues based on the spatial arrangement of joints and joint trajectories dominated processing, with minimal influence of local drifting motion and orientation cues. In the periphery we found that local drifting motion and orientation cues interacted with spatial cues in sophisticated ways depending on the particular discrimination task and location within the visual field to influence action perception. This dissociation was evident in several experiments showing phantom action percepts in the periphery that contradicted central vision. Our findings suggest a highly flexible and adaptive system for processing visual cues at multiple levels for biological motion and action perception.


Subject(s)
Cues; Discrimination, Psychological/physiology; Form Perception/physiology; Motion Perception/physiology; Orientation/physiology; Space Perception/physiology; Visual Fields; Humans; Light; Male; Motion; Photic Stimulation; Psychophysics
17.
Atten Percept Psychophys ; 73(2): 572-80, 2011 Feb.
Article in English | MEDLINE | ID: mdl-21264736

ABSTRACT

Humans extract visual information from the world through spatial frequency (SF) channels that are sensitive to different scales of light-dark fluctuations across visual space. Using two methods, we measured human SF tuning for discriminating videos of human actions (walking, running, skipping and jumping). The first, more traditional, approach measured signal-to-noise ratio (s/n) thresholds for videos filtered by one of six Gaussian band-pass filters ranging from 4 to 128 cycles/image. The second approach used SF "bubbles" (Willenbockel et al., Journal of Experimental Psychology: Human Perception and Performance, 36(1), 122-135, 2010), which randomly filters the entire SF domain on each trial and uses reverse correlation to estimate SF tuning. Results from both methods were consistent and revealed a diagnostic SF band centered between 12-16 cycles/image (about 1-1.25 cycles/body width). Efficiency on this task was estimated by comparing s/n thresholds for humans to an ideal observer, and was quite low (~0.04%) for both experiments.
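The SF "bubbles" reverse-correlation logic can be sketched in miniature: each trial applies a random gain profile across SF bands, and the per-band correlation between gain and accuracy estimates the tuning curve. The band count and the simulated observer's tuning below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy SF "bubbles" experiment
n_trials, n_bands = 5000, 6                 # bands spanning ~4-128 cycles/image
gains = rng.random((n_trials, n_bands))     # random SF filter per trial
p_correct = 0.5 + 0.45 * gains[:, 2]        # observer relies on band 2 only
correct = rng.random(n_trials) < p_correct

# Reverse correlation: per-band correlation between filter gain and accuracy
tuning = np.array([np.corrcoef(gains[:, b], correct)[0, 1]
                   for b in range(n_bands)])
best_band = int(tuning.argmax())
```

Bands whose gain predicts accuracy are the diagnostic ones; in the study this recovered a band near 12-16 cycles/image, consistent with the fixed band-pass method.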


Subject(s)
Contrast Sensitivity; Discrimination, Psychological; Motion Perception; Sensory Thresholds; Fourier Analysis; Humans; Normal Distribution; Psychophysics; Video Recording
18.
J Vis ; 10(12): 15, 2010 Oct 01.
Article in English | MEDLINE | ID: mdl-21047747

ABSTRACT

Among the most common events in our daily lives is seeing people in action. Scientists have accumulated evidence suggesting humans may have developed specialized mechanisms for recognizing these visual events. In the current experiments, we apply the "bubbles" technique to construct space-time classification movies that reveal the key features human observers use to discriminate biological motion stimuli (point-light and stick figure walkers). We find that observers rely on similar features for both types of stimuli, namely, form information in the upper body and dynamic information in the relative motion of the limbs. To measure the contributions of motion and form analyses in this task, we computed classification movies from the responses of a biologically plausible model that can discriminate biological motion patterns (M. A. Giese & T. Poggio, 2003). The model classification movies reveal similar key features to observers, with the model's motion and form pathways each capturing unique aspects of human performance. In a second experiment, we computed classification movies derived from trials of varying exposure times (67-267 ms) and demonstrate the transition to form-based strategies as motion information becomes less available. Overall, these results highlight the relative contributions of motion and form computations to biological motion perception.


Subject(s)
Models, Neurological; Motion Perception/physiology; Movement; Pattern Recognition, Visual/physiology; Psychophysics; Female; Humans; Male; Photic Stimulation/methods; Walking
19.
J Vis ; 8(3): 28.1-11, 2008 Mar 27.
Article in English | MEDLINE | ID: mdl-18484834

ABSTRACT

Humans are remarkably good at recognizing biological motion, even when depicted as point-light animations. There is currently some debate as to the relative importance of form and motion cues in the perception of biological motion from such simple dot displays. To investigate this issue, we adapted the "Bubbles" technique, most commonly used in face and object perception, to isolate the critical features for point-light biological motion perception. We find that observer sensitivity waxes and wanes during the course of an action, with peak discrimination performance most strongly correlated with moments of local opponent motion of the extremities. When dynamic cues are removed, instances that are most perceptually salient become the least salient, evidence that the strategies employed during point-light biological motion perception are not effective for recognizing human actions from static patterns. We conclude that local motion features, not global form templates, are most critical for perceiving point-light biological motion. These experiments also present a useful technique for identifying key features of dynamic events.


Subject(s)
Form Perception/physiology; Motion Perception/physiology; Pattern Recognition, Visual/physiology; Humans; Photic Stimulation; Psychophysics/methods; Sensory Thresholds