Results 1 - 7 of 7
1.
Nat Methods ; 21(7): 1316-1328, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38918605

ABSTRACT

Contemporary pose estimation methods enable precise measurements of behavior via supervised deep learning with hand-labeled video frames. Although effective in many cases, the supervised approach requires extensive labeling and often produces outputs that are unreliable for downstream analyses. Here, we introduce 'Lightning Pose', an efficient pose estimation package with three algorithmic contributions. First, in addition to training on a few labeled video frames, we use many unlabeled videos and penalize the network whenever its predictions violate motion continuity, multiple-view geometry and posture plausibility (semi-supervised learning). Second, we introduce a network architecture that resolves occlusions by predicting pose on any given frame using surrounding unlabeled frames. Third, we refine the pose predictions post hoc by combining ensembling and Kalman smoothing. Together, these components render pose trajectories more accurate and scientifically usable. We released a cloud application that allows users to label data, train networks and process new videos directly from the browser.
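For readers wanting intuition for the motion-continuity term mentioned above, the sketch below is a minimal, hypothetical version of such a penalty written in PyTorch. It is not Lightning Pose's actual loss or API; the tensor shapes, the pixel threshold `epsilon`, and the hinge form are assumptions made for illustration. In a semi-supervised setup, a term like this would be weighted and added to the ordinary supervised keypoint loss computed on labeled frames.

```python
import torch

def temporal_continuity_penalty(preds: torch.Tensor, epsilon: float = 20.0) -> torch.Tensor:
    """Penalize implausibly large frame-to-frame jumps in predicted keypoints.

    preds:   (T, K, 2) tensor of predicted (x, y) coordinates for K keypoints
             over T consecutive frames of an unlabeled video clip.
    epsilon: jump size in pixels below which motion is considered plausible;
             only the excess above this threshold is penalized.
    """
    diffs = preds[1:] - preds[:-1]              # (T-1, K, 2) frame-to-frame displacement
    jumps = torch.linalg.norm(diffs, dim=-1)    # (T-1, K) Euclidean jump per keypoint
    # Hinge-style penalty: zero while motion stays under epsilon, linear above it.
    return torch.clamp(jumps - epsilon, min=0.0).mean()


# Toy usage: evaluate the penalty on random "predictions" for a 100-frame clip
# with 17 keypoints; in training this would be added to the supervised loss.
preds = torch.rand(100, 17, 2) * 256
unsupervised_loss = temporal_continuity_penalty(preds)
```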


Subject(s)
Algorithms , Bayes Theorem , Video Recording , Animals , Video Recording/methods , Supervised Machine Learning , Cloud Computing , Software , Posture/physiology , Deep Learning , Image Processing, Computer-Assisted/methods , Behavior, Animal
2.
Behav Res Methods ; 56(4): 3452-3468, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38594442

ABSTRACT

Unconscious processing has been widely examined using diverse and well-controlled methodologies. However, the extent to which these findings generalize to real-life instances of information processing without awareness is limited. Here, we present a novel inattentional blindness (IB) paradigm in virtual reality (VR). In three experiments, we repeatedly induced IB while participants foveally viewed salient stimuli for prolonged durations. The effectiveness of this paradigm demonstrates the close relationship between top-down attention and subjective experience. Thus, this method provides an ecologically valid setup for examining processing without awareness.


Subject(s)
Attention , Awareness , Virtual Reality , Humans , Attention/physiology , Male , Female , Adult , Young Adult , Visual Perception/physiology , Photic Stimulation
3.
Cortex ; 173: 49-60, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38367591

ABSTRACT

Despite its centrality to human experience, the functional role of conscious awareness is not yet known. One hypothesis suggests that consciousness is necessary for allowing high-level information to refine low-level processing in a "top-down" manner. To test this hypothesis, we examined whether consciousness is needed for integrating contextual information with sensory information during visual object recognition, a case of top-down processing that is automatic and ubiquitous in our daily visual experience. In three experiments, 137 participants were asked to determine the identity of an ambiguous object presented to them. Crucially, a scene biasing the interpretation of the object towards one option over another (e.g., a picture of a tree when the object could equally be perceived as a fish or a leaf) was presented either before, after, or alongside the ambiguous object. In all three experiments, the scene biased perception of the ambiguous object when it was consciously perceived, but not when it was processed unconsciously. The results therefore suggest that conscious awareness may be needed for top-down contextual processes.


Subject(s)
Consciousness , Visual Perception , Humans , Awareness , Pattern Recognition, Visual , Photic Stimulation/methods
4.
bioRxiv ; 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-37162966

ABSTRACT

Contemporary pose estimation methods enable precise measurements of behavior via supervised deep learning with hand-labeled video frames. Although effective in many cases, the supervised approach requires extensive labeling and often produces outputs that are unreliable for downstream analyses. Here, we introduce "Lightning Pose," an efficient pose estimation package with three algorithmic contributions. First, in addition to training on a few labeled video frames, we use many unlabeled videos and penalize the network whenever its predictions violate motion continuity, multiple-view geometry, and posture plausibility (semi-supervised learning). Second, we introduce a network architecture that resolves occlusions by predicting pose on any given frame using surrounding unlabeled frames. Third, we refine the pose predictions post hoc by combining ensembling and Kalman smoothing. Together, these components render pose trajectories more accurate and scientifically usable. We release a cloud application that allows users to label data, train networks, and process new videos directly from the browser.
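The post hoc ensembling-and-smoothing step can likewise be illustrated with a toy sketch: average keypoint trajectories across several independently trained networks, then run a Kalman filter over the averaged trajectory. This is a simplified stand-in, not the package's actual post-processor; the 1D constant-velocity model, the noise scales `q` and `r`, and the forward-only filtering (no backward smoothing pass) are all simplifying assumptions.

```python
import numpy as np

def kalman_filter_1d(z, q=1e-2, r=1.0):
    """Forward Kalman filter for a 1D constant-velocity model.

    z : (T,) noisy observations of one keypoint coordinate over time.
    q : process-noise scale (how much the latent velocity may drift).
    r : observation-noise variance.
    Returns filtered position estimates, shape (T,).
    """
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition: position += velocity
    H = np.array([[1.0, 0.0]])               # we only observe position
    Q = q * np.eye(2)
    R = np.array([[r]])

    x = np.array([z[0], 0.0])                # initial state: first observation, zero velocity
    P = np.eye(2)
    out = np.empty_like(z, dtype=float)
    for t, zt in enumerate(z):
        # Predict forward one frame.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the current observation.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([zt]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out[t] = x[0]
    return out


# Ensemble then smooth: average the x-trajectory of one keypoint across
# 5 hypothetical networks, then filter the averaged trajectory.
ensemble = np.random.rand(5, 200) * 100
smoothed = kalman_filter_1d(ensemble.mean(axis=0))
```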

5.
PLoS Comput Biol ; 17(9): e1009439, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34550974

ABSTRACT

Recent neuroscience studies demonstrate that a deeper understanding of brain function requires a deeper understanding of behavior. Detailed behavioral measurements are now often collected using video cameras, resulting in an increased need for computer vision algorithms that extract useful information from video data. Here we introduce a new video analysis tool that combines the output of supervised pose estimation algorithms (e.g. DeepLabCut) with unsupervised dimensionality reduction methods to produce interpretable, low-dimensional representations of behavioral videos that extract more information than pose estimates alone. We demonstrate this tool by extracting interpretable behavioral features from videos of three different head-fixed mouse preparations, as well as a freely moving mouse in an open field arena, and show how these interpretable features can facilitate downstream behavioral and neural analyses. We also show how the behavioral features produced by our model improve the precision and interpretation of these downstream analyses compared to using the outputs of either fully supervised or fully unsupervised methods alone.
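The overall pipeline shape described here — supervised keypoints in, interpretable low-dimensional behavioral features out — can be sketched with a minimal stand-in that runs PCA on pose trajectories. This is not the paper's actual model, which also draws on the video frames themselves; the array shapes, the z-scoring step, and the choice of four components are assumptions made for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

# Assume a (T, K, 2) array of keypoint coordinates exported from a supervised
# pose estimator such as DeepLabCut: T frames, K keypoints, (x, y) per keypoint.
T, K = 5000, 8
keypoints = np.random.rand(T, K, 2) * 400      # placeholder trajectories

# Flatten each frame's pose into a single vector and z-score each coordinate.
poses = keypoints.reshape(T, K * 2)
poses = (poses - poses.mean(axis=0)) / poses.std(axis=0)

# Unsupervised dimensionality reduction: keep a handful of components that
# summarize most of the postural variance across the session.
pca = PCA(n_components=4)
behavioral_features = pca.fit_transform(poses)   # (T, 4) low-dimensional trajectory

print("explained variance:", pca.explained_variance_ratio_.round(3))
```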


Subject(s)
Algorithms , Artificial Intelligence/statistics & numerical data , Behavior, Animal , Video Recording , Animals , Computational Biology , Computer Simulation , Markov Chains , Mice , Models, Statistical , Neural Networks, Computer , Supervised Machine Learning/statistics & numerical data , Unsupervised Machine Learning/statistics & numerical data , Video Recording/statistics & numerical data
6.
Psychol Sci ; 31(6): 663-677, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32384011

ABSTRACT

Contextual effects require integration of top-down predictions and bottom-up visual information. Given the widely assumed link between integration and consciousness, we asked whether contextual effects require consciousness. In two experiments (total N = 60), an ambiguous stimulus (which could be read as either B or 13) was presented alongside masked numbers (12 and 14) or letters (A and C). Context biased stimulus classification both when the context was consciously perceived and when it was not. However, unconsciously perceived contexts evoked smaller effects. This finding was replicated and generalized to another language in a further experiment (N = 46) using a different set of stimuli, strengthening the claim that symbolic contextual effects can occur without awareness. Moreover, four experiments (total N = 160) suggested that these unconscious effects might be limited to the categorical level (numbers context vs. letters context) and do not extend to the lexical level (words context vs. nonwords context). Taken together, our results suggest that although consciousness may not be necessary for effects that require simple integration or none at all, it is nevertheless required for integration over larger semantic windows.


Subject(s)
Consciousness/physiology , Language , Psychomotor Performance/physiology , Visual Perception/physiology , Adult , Electroencephalography , Female , Humans , Male , Perceptual Masking/physiology , Photic Stimulation , Semantics , Young Adult
7.
J Exp Psychol Hum Percept Perform ; 43(12): 1974-1992, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28425733

ABSTRACT

Can observers maintain more than 1 attentional set and search for 2 features in parallel? Previous studies that relied on attentional capture by irrelevant distractors to answer this question focused on features from the same dimension, specifically color, and showed that 2 separate color templates can guide attention selectively and simultaneously. Here, the authors investigated attentional guidance by 2 features from different dimensions. In three spatial-cueing experiments, they compared contingent capture during single-set versus dual-set search. The results showed that attention was guided less efficiently by 2 features than by just 1. This impairment varied considerably across target-feature dimensions (color, size, shape, and orientation). Considered alongside previous studies, these findings suggest avenues for future research to determine whether impaired attentional guidance by multiple templates occurs only in cross-dimensional disjunctive search or also in within-dimension search. The present findings also showed that although performance improved when the target feature repeated on successive trials, a relevant-feature cue did not capture attention to a larger extent when its feature matched that of the previous target. These findings suggest that selection history cannot account for contingent capture and instead affects processes subsequent to target selection.


Subject(s)
Attention/physiology , Color Perception/physiology , Cues , Form Perception/physiology , Pattern Recognition, Visual/physiology , Psychomotor Performance/physiology , Size Perception/physiology , Adolescent , Adult , Female , Humans , Male , Young Adult