Results 1 - 11 of 11
1.
J Neurosci ; 44(26)2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38740441

ABSTRACT

Humans make decisions about food every day. The visual system provides important information that forms a basis for these food decisions. Although previous research has focused on visual object and category representations in the brain, it is still unclear how visually presented food is encoded by the brain. Here, we investigate the time-course of food representations in the brain. We used time-resolved multivariate analyses of electroencephalography (EEG) data, obtained from human participants (both sexes), to determine which food features are represented in the brain and whether focused attention is needed for this. We recorded EEG while participants engaged in two different tasks. In one task, the stimuli were task relevant, whereas in the other task, the stimuli were not task relevant. Our findings indicate that the brain can differentiate between food and nonfood items from ∼112 ms after the stimulus onset. The neural signal at later latencies contained information about food naturalness, how much the food was transformed, as well as the perceived caloric content. This information was present regardless of the task. Information about whether food is immediately ready to eat, however, was only present when the food was task relevant and presented at a slow presentation rate. Furthermore, the recorded brain activity correlated with the behavioral responses in an odd-item-out task. The fast representation of these food features, along with the finding that this information is used to guide food categorization decision-making, suggests that these features are important dimensions along which the representation of foods is organized.
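The time-resolved multivariate analysis described above can be sketched in a few lines of Python. This is an illustration only, not the authors' pipeline (the study may have used a dedicated MVPA toolbox); `epochs`, `labels`, the LDA classifier, and the 5-fold cross-validation are all assumptions. Accuracy rising above chance from roughly 112 ms would correspond to the reported food/nonfood onset.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def time_resolved_decoding(epochs, labels, cv=5):
    """Cross-validated decoding accuracy at every time point.

    epochs: array of shape (n_trials, n_channels, n_times); labels: one condition value per trial.
    """
    n_trials, n_channels, n_times = epochs.shape
    accuracy = np.zeros(n_times)
    for t in range(n_times):
        pattern = epochs[:, :, t]              # sensor pattern at this time point
        clf = LinearDiscriminantAnalysis()
        accuracy[t] = cross_val_score(clf, pattern, labels, cv=cv).mean()
    return accuracy
```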


Subject(s)
Brain , Electroencephalography , Food , Photic Stimulation , Humans , Male , Female , Brain/physiology , Adult , Electroencephalography/methods , Young Adult , Photic Stimulation/methods , Reaction Time/physiology , Time Factors , Attention/physiology , Decision Making/physiology
2.
Neuroimage ; 278: 120269, 2023 09.
Article in English | MEDLINE | ID: mdl-37423272

ABSTRACT

Simulation theories propose that vicarious touch arises when seeing someone else being touched triggers corresponding representations of being touched. Prior electroencephalography (EEG) findings show that seeing touch modulates both early and late somatosensory responses (measured with or without direct tactile stimulation). Functional Magnetic Resonance Imaging (fMRI) studies have shown that seeing touch increases somatosensory cortical activation. These findings have been taken to suggest that when we see someone being touched, we simulate that touch in our sensory systems. The somatosensory overlap when seeing and feeling touch differs between individuals, potentially underpinning variation in vicarious touch experiences. Increases in amplitude (EEG) or cerebral blood flow response (fMRI), however, are limited in that they cannot test for the information contained in the neural signal: seeing touch may not activate the same information as feeling touch. Here, we use time-resolved multivariate pattern analysis on whole-brain EEG data from people with and without vicarious touch experiences to test whether seen touch evokes overlapping neural representations with the first-hand experience of touch. Participants felt touch to the fingers (tactile trials) or watched carefully matched videos of touch to another person's fingers (visual trials). In both groups, EEG was sufficiently sensitive to allow decoding of touch location (little finger vs. thumb) on tactile trials. However, only in individuals who reported feeling touch when watching videos of touch could a classifier trained on tactile trials distinguish touch location on visual trials. This demonstrates that, for people who experience vicarious touch, there is overlap in the information about touch location held in the neural patterns when seeing and feeling touch. The timecourse of this overlap implies that seeing touch evokes similar representations to later stages of tactile processing. Therefore, while simulation may underlie vicarious tactile sensations, our findings suggest this involves an abstracted representation of directly felt touch.
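The cross-modal test at the core of this study, training a classifier on felt touch and testing it on seen touch, could look roughly like the sketch below. The array shapes, variable names, and linear SVM are assumptions for illustration, not the paper's code; sustained above-chance scores on visual trials, as reported here only for the vicarious-touch group, would indicate shared touch-location information.

```python
import numpy as np
from sklearn.svm import LinearSVC

def cross_decode_timecourse(tactile_X, tactile_y, visual_X, visual_y):
    """Train on tactile trials, test on visual trials, separately at each time point.

    X arrays: (n_trials, n_channels, n_times); y arrays: touch location (e.g., 0 = little finger, 1 = thumb).
    """
    n_times = tactile_X.shape[2]
    scores = np.zeros(n_times)
    for t in range(n_times):
        clf = LinearSVC()
        clf.fit(tactile_X[:, :, t], tactile_y)              # learn location from felt touch
        scores[t] = clf.score(visual_X[:, :, t], visual_y)  # test generalisation to seen touch
    return scores
```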


Subject(s)
Touch Perception , Touch , Humans , Touch/physiology , Touch Perception/physiology , Somatosensory Cortex/physiology , Emotions , Brain
3.
Cortex ; 153: 66-86, 2022 08.
Article in English | MEDLINE | ID: mdl-35597052

ABSTRACT

Objects disappearing briefly from sight due to occlusion is an inevitable occurrence in everyday life. Yet we generally have a strong experience that occluded objects continue to exist, despite the fact that they objectively disappear. This indicates that neural object representations must be maintained during dynamic occlusion. However, it is unclear what the nature of such representation is and in particular whether it is perception-like or more abstract, for example, reflecting limited features such as position or movement direction only. In this study, we address this question by examining how different object features such as object shape, luminance, and position are represented in the brain when a moving object is dynamically occluded. We apply multivariate decoding methods to Magnetoencephalography (MEG) data to track how object representations unfold over time. Our methods allow us to contrast the representations of multiple object features during occlusion and enable us to compare the neural responses evoked by visible and occluded objects. The results show that object position information is represented during occlusion to a limited extent while object identity features are not maintained through the period of occlusion. Together, this suggests that the nature of object representations during dynamic occlusion is different from visual representations during perception.
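One way to probe whether representations formed while the object is visible persist during occlusion is cross-condition temporal generalization. The sketch below uses MNE-Python's decoding utilities as an assumed toolchain (the study's actual analysis code is not specified here); `X_visible`, `X_occluded`, and `y_position` are placeholder names.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from mne.decoding import GeneralizingEstimator

def occlusion_generalization(X_visible, X_occluded, y_position):
    """Fit at every time point of the visible period, score at every time point of occlusion.

    X arrays: (n_epochs, n_channels, n_times); returns a train-time x test-time score matrix.
    """
    clf = make_pipeline(StandardScaler(), LinearSVC())
    gen = GeneralizingEstimator(clf, scoring="accuracy", n_jobs=1)
    gen.fit(X_visible, y_position)              # position codes learned while the object is visible
    return gen.score(X_occluded, y_position)    # above-chance cells imply maintained position information
```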


Subject(s)
Visual Cortex , Brain , Brain Mapping , Humans , Magnetic Resonance Imaging/methods , Magnetoencephalography , Pattern Recognition, Visual/physiology , Visual Cortex/physiology
4.
Sci Rep ; 12(1): 6968, 2022 04 28.
Article in English | MEDLINE | ID: mdl-35484363

ABSTRACT

Selective attention prioritises relevant information amongst competing sensory input. Time-resolved electrophysiological studies have shown stronger representation of attended compared to unattended stimuli, which has been interpreted as an effect of attention on information coding. However, because attention is often manipulated by making only the attended stimulus a target to be remembered and/or responded to, many reported attention effects have been confounded with target-related processes such as visual short-term memory or decision-making. In addition, attention effects could be influenced by temporal expectation about when something is likely to happen. The aim of this study was to investigate the dynamic effect of attention on visual processing using multivariate pattern analysis of electroencephalography (EEG) data, while (1) controlling for target-related confounds, and (2) directly investigating the influence of temporal expectation. Participants viewed rapid sequences of overlaid oriented grating pairs while detecting a "target" grating of a particular orientation. We manipulated attention (one grating was attended and the other ignored, cued by colour) and temporal expectation (stimulus onset timing was either predictable or unpredictable). We controlled for target-related processing confounds by only analysing non-target trials. Both attended and ignored gratings were initially coded equally in the pattern of responses across EEG sensors. An effect of attention, with preferential coding of the attended stimulus, emerged approximately 230 ms after stimulus onset. This attention effect occurred even when controlling for target-related processing confounds, and regardless of stimulus onset expectation. These results provide insight into the effect of feature-based attention on the dynamic processing of competing visual information.
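The confound control described above amounts to discarding target trials before decoding, then comparing decoding of the attended versus the ignored grating's orientation at each time point. A rough sketch, with all variable names and the classifier assumed rather than taken from the paper:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def decode_over_time(epochs, labels, cv=5):
    """Cross-validated accuracy per time point; epochs: (n_trials, n_channels, n_times)."""
    return np.array([
        cross_val_score(LinearDiscriminantAnalysis(), epochs[:, :, t], labels, cv=cv).mean()
        for t in range(epochs.shape[2])
    ])

def attention_effect(epochs, attended_ori, ignored_ori, is_target):
    keep = ~is_target                                   # drop target trials: avoids memory/decision confounds
    attended = decode_over_time(epochs[keep], attended_ori[keep])
    ignored = decode_over_time(epochs[keep], ignored_ori[keep])
    return attended - ignored                           # positive values from ~230 ms would mirror the reported effect
```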


Subject(s)
Attention , Motivation , Attention/physiology , Cues , Electroencephalography , Humans , Visual Perception/physiology
5.
Proc Natl Acad Sci U S A ; 118(6)2021 02 09.
Article in English | MEDLINE | ID: mdl-33526693

ABSTRACT

Grapheme-color synesthetes experience color when seeing achromatic symbols. We examined whether similar neural mechanisms underlie color perception and synesthetic colors using magnetoencephalography. Classification models trained on neural activity from viewing colored stimuli could distinguish synesthetic color evoked by achromatic symbols after a delay of ∼100 ms. Our results provide an objective neural signature for synesthetic experience and temporal evidence consistent with higher-level processing in synesthesia.
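The reported ~100 ms delay implies that representations of real colour generalise to synesthetic-colour trials at later test times. Given a train-time by test-time cross-decoding matrix (classifiers trained on coloured stimuli, tested on achromatic symbols), the lag could be summarised roughly as below; the variable names and the summary statistic are assumptions, not the authors' analysis.

```python
import numpy as np

def estimate_lag(scores, train_times, test_times):
    """scores: (n_train_times, n_test_times) cross-decoding matrix; time vectors in seconds."""
    best_test = test_times[np.argmax(scores, axis=1)]   # test time where each training time generalises best
    return np.median(best_test - train_times)           # positive lag: synesthetic colour emerges later
```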


Subject(s)
Color Perception/physiology , Pattern Recognition, Visual/physiology , Synesthesia/physiopathology , Adolescent , Adult , Aged , Female , Humans , Magnetoencephalography , Male , Middle Aged , Photic Stimulation , Reaction Time/physiology , Synesthesia/diagnostic imaging , Young Adult
6.
J Speech Lang Hear Res ; 63(7): 2361-2385, 2020 07 20.
Article in English | MEDLINE | ID: mdl-32640176

ABSTRACT

Purpose: We aimed to develop a noninvasive neural test of language comprehension to use with nonspeaking children for whom standard behavioral testing is unreliable (e.g., minimally verbal autism). Our aims were threefold. First, we sought to establish the sensitivity of two auditory paradigms to elicit neural responses in individual neurotypical children. Second, we aimed to validate the use of a portable and accessible electroencephalography (EEG) system, by comparing its recordings to those of a research-grade system. Third, in light of substantial interindividual variability in neural responses, we assessed whether multivariate decoding methods could improve sensitivity.

Method: We tested the sensitivity of two child-friendly covert N400 paradigms. Thirty-one typically developing children listened to identical spoken words that were either strongly predicted by the preceding context or violated lexical-semantic expectations. Context was given by a cue word (Experiment 1) or sentence frame (Experiment 2), and participants either made an overall judgment on word relatedness or counted lexical-semantic violations. We measured EEG concurrently from a research-grade system, Neuroscan's SynAmps2, and an adapted gaming system, Emotiv's EPOC+.

Results: We found substantial interindividual variability in the timing and topography of N400-like effects. For both paradigms and EEG systems, traditional N400 effects at the expected sensors and time points were statistically significant in around 50% of individuals. Using multivariate analyses, the detection rate increased to 88% of individuals for the research-grade system in the sentences paradigm, illustrating the robustness of this method in the face of interindividual variations in topography.

Conclusions: There was large interindividual variability in neural responses, suggesting interindividual variation in the cognitive response to lexical-semantic violations, in the neural substrate of that response, or in both. Around half of our neurotypical participants showed the expected N400 effect at the expected location and time points. A low-cost, accessible EEG system provided comparable data for univariate analysis but was not well suited to multivariate decoding. However, multivariate analyses with a research-grade EEG system increased our detection rate to 88% of individuals. This approach provides a strong foundation to establish a neural index of language comprehension in children with limited communication.

Supplemental Material: https://doi.org/10.23641/asha.12606311
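The jump from roughly 50% (univariate) to 88% (multivariate) detection reflects testing each child individually with a decoding statistic. A minimal sketch of such an individual-level test, assuming a trials-by-features matrix and a label-permutation null distribution (not the paper's exact procedure):

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def individual_decoding_test(X, y, n_perm=1000, cv=5, seed=0):
    """X: (n_trials, n_features), e.g., sensors x time window flattened; y: congruent vs violation labels.

    Returns the observed cross-validated accuracy and a permutation p-value for one participant.
    """
    rng = np.random.default_rng(seed)
    observed = cross_val_score(LinearSVC(), X, y, cv=cv).mean()
    null = np.array([
        cross_val_score(LinearSVC(), X, rng.permutation(y), cv=cv).mean()
        for _ in range(n_perm)
    ])
    p_value = (np.sum(null >= observed) + 1) / (n_perm + 1)
    return observed, p_value
```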


Subject(s)
Electroencephalography , Language , Child , Comprehension , Evoked Potentials , Female , Humans , Male , Semantics
7.
Atten Percept Psychophys ; 81(8): 2873-2880, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31165455

ABSTRACT

Holistic processing, demonstrated by a failure of selective attention to individual parts within stimuli, is often considered a relatively unique feature of the processing of faces and objects of expertise. However, face-like holistic processing has been recently demonstrated for novel line stimuli with salient Gestalt perceptual grouping cues. Further, disrupting such cues within face stimuli disrupts holistic face perception. There is evidence that holistic processing of these gestalt stimuli and faces does not overlap mechanistically in the same way as does the processing of faces and objects of expertise. However, the relationship between these different manifestations of holistic processing is unclear. We developed a task to probe whether a holistic processing-specific overlap occurs at an earlier, perceptual level between the mechanisms supporting processing of faces and strong gestalt stimuli. Faces and gestalt line stimuli were overlaid, and participants made part judgments about either the faces (Experiment 1) or line stimuli (Experiment 2) in a composite task indexing holistic perception. The data revealed evidence of reciprocal interference between holistic processing of line and face stimuli, with indices of holistic processing of face and line stimuli reduced when the overlaid stimuli were also processed holistically (e.g., intact line/face stimuli) compared with when the overlaid stimuli did not commandeer holistic processing resources (e.g., misaligned line/face stimuli). This pattern is consistent with a mechanistic overlap between the holistic perception of faces and gestalt stimuli. Our results support a dual-pathway model of holistic processing, comprising a stimulus-based and an experience-based route, with face stimuli engaging both.


Subject(s)
Facial Recognition/physiology , Judgment/physiology , Photic Stimulation/methods , Adolescent , Adult , Attention/physiology , Female , Gestalt Theory , Holistic Health , Humans , Male , Young Adult
8.
Atten Percept Psychophys ; 81(5): 1283-1296, 2019 Jul.
Article in English | MEDLINE | ID: mdl-30825115

ABSTRACT

Radiologists make critical decisions based on searching and interpreting medical images. The probability of a lung nodule differs across anatomical regions within the chest, raising the possibility that radiologists might have a prior expectation that creates an attentional bias. The development of expertise is also thought to cause "tuning" to relevant features, allowing radiologists to become faster and more accurate at detecting potential masses within their domain of expertise. Here, we tested both radiologists and control participants with a novel attentional-cueing paradigm to investigate whether the deployment of attention was affected (1) by a context that might invoke prior knowledge for experts, (2) by a nodule localized either on the same or on opposite sides as a subsequent target, and (3) by inversion of the nodule-present chest radiographs, to assess the orientation specificity of any effects. The participants also performed a nodule detection task to verify that our presentation duration was sufficient to extract diagnostic information. We saw no evidence of priors triggered by a normal chest radiograph cue affecting attention. When the cue was an upright abnormal chest radiograph, radiologists were faster when the lateralised nodule and the subsequent target appeared at the same rather than at opposite locations, suggesting attention was captured by the nodule. The opposite pattern was present for inverted images. We saw no evidence of cueing for control participants in any condition, which suggests that radiologists are indeed more sensitive to visual features that are not perceived as salient by naïve observers.


Subject(s)
Attention , Clinical Competence , Lung Neoplasms/diagnostic imaging , Radiography/psychology , Radiologists/psychology , Adult , Cues , Female , Humans , Male , Middle Aged , Orientation , Recognition, Psychology , Sensitivity and Specificity
9.
Br J Psychol ; 110(2): 428-448, 2019 May.
Article in English | MEDLINE | ID: mdl-30006984

ABSTRACT

Previous research has identified numerous factors affecting the capacity and accuracy of visual working memory (VWM). One potentially important factor is the emotionality of the stimuli to be encoded and held in VWM. We must often hold emotionally charged information in VWM, but much is still unknown about how the emotionality of stimuli impacts VWM performance. In the current research, we performed four studies examining the impact of fearful facial expressions on VWM for faces. Fearful expressions were found to produce a consistent cost to VWM performance. This cost was modulated by encoding time but not by set size, and was only present for faces in an upright orientation, consistent with it being a product of the emotionality of the faces rather than of lower-level perceptual differences between neutral and fearful faces. These findings are discussed in the context of existing theoretical accounts of the impact of emotion on information processing. We suggest that a number of competing effects, driving both costs and benefits, are at play when emotional information must be stored in VWM, with the task context determining the balance between them.


Subject(s)
Facial Expression , Facial Recognition/physiology , Fear/physiology , Memory, Short-Term/physiology , Social Perception , Adult , Female , Humans , Male , Young Adult
10.
Atten Percept Psychophys ; 81(3): 716-726, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30569435

ABSTRACT

Holistic processing is often considered to be limited to faces and non-face objects of expertise, with previous studies revealing a specific mechanistic overlap between the holistic processing of these stimuli. However, more recently holistic processing has been demonstrated for untrained, novel stimuli containing salient Gestalt perceptual grouping cues. The relationship between the holistic processing of these novel stimuli and of faces is unclear. Here we examine whether there is a mechanistic overlap between the holistic processing of these two stimulus categories. To do this we used the same two-back interleaved part-matching task previously used to examine the mechanistic overlap between the processing of faces and of objects of expertise. Concurrent holistic processing of these salient Gestalt stimuli did not impact (Experiment 1), nor was it impacted by (Experiment 2), holistic processing of face stimuli. This suggests that the nature of the overlap between holistic processing of faces and of salient Gestalt stimuli may be distinct from that between objects of expertise and faces. We discuss potential mechanistic accounts of this difference.


Subject(s)
Face , Gestalt Theory , Visual Perception , Adult , Cues , Facial Recognition , Female , Humans , Male , Photic Stimulation
11.
J Vis ; 16(3): 36, 2016.
Article in English | MEDLINE | ID: mdl-26913628

ABSTRACT

Visual orientation discrimination is known to improve with extensive training, but the mechanisms underlying this behavioral benefit remain poorly understood. Here, we examine the possibility that more reliable task performance could arise in part because observers learn to sample information from a larger portion of the stimulus. We used a variant of the classification image method in combination with a global orientation discrimination task to test whether a change in information sampling underlies training-based benefits in behavioral performance. The results revealed that decreases in orientation thresholds with perceptual learning were accompanied by increases in stimulus sampling. In particular, while stimulus sampling was restricted to the parafoveal, inner portion of the stimulus before training, we observed an outward spread of sampling after training. These results demonstrate that the benefits of perceptual learning may arise, in part, from a strategic increase in the efficiency with which the observer samples information from a visual stimulus.
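The classification image logic referred to above can be written down compactly: weight each noise pixel by how strongly it covaried with the observer's responses. The data layout below is assumed for illustration and is not the study's code; a broader region of reliable weights after training would reflect the outward spread of sampling reported here.

```python
import numpy as np

def classification_image(noise_fields, responses):
    """Estimate which stimulus locations drove the observer's decisions.

    noise_fields: (n_trials, n_pixels) noise added to each stimulus (rows reshapeable to the image).
    responses:    (n_trials,) binary decisions, e.g., 1 = clockwise, 0 = counterclockwise.
    """
    z = (responses - responses.mean()) / responses.std()   # z-scored responses
    weights = noise_fields.T @ z / len(responses)           # correlation-like weight per pixel
    return weights
```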


Subject(s)
Learning/physiology , Orientation , Visual Perception/physiology , Adult , Discrimination, Psychological , Female , Humans , Task Performance and Analysis , Young Adult