Results 1 - 20 of 22
1.
Neuropsychologia ; 199: 108900, 2024 07 04.
Article in English | MEDLINE | ID: mdl-38697558

ABSTRACT

Whilst previous research has linked attenuation of the mu rhythm to the observation of specific visual categories, and even to a potential role in action observation via a putative mirror neuron system, much of this work has not considered what specific type of information might be coded in this oscillatory response when triggered via vision. Here, we sought to determine whether the mu rhythm contains content-specific information about the identity of familiar (and unfamiliar) graspable objects. In the present study, right-handed participants (N = 27) viewed images of both familiar (apple, wine glass) and unfamiliar (cubie, smoothie) graspable objects whilst performing an orthogonal task at fixation. Multivariate pattern analysis (MVPA) revealed significant decoding of familiar, but not unfamiliar, visual object categories in the mu rhythm response. Thus, simply viewing familiar graspable objects may automatically trigger activation of associated tactile and/or motor properties in sensorimotor areas, reflected in the mu rhythm. In addition, we report significant attenuation in the central beta band for both familiar and unfamiliar visual objects, but not in the mu rhythm. Our findings highlight how analysing two different aspects of the oscillatory response - either attenuation or the representation of information content - provides complementary views on the role of the mu rhythm in response to viewing graspable object categories.
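
As a rough illustration of the kind of analysis described above, the sketch below runs a mu-band (8-13 Hz) MVPA decoding pipeline on synthetic EEG epochs with scikit-learn. The sampling rate, channel count, band limits, and linear SVM classifier are assumptions made for the example, not details taken from the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
fs = 500                                     # sampling rate (Hz), assumed
n_trials, n_chans, n_times = 200, 10, 500    # 1 s epochs over central sensors (toy data)
X_raw = rng.standard_normal((n_trials, n_chans, n_times))
y = rng.integers(0, 2, n_trials)             # 0 vs. 1: two familiar object categories

# Band-pass the epochs to the mu band (8-13 Hz) and take log power per channel.
b, a = butter(4, [8, 13], btype="bandpass", fs=fs)
X_mu = filtfilt(b, a, X_raw, axis=-1)
features = np.log((X_mu ** 2).mean(axis=-1))   # trials x channels

# Linear classifier with stratified cross-validation, as in typical MVPA pipelines.
clf = SVC(kernel="linear")
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, features, y, cv=cv)
print(f"Mu-band decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```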


Subject(s)
Recognition, Psychology; Humans; Male; Female; Young Adult; Adult; Recognition, Psychology/physiology; Brain Waves/physiology; Electroencephalography; Pattern Recognition, Visual/physiology; Photic Stimulation
2.
Sci Rep ; 14(1): 9402, 2024 04 24.
Article in English | MEDLINE | ID: mdl-38658575

ABSTRACT

Perceptual decisions are derived from the combination of priors and sensory input. While priors are broadly understood to reflect experience/expertise developed over one's lifetime, the role of perceptual expertise at the individual level has seldom been directly explored. Here, we manipulate probabilistic information associated with a high- and a low-expertise category (faces and cars, respectively), while assessing individual levels of expertise with each category. Sixty-seven participants learned the probabilistic association between a color cue and each target category (face/car) in a behavioural categorization task. Neural activity (EEG) was then recorded from the same participants in a similar paradigm featuring the previously learned contingencies, but without the explicit task. Behaviourally, perception of the higher-expertise category (faces) was modulated by expectation. Specifically, we observed facilitatory and interference effects when targets were correctly or incorrectly expected, which were also associated with independently measured individual levels of face expertise. Multivariate pattern analysis of the EEG signal revealed clear effects of expectation from 100 ms post stimulus, with significant decoding of the neural response to expected vs. unexpected stimuli when viewing identical images. The latency of peak decoding when participants saw faces was directly associated with individual-level facilitation effects in the behavioural task. The current results not only provide time-sensitive evidence of expectation effects on early perception but also highlight the role of higher-level expertise in forming priors.
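
The sketch below illustrates a generic time-resolved EEG decoding analysis of the sort the abstract describes: a classifier is trained and cross-validated at every time point, and the latency of peak decoding is read off the resulting time course. All data are synthetic, and the choice of a linear discriminant classifier is an assumption for the example.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(1)
fs = 250
times = np.arange(-0.1, 0.5, 1 / fs)          # epoch time axis (s), assumed
n_trials, n_chans = 160, 64
X = rng.standard_normal((n_trials, n_chans, times.size))
y = rng.integers(0, 2, n_trials)              # expected vs. unexpected stimulus (toy labels)

# Fit and score a classifier independently at each time point.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
scores = np.array([
    cross_val_score(LinearDiscriminantAnalysis(), X[:, :, t], y, cv=cv).mean()
    for t in range(times.size)
])

peak_latency = times[scores.argmax()]
print(f"Peak decoding {scores.max():.2f} at {peak_latency * 1000:.0f} ms post stimulus")
```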


Subject(s)
Electroencephalography; Facial Recognition; Humans; Male; Female; Adult; Facial Recognition/physiology; Young Adult; Photic Stimulation; Reaction Time/physiology; Visual Perception/physiology; Face/physiology
3.
Biology (Basel) ; 12(7), 2023 Jul 20.
Article in English | MEDLINE | ID: mdl-37508451

ABSTRACT

Neurons in the primary visual cortex (V1) receive sensory inputs that describe small, local regions of the visual scene and cortical feedback inputs from higher visual areas processing the global scene context. Investigating the spatial precision of this visual contextual modulation will contribute to our understanding of the functional role of cortical feedback inputs in perceptual computations. We used human functional magnetic resonance imaging (fMRI) to test the spatial precision of contextual feedback inputs to V1 during natural scene processing. We measured brain activity patterns in the stimulated regions of V1 and in regions that we blocked from direct feedforward input, receiving information only from non-feedforward (i.e., feedback and lateral) inputs. We measured the spatial precision of contextual feedback signals by generalising brain activity patterns across parametrically spatially displaced versions of identical images using an MVPA cross-classification approach. We found that fMRI activity patterns in cortical feedback signals predicted our scene-specific features in V1 with a precision of approximately 4 degrees. The stimulated regions of V1 carried more precise scene information than non-stimulated regions; however, these regions also contained information patterns that generalised up to 4 degrees. This result shows that contextual signals relating to the global scene are similarly fed back to V1 when feedforward inputs are either present or absent. Our results are in line with contextual feedback signals from extrastriate areas to V1, describing global scene information and contributing to perceptual computations such as the hierarchical representation of feature boundaries within natural scenes.
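
A minimal sketch of an MVPA cross-classification scheme like the one described above: a classifier trained on activity patterns from one image displacement is tested on patterns from other displacements, so that above-chance transfer indicates position-tolerant scene information. The voxel patterns, displacement values, and signal model below are toy assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_trials_per_scene, n_voxels = 30, 200
scenes = [0, 1, 2]                     # three scene categories (toy labels)
displacements = [0, 2, 4]              # image displacement in degrees (assumed values)
base_patterns = {s: rng.standard_normal(n_voxels) for s in scenes}

def simulate(shift):
    """Toy V1 patterns: the scene-specific signal weakens as the image is displaced."""
    gain = max(0.0, 1.0 - shift / 8.0)
    X = np.vstack([gain * base_patterns[s]
                   + rng.standard_normal((n_trials_per_scene, n_voxels))
                   for s in scenes])
    y = np.repeat(scenes, n_trials_per_scene)
    return X, y

# Cross-classification: train on patterns from one displacement, test on another;
# above-chance transfer indicates scene information that generalises across position.
for train_shift in displacements:
    X_tr, y_tr = simulate(train_shift)
    clf = SVC(kernel="linear").fit(X_tr, y_tr)
    for test_shift in displacements:
        X_te, y_te = simulate(test_shift)
        print(f"train {train_shift}°, test {test_shift}°: accuracy {clf.score(X_te, y_te):.2f}")
```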

4.
Cortex ; 159: 299-312, 2023 02.
Article in English | MEDLINE | ID: mdl-36669447

ABSTRACT

Although humans are considered to be face experts, there is well-established, reliable variation in the degree to which neurotypical individuals are able to learn and recognise faces. While many behavioural studies have characterised these differences, studies that seek to relate the neuronal response to standardised behavioural measures of ability remain relatively scarce, particularly for time-resolved approaches and the early response to face stimuli. In the present study we make use of a relatively recent methodological advance, multivariate pattern analysis (MVPA), to decode the time course of the neural response to faces compared to other stimulus categories (inverted faces, objects). Importantly, for the first time, we directly relate metrics of this decoding, assessed at the individual level, to gold-standard measures of behavioural face processing ability assessed in an independent task. Thirty-nine participants completed the behavioural Cambridge Face Memory Test (CFMT), then viewed images of faces and houses (presented upright and inverted) while their neural activity was measured via electroencephalography. Significant decoding of both face orientation and face category was observed in all individual participants. Decoding of face orientation, a marker of more advanced face processing, was earlier and stronger in participants with higher levels of face expertise, while decoding of face category information was earlier, but not stronger, for individuals with greater face expertise. Taken together, these results provide a marker of significant differences in the early neuronal response to faces, from around 100 ms post stimulus, as a function of behavioural expertise with faces.
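
The snippet below sketches the final step of relating individual-level decoding to behaviour: estimating a decoding-onset latency per participant and correlating it with CFMT scores. The time courses, the onset threshold, and the use of a Pearson correlation are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n_participants = 39
times = np.arange(0, 0.4, 0.004)             # decoding time axis (s), assumed

# Per-participant decoding time courses for face orientation (toy data);
# in a real analysis these would come from individual-level EEG MVPA.
timecourses = 0.5 + 0.1 * rng.random((n_participants, times.size))
cfmt = rng.integers(40, 73, n_participants)  # CFMT scores out of 72 (toy values)

# Onset = first time point where decoding exceeds an (assumed) threshold above chance.
threshold = 0.55
onsets = np.array([times[np.argmax(tc > threshold)] if (tc > threshold).any() else np.nan
                   for tc in timecourses])

valid = ~np.isnan(onsets)
r, p = pearsonr(onsets[valid], cfmt[valid])
print(f"Onset latency vs. CFMT: r = {r:.2f}, p = {p:.3f}")
```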


Subject(s)
Facial Recognition; Humans; Facial Recognition/physiology; Electroencephalography; Learning; Orientation, Spatial; Pattern Recognition, Visual/physiology; Photic Stimulation/methods
5.
Cereb Cortex ; 33(7): 3621-3635, 2023 03 21.
Article in English | MEDLINE | ID: mdl-36045002

ABSTRACT

Neurons, even in the earliest sensory regions of cortex, are subject to a great deal of contextual influence from both within- and across-modality connections. Recent work has shown that primary sensory areas can respond to and, in some cases, discriminate stimuli that are not of their target modality: for example, primary somatosensory cortex (SI) discriminates visual images of graspable objects. In the present work, we investigated whether SI would discriminate sounds depicting hand-object interactions (e.g. bouncing a ball). In a rapid event-related functional magnetic resonance imaging experiment, participants listened attentively to sounds from 3 categories: hand-object interactions, and control categories of pure tones and animal vocalizations, while performing a one-back repetition detection task. Multivoxel pattern analysis revealed significant decoding of hand-object interaction sounds within SI, but not of either control category. Crucially, in the hand-sensitive voxels defined from an independent tactile localizer, decoding accuracies were significantly higher for hand-object interactions compared to pure tones in left SI. Our findings indicate that simply hearing sounds depicting familiar hand-object interactions elicits different patterns of activity in SI, despite the complete absence of tactile stimulation. These results highlight the rich contextual information that can be transmitted across sensory modalities even to primary sensory areas.
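
The sketch below shows a generic ROI-based fMRI decoding analysis of the kind reported here, using nilearn's NiftiMasker to extract trial-wise voxel patterns from a somatosensory region of interest and a cross-validated linear SVM to classify sound category. The images, mask, and labels are synthetic stand-ins, not data from the study.

```python
import numpy as np
import nibabel as nib
from nilearn.maskers import NiftiMasker
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
shape, n_trials = (10, 10, 10), 60

# Toy 4D fMRI data (one volume per trial) and a binary SI region-of-interest mask;
# in a real study both would come from the scanner and an independent tactile localizer.
bold = nib.Nifti1Image(rng.standard_normal(shape + (n_trials,)), affine=np.eye(4))
roi = np.zeros(shape)
roi[3:7, 3:7, 3:7] = 1
roi_img = nib.Nifti1Image(roi, affine=np.eye(4))
y = rng.integers(0, 2, n_trials)     # hand-object interaction sound vs. pure tone (toy labels)

# Extract trial-wise voxel patterns within the ROI and decode sound category.
masker = NiftiMasker(mask_img=roi_img, standardize=True)
X = masker.fit_transform(bold)       # trials x voxels
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print(f"SI decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```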


Subject(s)
Hand; Somatosensory Cortex; Animals; Somatosensory Cortex/diagnostic imaging; Somatosensory Cortex/physiology; Touch/physiology; Neurons/physiology; Magnetic Resonance Imaging; Brain Mapping
6.
Sci Rep ; 12(1): 9042, 2022 06 05.
Article in English | MEDLINE | ID: mdl-35662252

ABSTRACT

Intelligent manipulation of handheld tools marks a major discontinuity between humans and our closest ancestors. Here we identified neural representations of how tools are typically manipulated within left anterior temporal cortex, by shifting a searchlight classifier through whole-brain real-action fMRI data acquired while participants grasped 3D-printed tools in ways considered typical for use (i.e., by their handle). These neural representations were evoked automatically, as task performance did not require semantic processing. In fact, findings from a behavioural motion-capture experiment confirmed that actions with tools (relative to non-tools) incurred additional processing costs, as would be expected if semantic areas were being automatically engaged. These results substantiate theories of semantic cognition that claim the anterior temporal cortex combines sensorimotor and semantic content for advanced behaviours like tool manipulation.
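
Below is a minimal sketch of a whole-brain searchlight decoding analysis using nilearn's SearchLight, the general approach named in the abstract. The toy images, brain mask, searchlight radius, and cross-validation scheme are all assumptions for the example.

```python
import numpy as np
import nibabel as nib
from nilearn.decoding import SearchLight
from sklearn.model_selection import KFold

rng = np.random.default_rng(5)
shape, n_trials = (12, 12, 12), 40

# Toy whole-brain trial patterns and a brain mask; a real analysis would use
# beta images from a real-action fMRI experiment (e.g. typical vs. atypical grasps).
bold = nib.Nifti1Image(rng.standard_normal(shape + (n_trials,)), affine=np.eye(4))
mask = nib.Nifti1Image(np.ones(shape), affine=np.eye(4))
y = rng.integers(0, 2, n_trials)

# Shift a spherical searchlight (radius in mm, assumed) across the volume,
# fitting the default linear classifier at each centre voxel.
sl = SearchLight(mask_img=mask, radius=4.0, cv=KFold(n_splits=4), n_jobs=1)
sl.fit(bold, y)
print("Peak local decoding accuracy:", sl.scores_.max())
```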


Subject(s)
Brain Mapping; Magnetic Resonance Imaging; Brain Mapping/methods; Humans; Magnetic Resonance Imaging/methods; Multivariate Analysis; Semantics; Temporal Lobe/diagnostic imaging
7.
Sci Rep ; 11(1): 14357, 2021 07 13.
Article in English | MEDLINE | ID: mdl-34257357

ABSTRACT

Studies of the low-level visual information underlying pain categorization have led to inconsistent findings: some show an advantage for low spatial frequencies (SFs) and others a preponderance of mid SFs. This study aims to clarify this gap in knowledge, since these results have different theoretical and practical implications, such as how far away an observer can be and still categorize pain. We address this question using two complementary methods: a data-driven method without a priori expectations about the most useful SFs for pain recognition, and a more ecological method that simulates the distance at which stimuli are presented. We reveal a broad range of SFs important for pain recognition, extending from low to relatively high SFs, and show that performance is optimal at short to medium distances (1.2-4.8 m) but declines significantly when mid SFs are no longer available. This study reconciles previous results showing an advantage of low over high SFs when arbitrary cutoffs are used, but above all reveals the prominent role of mid SFs in pain recognition across two complementary experimental tasks.
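
The function below sketches one simple way to simulate viewing distance by low-pass filtering an image in the Fourier domain: as distance increases, the face subtends a smaller visual angle, so fewer cycles per image remain resolvable. The acuity limit, face width, and filter shape are toy assumptions, not the procedure used in the study.

```python
import numpy as np

def simulate_viewing_distance(image, distance_m, face_width_m=0.15, cpd_limit=30.0):
    """Low-pass filter an image so that only spatial frequencies resolvable at a given
    viewing distance remain (toy model). Cutoff in cycles/image = acuity limit (cycles
    per degree, assumed) times the visual angle subtended by the face."""
    angle_deg = np.degrees(2 * np.arctan(face_width_m / (2 * distance_m)))
    cutoff_cycles = cpd_limit * angle_deg
    ny, nx = image.shape
    fy = np.fft.fftfreq(ny) * ny                   # frequencies in cycles per image
    fx = np.fft.fftfreq(nx) * nx
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    spectrum = np.fft.fft2(image)
    spectrum[radius > cutoff_cycles] = 0           # remove unresolvable frequencies
    return np.real(np.fft.ifft2(spectrum))

# Example: filter a toy 256x256 image at distances similar to those tested behaviourally.
img = np.random.default_rng(6).random((256, 256))
for d in (1.2, 2.4, 4.8, 9.6):
    filtered = simulate_viewing_distance(img, d)
    print(f"{d} m: retained variance {np.var(filtered) / np.var(img):.2f}")
```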


Subject(s)
Emotions; Facial Expression; Facial Pain/classification; Facial Pain/diagnosis; Pattern Recognition, Visual; Psychophysics/methods; Adolescent; Adult; Distance Perception; Face; Facial Recognition; Female; Humans; Knowledge; Male; Normal Distribution; Recognition, Psychology; Reproducibility of Results; Young Adult
8.
J Neurosci ; 41(24): 5263-5273, 2021 06 16.
Article in English | MEDLINE | ID: mdl-33972399

ABSTRACT

Most neuroimaging experiments that investigate how tools and their actions are represented in the brain use visual paradigms where tools or hands are displayed as 2D images and no real movements are performed. These studies discovered selective visual responses in occipitotemporal and parietal cortices for viewing pictures of hands or tools, which are assumed to reflect action processing, but this has rarely been directly investigated. Here, we examined the responses of independently visually defined category-selective brain areas when participants grasped 3D tools (N = 20; 9 females). Using real-action fMRI and multivoxel pattern analysis, we found that grasp typicality representations (i.e., whether a tool is grasped appropriately for use) were decodable from hand-selective areas in occipitotemporal and parietal cortices, but not from tool-, object-, or body-selective areas, even if partially overlapping. Importantly, these effects were exclusive for actions with tools, but not for biomechanically matched actions with control nontools. In addition, grasp typicality decoding was significantly higher in hand than tool-selective parietal regions. Notably, grasp typicality representations were automatically evoked even when there was no requirement for tool use and participants were naive to object category (tool vs nontools). Finding a specificity for typical tool grasping in hand-selective, rather than tool-selective, regions challenges the long-standing assumption that activation for viewing tool images reflects sensorimotor processing linked to tool manipulation. Instead, our results show that typicality representations for tool grasping are automatically evoked in visual regions specialized for representing the human hand, the primary tool of the brain for interacting with the world.
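
As an illustration of comparing decoding accuracy between two independently defined regions (e.g., hand- versus tool-selective areas), the sketch below runs a paired sign-flip permutation test on per-participant accuracy differences. The accuracy values and the specific test are assumptions for the example, not the statistics reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n_subjects = 20

# Per-participant decoding accuracies for grasp typicality in two ROIs (toy numbers);
# in a real analysis these would come from within-subject MVPA in localised regions.
acc_hand = 0.58 + 0.05 * rng.standard_normal(n_subjects)
acc_tool = 0.52 + 0.05 * rng.standard_normal(n_subjects)

# Paired sign-flip permutation test on the mean accuracy difference.
diffs = acc_hand - acc_tool
observed = diffs.mean()
null = np.array([(diffs * rng.choice([-1, 1], n_subjects)).mean() for _ in range(10000)])
p = (np.abs(null) >= np.abs(observed)).mean()
print(f"Hand-selective minus tool-selective accuracy: {observed:.3f}, p = {p:.4f}")
```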


Subject(s)
Brain Mapping/methods; Hand/physiology; Imaging, Three-Dimensional/methods; Psychomotor Performance/physiology; Adolescent; Adult; Brain/physiology; Female; Hand Strength/physiology; Humans; Magnetic Resonance Imaging; Male; Young Adult
9.
Neuropsychologia ; 142: 107440, 2020 05.
Article in English | MEDLINE | ID: mdl-32179101

ABSTRACT

Face recognition ability is often reported to be a relative strength in Williams syndrome (WS). Yet methodological issues associated with the supporting research, and evidence that atypical face processing mechanisms may drive outcomes 'in the typical range', challenge these simplistic characterisations of this important social ability. Detailed investigations of face processing abilities in WS, at both the behavioural and neural level, provide critical insights. Here, we behaviourally characterised face recognition ability in 18 individuals with WS relative to typically developing child and adult control groups. A subset of 11 participants with WS, as well as chronologically age-matched typical adults, further took part in an EEG task in which they were asked to attentively view a series of upright and inverted faces and houses. State-of-the-art multivariate pattern analysis (MVPA) was used alongside standard ERP analysis to obtain a detailed characterisation of the neural profile associated with (1) viewing faces as an overall category (by examining neural activity associated with upright faces and houses), and (2) the canonical upright configuration of a face, critically associated with expertise in typical development and often linked with holistic processing (upright and inverted faces). Our results show that while face recognition ability is not, on average, at a chronological age-appropriate level in individuals with WS, it nonetheless appears to be a relative strength within their cognitive profile. Furthermore, all participants with WS revealed a differential pattern of neural activity to faces compared to objects, showing a distinct response to faces as a category, as well as a differential neural pattern for upright vs. inverted faces. Nonetheless, an atypical profile of face orientation classification was found in WS, suggesting that this group differs from typical individuals in their face processing mechanisms. Through this innovative application of MVPA, alongside the high temporal resolution of EEG, we provide important new insights into the neural processing of faces in WS.


Subject(s)
Facial Recognition; Williams Syndrome; Adult; Child; Electroencephalography; Evoked Potentials; Humans; Orientation; Orientation, Spatial; Pattern Recognition, Visual; Photic Stimulation
10.
Neuroimage ; 211: 116660, 2020 05 01.
Article in English | MEDLINE | ID: mdl-32081784

ABSTRACT

Rapidly and accurately processing information from faces is a critical human function that is known to improve with developmental age. Understanding the underlying drivers of this improvement remains a contentious question, with debate continuing as to the presence of early vs. late maturation of face-processing mechanisms. Recent behavioural evidence suggests that an important 'hallmark' of expert face processing - the face inversion effect - is present in very young children, yet neural support for this remains unclear. To address this, we conducted a detailed investigation of the neural dynamics of face processing in children spanning a range of ages (6-11 years) and in adults. Uniquely, we applied multivariate pattern analysis (MVPA) to the electroencephalography (EEG) signal to test for the presence of a distinct neural profile associated with canonical upright faces when compared both to other objects (houses) and to inverted faces. Results revealed robust discrimination profiles, at the individual level, of differentiated neural activity associated with broad face categorization and, further, with its expert processing, as indexed by the face inversion effect, from the youngest ages tested. This result is consistent with an early functional maturation of broad face-processing mechanisms. Yet clear quantitative differences between the response profiles of children and adults are suggestive of age-related refinement of this system with developing face and general expertise. Standard ERP analysis also provides some support for qualitative differences in the neural response to inverted faces in children in contrast to adults. This neural profile is in line with recent behavioural studies that have reported impressively expert face abilities early in childhood, while also providing novel evidence of ongoing neural specialisation between childhood and adulthood.


Subject(s)
Child Development/physiology; Electroencephalography/methods; Evoked Potentials/physiology; Facial Recognition/physiology; Social Perception; Adult; Child; Female; Humans; Male; Young Adult
11.
Neuroimage ; 195: 261-271, 2019 07 15.
Article in English | MEDLINE | ID: mdl-30940611

ABSTRACT

Faces transmit a wealth of important social signals. While previous studies have elucidated the network of cortical regions important for the perception of facial expression, and the associated temporal components such as the P100, N170 and EPN, it is still unclear how task constraints may shape the representation of facial expression (or other face categories) in these networks. In the present experiment, we used Multivariate Pattern Analysis (MVPA) with EEG to investigate the neural information available across time about two important face categories (expression and identity) when those categories are perceived under either explicit (e.g. decoding facial expression category from the EEG when the task is on expression) or incidental task contexts (e.g. decoding facial expression category from the EEG when the task is on identity). Decoding of both face categories, across both task contexts, peaked in time-windows spanning 91-170 ms (across posterior electrodes). Peak decoding of expression, however, was not affected by task context, whereas peak decoding of identity was significantly reduced under incidental processing conditions. In addition, errors in EEG decoding correlated with errors in behavioral categorization under explicit processing for both expression and identity; under incidental conditions, however, only errors in EEG decoding of expression correlated with behavior. Furthermore, decoding time-courses and the spatial pattern of informative electrodes showed consistently better decoding of identity under explicit conditions at later time periods, with weak evidence for similar effects for decoding of expression at isolated time-windows. Taken together, these results reveal differences and commonalities in the processing of face categories under explicit vs. incidental task contexts and suggest that facial expressions are processed to a richer degree under incidental processing conditions, consistent with prior work indicating the relative automaticity with which emotion is processed. Our work further demonstrates the utility of applying multivariate decoding analyses to EEG for revealing the dynamics of face perception.
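
The snippet below sketches how errors in EEG decoding can be related to behavioural errors by correlating the off-diagonal cells of the two confusion matrices. The confusion values and the use of a Spearman correlation are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(8)
n_expressions = 5                 # number of expression categories (toy)

# Confusion matrices (rows = true category, columns = predicted) from the EEG
# classifier and from behavioural categorisation; values here are toy numbers.
eeg_conf = rng.random((n_expressions, n_expressions))
beh_conf = eeg_conf + 0.3 * rng.random((n_expressions, n_expressions))

# Correlate only the error cells (off-diagonal entries) of the two matrices.
off_diag = ~np.eye(n_expressions, dtype=bool)
rho, p = spearmanr(eeg_conf[off_diag], beh_conf[off_diag])
print(f"EEG vs. behavioural confusions: rho = {rho:.2f}, p = {p:.3f}")
```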


Subject(s)
Brain/physiology; Emotions; Facial Expression; Facial Recognition/physiology; Adolescent; Adult; Electroencephalography; Female; Humans; Male; Support Vector Machine; Young Adult
12.
PLoS One ; 13(5): e0197160, 2018.
Article in English | MEDLINE | ID: mdl-29847562

ABSTRACT

Facial expressions of emotion are signals of high biological value. Whilst recognition of facial expressions has been much studied in central vision, the ability to perceive these signals in peripheral vision has seen only limited research to date, despite the potential adaptive advantages of such perception. In the present experiment, we investigate facial expression recognition and detection performance for each of the basic emotions (plus neutral) at up to 30 degrees of eccentricity. We demonstrate, as expected, a decrease in recognition and detection performance with increasing eccentricity, with happiness and surprise being the best-recognized expressions in peripheral vision. In detection, however, while happiness and surprise are still well detected, fear is also a well-detected expression. We show that fear is better detected than recognized. Our results demonstrate that task constraints shape the perception of expression in peripheral vision and provide novel evidence that detection and recognition rely on partially separate underlying mechanisms, with the latter more dependent on the higher spatial frequency content of the face stimulus.


Subject(s)
Facial Recognition/physiology; Fear; Happiness; Vision, Ocular/physiology; Adult; Facial Expression; Female; Humans; Male; Pattern Recognition, Visual; Recognition, Psychology
13.
Cortex ; 101: 31-43, 2018 04.
Article in English | MEDLINE | ID: mdl-29414459

ABSTRACT

A network of cortical and sub-cortical regions is known to be important in the processing of facial expression. However, to date no study has investigated whether representations of facial expressions present in this network permit generalization across independent samples of face information (e.g., eye region vs. mouth region). We presented participants with partial face samples of five expression categories in a rapid event-related fMRI experiment. We reveal a network of face-sensitive regions that contain information about facial expression categories regardless of which part of the face is presented. We further reveal that the neural information present in a subset of these regions - dorsal prefrontal cortex (dPFC), superior temporal sulcus (STS), lateral occipital and ventral temporal cortex, and even early visual cortex - enables reliable generalization across independent visual inputs (faces depicting the 'eyes only' vs 'eyes removed'). Furthermore, classification performance was correlated with behavioral performance in STS and dPFC. Our results demonstrate that both higher-level (e.g., STS, dPFC) and lower-level cortical regions contain information useful for facial expression decoding that goes beyond the visual information presented, and implicate a key role for contextual mechanisms such as cortical feedback in facial expression perception under challenging conditions of visual occlusion.


Subject(s)
Brain Mapping; Cognition/physiology; Emotions; Facial Expression; Facial Recognition/physiology; Recognition, Psychology/physiology; Analysis of Variance; Face/physiology; Female; Humans; Image Processing, Computer-Assisted/methods; Linear Models; Magnetic Resonance Imaging/methods; Male; Mental Status and Dementia Tests; Temporal Lobe/physiology; Visual Cortex/physiology
14.
Curr Biol ; 25(20): 2690-5, 2015 Oct 19.
Article in English | MEDLINE | ID: mdl-26441356

ABSTRACT

Neuronal cortical circuitry comprises feedforward, lateral, and feedback projections, each of which terminates in distinct cortical layers [1-3]. In sensory systems, feedforward processing transmits signals from the external world into the cortex, whereas feedback pathways signal the brain's inference of the world [4-11]. However, the integration of feedforward, lateral, and feedback inputs within each cortical area impedes the investigation of feedback, and to date, no technique has isolated the feedback of visual scene information in distinct layers of healthy human cortex. We masked feedforward input to a region of V1 cortex and studied the remaining internal processing. Using high-resolution functional brain imaging (0.8 mm³) and multivoxel pattern information techniques, we demonstrate that during normal visual stimulation scene information peaks in mid-layers. Conversely, we found that contextual feedback information peaks in outer, superficial layers. Further, we found that shifting the position of the visual scene surrounding the mask parametrically modulates feedback in superficial layers of V1. Our results reveal the layered cortical organization of external versus internal visual processing streams during perception in healthy human subjects. We provide empirical support for theoretical feedback models such as predictive coding [10, 12] and coherent infomax [13] and reveal the potential of high-resolution fMRI to access internal processing in sub-millimeter human cortex.


Subject(s)
Feedback, Physiological; Visual Cortex/physiology; Visual Pathways; Humans; Magnetic Resonance Imaging; Photic Stimulation
15.
Cereb Cortex ; 25(4): 1020-31, 2015 Apr.
Article in English | MEDLINE | ID: mdl-24122136

ABSTRACT

Neurons, even in the earliest sensory areas of cortex, are subject to a great deal of contextual influence from both within and across modality connections. In the present work, we investigated whether the earliest regions of somatosensory cortex (S1 and S2) would contain content-specific information about visual object categories. We reasoned that this might be possible due to the associations formed through experience that link different sensory aspects of a given object. Participants were presented with visual images of different object categories in 2 fMRI experiments. Multivariate pattern analysis revealed reliable decoding of familiar visual object category in bilateral S1 (i.e., postcentral gyri) and right S2. We further show that this decoding is observed for familiar but not unfamiliar visual objects in S1. In addition, whole-brain searchlight decoding analyses revealed several areas in the parietal lobe that could mediate the observed context effects between vision and somatosensation. These results demonstrate that even the first cortical stages of somatosensory processing carry information about the category of visually presented familiar objects.


Subject(s)
Magnetic Resonance Imaging/methods; Pattern Recognition, Visual/physiology; Signal Processing, Computer-Assisted; Somatosensory Cortex/physiology; Brain Mapping; Discrimination, Psychological/physiology; Female; Humans; Male; Multivariate Analysis; Neuropsychological Tests
16.
Curr Biol ; 24(11): 1256-62, 2014 Jun 02.
Article in English | MEDLINE | ID: mdl-24856208

ABSTRACT

Human early visual cortex was traditionally thought to process simple visual features such as orientation, contrast, and spatial frequency via feedforward input from the lateral geniculate nucleus (e.g., [1]). However, the role of nonretinal influence on early visual cortex is so far insufficiently investigated despite much evidence that feedback connections greatly outnumber feedforward connections [2-5]. Here, we explored in five fMRI experiments how information originating from audition and imagery affects the brain activity patterns in early visual cortex in the absence of any feedforward visual stimulation. We show that category-specific information from both complex natural sounds and imagery can be read out from early visual cortex activity in blindfolded participants. The coding of nonretinal information in the activity patterns of early visual cortex is common across actual auditory perception and imagery and may be mediated by higher-level multisensory areas. Furthermore, this coding is robust to mild manipulations of attention and working memory but affected by orthogonal, cognitively demanding visuospatial processing. Crucially, the information fed down to early visual cortex is category specific and generalizes to sound exemplars of the same category, providing evidence for abstract information feedback rather than precise pictorial feedback. Our results suggest that early visual cortex receives nonretinal input from other brain areas when it is generated by auditory perception and/or imagery, and this input carries common abstract information. Our findings are compatible with feedback of predictive information to the earliest visual input level (e.g., [6]), in line with predictive coding models [7-10].


Subject(s)
Auditory Perception; Visual Cortex/physiology; Visual Perception; Acoustic Stimulation; Humans; Magnetic Resonance Imaging; Memory, Short-Term; Photic Stimulation
17.
Behav Brain Sci ; 36(3): 221, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23663531

ABSTRACT

Clark offers a powerful description of the brain as a prediction machine, which offers progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, hierarchical prediction offers progress on a concrete descriptive level for testing and constraining conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).


Subject(s)
Attention/physiology; Brain/physiology; Cognition/physiology; Cognitive Science/trends; Perception/physiology; Humans
18.
Eur J Neurosci ; 37(7): 1130-9, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23373719

ABSTRACT

Higher visual areas in the occipitotemporal cortex contain discrete regions for face processing, but it remains unclear if V1 is modulated by top-down influences during face discrimination, and if this is widespread throughout V1 or localized to retinotopic regions processing task-relevant facial features. Employing functional magnetic resonance imaging (fMRI), we mapped the cortical representation of two feature locations that modulate higher visual areas during categorical judgements - the eyes and mouth. Subjects were presented with happy and fearful faces, and we measured the fMRI signal of V1 regions processing the eyes and mouth whilst subjects engaged in gender and expression categorization tasks. In a univariate analysis, we used a region-of-interest-based general linear model approach to reveal changes in activation within these regions as a function of task. We then trained a linear pattern classifier to classify facial expression or gender on the basis of V1 data from 'eye' and 'mouth' regions, and from the remaining non-diagnostic V1 region. Using multivariate techniques, we show that V1 activity discriminates face categories both in local 'diagnostic' and widespread 'non-diagnostic' cortical subregions. This indicates that V1 might receive the processed outcome of complex facial feature analysis from other cortical (i.e. fusiform face area, occipital face area) or subcortical areas (amygdala).


Subject(s)
Facial Expression; Form Perception; Visual Cortex/physiology; Adult; Brain Mapping; Eye; Female; Humans; Magnetic Resonance Imaging; Male; Models, Neurological; Mouth
19.
J Neurosci ; 31(47): 17149-68, 2011 Nov 23.
Article in English | MEDLINE | ID: mdl-22114283

ABSTRACT

Our present understanding of the neural mechanisms and sensorimotor transformations that govern the planning of arm and eye movements predominantly comes from invasive parieto-frontal neural recordings in nonhuman primates. While functional MRI (fMRI) has motivated investigations into many of these same issues in humans, the highly distributed and multiplexed organization of parieto-frontal neurons necessarily constrains the types of intention-related signals that can be detected with traditional fMRI analysis techniques. Here we employed multivoxel pattern analysis (MVPA), a multivariate technique sensitive to spatially distributed fMRI patterns, to provide a more detailed understanding of how hand and eye movement plans are coded in human parieto-frontal cortex. Subjects performed an event-related delayed movement task requiring that a reach or saccade be planned and executed toward one of two spatial target positions. We show with MVPA that, even in the absence of signal amplitude differences, the fMRI spatial activity patterns preceding movement onset are predictive of upcoming reaches and saccades and their intended directions. Within certain parieto-frontal regions we show that these predictive activity patterns reflect a similar spatial target representation for the hand and eye. Within some of the same regions, we further demonstrate that these preparatory spatial signals can be discriminated from nonspatial, effector-specific signals. In contrast to the largely graded effector- and direction-related planning responses found with fMRI subtraction methods, these results reveal considerable consensus with the parieto-frontal network organization suggested by primate neurophysiology and specifically show how predictive spatial and nonspatial movement information coexists within single human parieto-frontal areas.


Subject(s)
Brain Mapping/methods; Frontal Lobe/physiology; Movement/physiology; Parietal Lobe/physiology; Psychomotor Performance/physiology; Saccades/physiology; Female; Humans; Male; Photic Stimulation/methods; Young Adult
20.
Proc Natl Acad Sci U S A ; 107(46): 20099-103, 2010 Nov 16.
Article in English | MEDLINE | ID: mdl-21041652

ABSTRACT

Even within the early sensory areas, the majority of the input to any given cortical neuron comes from other cortical neurons. To extend our knowledge of the contextual information that is transmitted by such lateral and feedback connections, we investigated how visually nonstimulated regions in primary visual cortex (V1) and visual area V2 are influenced by the surrounding context. We used functional magnetic resonance imaging (fMRI) and pattern-classification methods to show that the cortical representation of a nonstimulated quarter-field carries information that can discriminate the surrounding visual context. We show further that the activity patterns in these regions are significantly related to those observed with feed-forward stimulation and that these effects are driven primarily by V1. These results thus demonstrate that visual context strongly influences early visual areas even in the absence of differential feed-forward thalamic stimulation.


Subject(s)
Pattern Recognition, Visual/physiology; Photic Stimulation; Visual Cortex/physiology; Algorithms; Brain Mapping; Discriminant Analysis; Humans