Results 1 - 20 of 63
1.
Nat Commun ; 15(1): 5531, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38982092

ABSTRACT

In everyday life, people need to respond appropriately to many types of emotional stimuli. Here, we investigate whether human occipital-temporal cortex (OTC) shows co-representation of the semantic category and affective content of visual stimuli. We also explore whether OTC transformation of semantic and affective features extracts information of value for guiding behavior. Participants viewed 1620 emotional natural images while functional magnetic resonance imaging data were acquired. Using voxel-wise modeling we show widespread tuning to semantic and affective image features across OTC. The top three principal components underlying OTC voxel-wise responses to image features encoded stimulus animacy, stimulus arousal and interactions of animacy with stimulus valence and arousal. At low to moderate dimensionality, OTC tuning patterns predicted behavioral responses linked to each image better than regressors directly based on image features. This is consistent with OTC representing stimulus semantic category and affective content in a manner suited to guiding behavior.
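The voxel-wise modeling approach summarized above can be sketched as per-voxel ridge regression on stimulus features, followed by PCA on the fitted weights to recover the principal tuning dimensions. This is a minimal illustration on synthetic data only; the feature count, voxel count, and regularization value are placeholders, not the study's:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic stand-ins: 1620 images x 50 semantic/affective features,
# and responses of 200 voxels (all sizes are illustrative).
n_images, n_features, n_voxels = 1620, 50, 200
X = rng.standard_normal((n_images, n_features))
true_w = rng.standard_normal((n_features, n_voxels))
Y = X @ true_w + rng.standard_normal((n_images, n_voxels))

# Voxel-wise ridge regression: one weight vector per voxel.
model = Ridge(alpha=10.0).fit(X, Y)
W = model.coef_.T                     # (n_features, n_voxels)

# PCA across voxels of the feature weights recovers the main
# dimensions of tuning (cf. the animacy/arousal components above).
pca = PCA(n_components=3)
components = pca.fit_transform(W.T)   # (n_voxels, 3)
print(components.shape, pca.explained_variance_ratio_)
```

With real data the feature matrix would come from labeled image content rather than random draws, but the weight-then-PCA structure is the same.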


Subject(s)
Emotions , Magnetic Resonance Imaging , Occipital Lobe , Semantics , Temporal Lobe , Humans , Female , Male , Magnetic Resonance Imaging/methods , Temporal Lobe/physiology , Temporal Lobe/diagnostic imaging , Adult , Occipital Lobe/physiology , Occipital Lobe/diagnostic imaging , Young Adult , Emotions/physiology , Brain Mapping , Photic Stimulation , Affect/physiology , Arousal/physiology
2.
Commun Biol ; 7(1): 284, 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38454134

ABSTRACT

Language comprehension involves integrating low-level sensory inputs into a hierarchy of increasingly high-level features. Prior work studied brain representations of different levels of the language hierarchy, but has not determined whether these brain representations are shared between written and spoken language. To address this issue, we analyze fMRI BOLD data that were recorded while participants read and listened to the same narratives in each modality. Levels of the language hierarchy are operationalized as timescales, where each timescale refers to a set of spectral components of a language stimulus. Voxelwise encoding models are used to determine where different timescales are represented across the cerebral cortex, for each modality separately. These models reveal that between the two modalities timescale representations are organized similarly across the cortical surface. Our results suggest that, after low-level sensory processing, language integration proceeds similarly regardless of stimulus modality.
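Operationalizing timescales as sets of spectral components, as described above, can be illustrated by partitioning a stimulus feature time series into non-overlapping frequency bands. The band edges below are arbitrary stand-ins, not the paper's:

```python
import numpy as np

def spectral_bands(ts, bands):
    """Split a 1-D stimulus time series into spectral components.

    bands: list of (low, high) cutoffs in cycles per sample; each returned
    component keeps only the FFT coefficients falling in that band.
    """
    n = len(ts)
    freqs = np.fft.rfftfreq(n)                    # cycles per sample
    spec = np.fft.rfft(ts)
    return [np.fft.irfft(spec * ((freqs >= lo) & (freqs < hi)), n)
            for lo, hi in bands]

rng = np.random.default_rng(0)
ts = rng.standard_normal(512)                     # e.g. one embedding dimension over time

# Three bands that partition the spectrum (0.51 covers the Nyquist bin).
slow, mid, fast = spectral_bands(ts, [(0.0, 0.05), (0.05, 0.2), (0.2, 0.51)])
print(np.allclose(slow + mid + fast, ts))         # bands sum back to the original
```

Each band-limited component would then serve as a separate "timescale" regressor in a voxelwise encoding model.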


Subject(s)
Language , Reading , Humans , Cerebral Cortex/diagnostic imaging , Brain , Brain Mapping/methods
4.
bioRxiv ; 2023 Dec 11.
Article in English | MEDLINE | ID: mdl-37577530

ABSTRACT

Language comprehension involves integrating low-level sensory inputs into a hierarchy of increasingly high-level features. Prior work studied brain representations of different levels of the language hierarchy, but has not determined whether these brain representations are shared between written and spoken language. To address this issue, we analyzed fMRI BOLD data recorded while participants read and listened to the same narratives in each modality. Levels of the language hierarchy were operationalized as timescales, where each timescale refers to a set of spectral components of a language stimulus. Voxelwise encoding models were used to determine where different timescales are represented across the cerebral cortex, for each modality separately. These models reveal that between the two modalities timescale representations are organized similarly across the cortical surface. Our results suggest that, after low-level sensory processing, language integration proceeds similarly regardless of stimulus modality.

5.
bioRxiv ; 2023 Jul 19.
Article in English | MEDLINE | ID: mdl-37503232

ABSTRACT

Functional connectivity (FC) is the most popular method for recovering functional networks of brain areas with fMRI. However, because FC is defined as temporal correlations in brain activity, FC networks are confounded by noise and lack a precise functional role. To overcome these limitations, we developed model connectivity (MC). MC is defined as similarities in encoding model weights, which quantify reliable functional activity in terms of interpretable stimulus- or task-related features. To compare FC and MC, both methods were applied to a naturalistic story listening dataset. FC recovered spatially broad networks that are confounded by noise, and that lack a clear role during natural language comprehension. By contrast, MC recovered spatially localized networks that are robust to noise, and that represent distinct categories of semantic concepts. Thus, MC is a powerful data-driven approach for recovering and interpreting the functional networks that support complex cognitive processes.
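The contrast between FC and MC comes down to what gets correlated: raw voxel time courses versus fitted encoding-model weights. A minimal sketch on synthetic data (sizes and the ridge penalty are illustrative):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_time, n_feat, n_vox = 300, 20, 6
X = rng.standard_normal((n_time, n_feat))               # stimulus features
W = rng.standard_normal((n_feat, n_vox))                # true tuning
Y = X @ W + 2.0 * rng.standard_normal((n_time, n_vox))  # noisy responses

# Functional connectivity: correlations of raw voxel time courses
# (confounded by noise, as the abstract notes).
FC = np.corrcoef(Y.T)

# Model connectivity: correlations of encoding-model weights, which
# discard activity the feature space cannot explain.
W_hat = Ridge(alpha=1.0).fit(X, Y).coef_                # (n_vox, n_feat)
MC = np.corrcoef(W_hat)

print(FC.shape, MC.shape)                               # both (6, 6)
```

Both quantities are voxel-by-voxel similarity matrices; the difference is that MC inherits the interpretability of the feature space used to fit the encoding models.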

6.
Nat Commun ; 14(1): 4309, 2023 07 18.
Article in English | MEDLINE | ID: mdl-37463907

ABSTRACT

Speech processing requires extracting meaning from acoustic patterns using a set of intermediate representations based on a dynamic segmentation of the speech stream. Using whole brain mapping obtained in fMRI, we investigate the locus of cortical phonemic processing not only for single phonemes but also for short combinations made of diphones and triphones. We find that phonemic processing areas are much larger than previously described: they include not only the classical areas in the dorsal superior temporal gyrus but also a larger region in the lateral temporal cortex where diphone features are best represented. These identified phonemic regions overlap with the lexical retrieval region, but we show that short word retrieval is not sufficient to explain the observed responses to diphones. Behavioral studies have shown that phonemic processing and lexical retrieval are intertwined. Here, we also have identified candidate regions within the speech cortical network where this joint processing occurs.
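Phoneme, diphone, and triphone features of the kind modeled above are simply n-grams over the phoneme stream. A small sketch (the ARPAbet-style transcription is a made-up example):

```python
from collections import Counter
from itertools import islice

def ngram_counts(phonemes, n):
    """Count phoneme n-grams: n=1 phonemes, n=2 diphones, n=3 triphones."""
    grams = zip(*(islice(phonemes, i, None) for i in range(n)))
    return Counter(" ".join(g) for g in grams)

# Hypothetical transcription of the word "speech".
seq = ["S", "P", "IY", "CH"]
print(ngram_counts(seq, 2))   # diphones: S P, P IY, IY CH
```

In an encoding model, counts like these (accumulated per fMRI acquisition window) would form the phonemic feature regressors.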


Subject(s)
Speech Perception , Speech , Humans , Speech/physiology , Temporal Lobe/diagnostic imaging , Temporal Lobe/physiology , Brain/physiology , Speech Perception/physiology , Brain Mapping , Magnetic Resonance Imaging , Cerebral Cortex/diagnostic imaging
7.
J Vis Exp ; (197)2023 07 14.
Article in English | MEDLINE | ID: mdl-37522736

ABSTRACT

Adaptive deep brain stimulation (aDBS) shows promise for improving treatment for neurological disorders such as Parkinson's disease (PD). aDBS uses symptom-related biomarkers to adjust stimulation parameters in real-time to target symptoms more precisely. To enable these dynamic adjustments, parameters for an aDBS algorithm must be determined for each individual patient. This requires time-consuming manual tuning by clinical researchers, making it difficult to find an optimal configuration for a single patient or to scale to many patients. Furthermore, the long-term effectiveness of aDBS algorithms configured in-clinic while the patient is at home remains an open question. To implement this therapy at large scale, a methodology to automatically configure aDBS algorithm parameters while remotely monitoring therapy outcomes is needed. In this paper, we share a design for an at-home data collection platform to help the field address both issues. The platform is composed of an integrated hardware and software ecosystem that is open-source and allows for at-home collection of neural, inertial, and multi-camera video data. To ensure privacy for patient-identifiable data, the platform encrypts and transfers data through a virtual private network. The methods include time-aligning data streams and extracting pose estimates from video recordings. To demonstrate the use of this system, we deployed this platform to the home of an individual with PD and collected data during self-guided clinical tasks and periods of free behavior over the course of 1.5 years. Data were recorded at sub-therapeutic, therapeutic, and supra-therapeutic stimulation amplitudes to evaluate motor symptom severity under different therapeutic conditions. These time-aligned data show the platform is capable of synchronized at-home multi-modal data collection for therapeutic evaluation. This system architecture may be used to support automated aDBS research, to collect new datasets and to study the long-term effects of DBS therapy outside the clinic for those suffering from neurological disorders.


Subject(s)
Deep Brain Stimulation , Parkinson Disease , Humans , Deep Brain Stimulation/methods , Ecosystem , Parkinson Disease/therapy , Data Collection , Video Recording
8.
J Neurosci ; 43(17): 3144-3158, 2023 04 26.
Article in English | MEDLINE | ID: mdl-36973013

ABSTRACT

The meaning of words in natural language depends crucially on context. However, most neuroimaging studies of word meaning use isolated words and isolated sentences with little context. Because the brain may process natural language differently from how it processes simplified stimuli, there is a pressing need to determine whether prior results on word meaning generalize to natural language. fMRI was used to record human brain activity while four subjects (two female) read words in four conditions that vary in context: narratives, isolated sentences, blocks of semantically similar words, and isolated words. We then compared the signal-to-noise ratio (SNR) of evoked brain responses, and we used a voxelwise encoding modeling approach to compare the representation of semantic information across the four conditions. We find four consistent effects of varying context. First, stimuli with more context evoke brain responses with higher SNR across bilateral visual, temporal, parietal, and prefrontal cortices compared with stimuli with little context. Second, increasing context increases the representation of semantic information across bilateral temporal, parietal, and prefrontal cortices at the group level. In individual subjects, only natural language stimuli consistently evoke widespread representation of semantic information. Third, context affects voxel semantic tuning. Finally, models estimated using stimuli with little context do not generalize well to natural language. These results show that context has large effects on the quality of neuroimaging data and on the representation of meaning in the brain. Thus, neuroimaging studies that use stimuli with little context may not generalize well to the natural regime.

SIGNIFICANCE STATEMENT Context is an important part of understanding the meaning of natural language, but most neuroimaging studies of meaning use isolated words and isolated sentences with little context. Here, we examined whether the results of neuroimaging studies that use out-of-context stimuli generalize to natural language. We find that increasing context improves the quality of neuroimaging data and changes where and how semantic information is represented in the brain. These results suggest that findings from studies using out-of-context stimuli may not generalize to natural language used in daily life.


Subject(s)
Comprehension , Semantics , Humans , Female , Comprehension/physiology , Brain/physiology , Language , Brain Mapping/methods , Magnetic Resonance Imaging/methods
9.
Neuroimage ; 264: 119728, 2022 12 01.
Article in English | MEDLINE | ID: mdl-36334814

ABSTRACT

Encoding models provide a powerful framework to identify the information represented in brain recordings. In this framework, a stimulus representation is expressed within a feature space and is used in a regularized linear regression to predict brain activity. To account for a potential complementarity of different feature spaces, a joint model is fit on multiple feature spaces simultaneously. To adapt regularization strength to each feature space, ridge regression is extended to banded ridge regression, which optimizes a different regularization hyperparameter per feature space. The present paper proposes a method to decompose over feature spaces the variance explained by a banded ridge regression model. It also describes how banded ridge regression performs a feature-space selection, effectively ignoring non-predictive and redundant feature spaces. This feature-space selection leads to better prediction accuracy and to better interpretability. Banded ridge regression is then mathematically linked to a number of other regression methods with similar feature-space selection mechanisms. Finally, several methods are proposed to address the computational challenge of fitting banded ridge regressions on large numbers of voxels and feature spaces. All implementations are released in an open-source Python package called Himalaya.
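The core of banded ridge regression is a block-diagonal penalty with one regularization strength per feature space. The paper's Himalaya package provides optimized solvers; below is only a closed-form sketch on synthetic data to show the mechanics, with a deliberately non-predictive second feature space:

```python
import numpy as np

def banded_ridge(Xs, y, alphas):
    """Closed-form banded ridge: one regularization strength per feature space.

    Xs: list of (n_samples, n_features_i) arrays; alphas: one per space.
    Solves (X'X + D)^-1 X'y with a block-diagonal penalty D.
    """
    X = np.hstack(Xs)
    penalty = np.concatenate([np.full(Xi.shape[1], a) for Xi, a in zip(Xs, alphas)])
    w = np.linalg.solve(X.T @ X + np.diag(penalty), X.T @ y)
    # Split the weights back per feature space (e.g. for variance decomposition).
    splits = np.cumsum([Xi.shape[1] for Xi in Xs])[:-1]
    return np.split(w, splits)

rng = np.random.default_rng(0)
X1 = rng.standard_normal((100, 5))
X2 = rng.standard_normal((100, 8))                       # non-predictive space
y = X1 @ rng.standard_normal(5) + 0.1 * rng.standard_normal(100)

# A large alpha on the second band effectively ignores it,
# mimicking the feature-space selection described above.
w1, w2 = banded_ridge([X1, X2], y, alphas=[1.0, 1e4])
print(np.abs(w2).max() < np.abs(w1).max())
```

In practice the per-space alphas are hyperparameters optimized on held-out data rather than set by hand as here.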


Subject(s)
Regression Analysis , Humans , Linear Models
10.
J Neurosci ; 2022 Jul 20.
Article in English | MEDLINE | ID: mdl-35863889

ABSTRACT

Object and action perception in cluttered dynamic natural scenes relies on efficient allocation of limited brain resources to prioritize the attended targets over distractors. It has been suggested that during visual search for objects, distributed semantic representation of hundreds of object categories is warped to expand the representation of targets. Yet, little is known about whether and where in the brain visual search for action categories modulates semantic representations. To address this fundamental question, we studied brain activity recorded from five subjects (1 female) via functional magnetic resonance imaging while they viewed natural movies and searched for either communication or locomotion actions. We find that attention directed to action categories elicits tuning shifts that warp semantic representations broadly across neocortex, and that these shifts interact with intrinsic selectivity of cortical voxels for target actions. These results suggest that attention serves to facilitate task performance during social interactions by dynamically shifting semantic selectivity towards target actions, and that tuning shifts are a general feature of conceptual representations in the brain.

SIGNIFICANCE STATEMENT The ability to swiftly perceive the actions and intentions of others is a crucial skill for humans, which relies on efficient allocation of limited brain resources to prioritize the attended targets over distractors. However, little is known about the nature of high-level semantic representations during natural visual search for action categories. Here we provide the first evidence showing that attention significantly warps semantic representations by inducing tuning shifts in single cortical voxels, broadly spread across occipitotemporal, parietal, prefrontal, and cingulate cortices. This dynamic attentional mechanism can facilitate action perception by efficiently allocating neural resources to accentuate the representation of task-relevant action categories.

11.
Nat Neurosci ; 24(11): 1628-1636, 2021 11.
Article in English | MEDLINE | ID: mdl-34711960

ABSTRACT

Semantic information in the human brain is organized into multiple networks, but the fine-grain relationships between them are poorly understood. In this study, we compared semantic maps obtained from two functional magnetic resonance imaging experiments in the same participants: one that used silent movies as stimuli and another that used narrative stories. Movies evoked activity from a network of modality-specific, semantically selective areas in visual cortex. Stories evoked activity from another network of semantically selective areas immediately anterior to visual cortex. Remarkably, the pattern of semantic selectivity in these two distinct networks corresponded along the boundary of visual cortex: for visual categories represented posterior to the boundary, the same categories were represented linguistically on the anterior side. These results suggest that these two networks are smoothly joined to form one contiguous map.


Subject(s)
Linguistics/methods , Pattern Recognition, Visual/physiology , Semantics , Visual Cortex/diagnostic imaging , Visual Cortex/physiology , Adult , Female , Humans , Magnetic Resonance Imaging/methods , Male , Photic Stimulation/methods , Young Adult
12.
Cortex ; 143: 127-147, 2021 10.
Article in English | MEDLINE | ID: mdl-34411847

ABSTRACT

Humans have an impressive ability to rapidly process global information in natural scenes to infer their category. Yet, it remains unclear whether and how scene categories observed dynamically in the natural world are represented in cerebral cortex beyond a few canonical scene-selective areas. To address this question, here we examined the representation of dynamic visual scenes by recording whole-brain blood oxygenation level-dependent (BOLD) responses while subjects viewed natural movies. We fit voxelwise encoding models to estimate tuning for scene categories that reflect statistical ensembles of objects and actions in the natural world. We find that this scene-category model explains a significant portion of the response variance broadly across cerebral cortex. Cluster analysis of scene-category tuning profiles across cortex reveals nine spatially segregated networks of brain regions consistently across subjects. These networks show heterogeneous tuning for a diverse set of dynamic scene categories related to navigation, human activity, social interaction, civilization, natural environment, non-human animals, motion-energy, and texture, suggesting that the organization of scene category representation is quite complex.


Subject(s)
Cerebral Cortex , Magnetic Resonance Imaging , Brain , Brain Mapping , Cluster Analysis , Humans , Pattern Recognition, Visual , Photic Stimulation , Visual Perception
13.
Neuroethics ; 14(3): 365-386, 2021.
Article in English | MEDLINE | ID: mdl-33942016

ABSTRACT

Advancements in novel neurotechnologies, such as brain computer interfaces (BCI) and neuromodulatory devices such as deep brain stimulators (DBS), will have profound implications for society and human rights. While these technologies are improving the diagnosis and treatment of mental and neurological diseases, they can also alter individual agency and estrange those using neurotechnologies from their sense of self, challenging basic notions of what it means to be human. As an international coalition of interdisciplinary scholars and practitioners, we examine these challenges and make recommendations to mitigate negative consequences that could arise from the unregulated development or application of novel neurotechnologies. We explore potential ethical challenges in four key areas: identity and agency, privacy, bias, and enhancement. To address them, we propose (1) democratic and inclusive summits to establish globally-coordinated ethical and societal guidelines for neurotechnology development and application, (2) new measures, including "Neurorights," for data privacy, security, and consent to empower neurotechnology users' control over their data, (3) new methods of identifying and preventing bias, and (4) the adoption of public guidelines for safe and equitable distribution of neurotechnological devices.

14.
Neuron ; 109(9): 1433-1448, 2021 05 05.
Article in English | MEDLINE | ID: mdl-33689687

ABSTRACT

Over the past few decades, neuroscience experiments have become increasingly complex and naturalistic. Experimental design has in turn become more challenging, as experiments must conform to an ever-increasing diversity of design constraints. In this article, we demonstrate how this design process can be greatly assisted using an optimization tool known as mixed-integer linear programming (MILP). MILP provides a rich framework for incorporating many types of real-world design constraints into a neuroscience experiment. We introduce the mathematical foundations of MILP, compare MILP to other experimental design techniques, and provide four case studies of how MILP can be used to solve complex experimental design challenges.
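As a toy instance of MILP-based experimental design, the sketch below selects a fixed-size stimulus set that maximizes a relevance score under a category-balance constraint, using SciPy's `milp` (SciPy >= 1.9). All scores, categories, and constraints here are invented for illustration:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical design problem: pick exactly 3 of 6 candidate stimuli,
# maximizing a relevance score, with at most 1 stimulus from category A
# (candidates 0-1) to keep the design balanced.
score = np.array([5.0, 4.0, 3.0, 2.0, 4.5, 1.0])
category_A = np.array([1, 1, 0, 0, 0, 0])

constraints = [
    LinearConstraint(np.ones(6), lb=3, ub=3),   # exactly 3 stimuli
    LinearConstraint(category_A, lb=0, ub=1),   # at most 1 from category A
]
res = milp(c=-score,                            # milp minimizes, so negate
           constraints=constraints,
           integrality=np.ones(6),              # all variables integer
           bounds=Bounds(0, 1))                 # binary selection
chosen = np.flatnonzero(res.x > 0.5)
print(chosen)    # stimulus 0 plus the two best non-A stimuli: [0 2 4]
```

Real designs add many more constraint rows (timing, counterbalancing, repetition limits), but they enter the same `LinearConstraint` framework.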


Subject(s)
Models, Neurological , Models, Theoretical , Neurosciences/methods , Programming, Linear , Research Design , Animals , Humans
15.
Elife ; 9, 2020 09 28.
Article in English | MEDLINE | ID: mdl-32985972

ABSTRACT

Experience influences behavior, but little is known about how experience is encoded in the brain, and how changes in neural activity are implemented at a network level to improve performance. Here we investigate how differences in experience impact brain circuitry and behavior in larval zebrafish prey capture. We find that experience of live prey compared to inert food increases capture success by boosting capture initiation. In response to live prey, animals with and without prior experience of live prey show activity in visual areas (pretectum and optic tectum) and motor areas (cerebellum and hindbrain), with similar visual area retinotopic maps of prey position. However, prey-experienced animals more readily initiate capture in response to visual area activity and have greater visually-evoked activity in two forebrain areas: the telencephalon and habenula. Consequently, disruption of habenular neurons reduces capture performance in prey-experienced fish. Together, our results suggest that experience of prey strengthens prey-associated visual drive to the forebrain, and that this lowers the threshold for prey-associated visual activity to trigger activity in motor areas, thereby improving capture performance.


Subject(s)
Learning/physiology , Predatory Behavior/physiology , Prosencephalon/physiology , Visual Pathways/physiology , Zebrafish/physiology , Animals
16.
Front Neurosci ; 14: 565976, 2020.
Article in English | MEDLINE | ID: mdl-34045937

ABSTRACT

Complex natural tasks likely recruit many different functional brain networks, but it is difficult to predict how such tasks will be represented across cortical areas and networks. Previous electrophysiology studies suggest that task variables are represented in a low-dimensional subspace within the activity space of neural populations. Here we develop a voxel-based state space modeling method for recovering task-related state spaces from human fMRI data. We apply this method to data acquired in a controlled visual attention task and a video game task. We find that each task induces distinct brain states that can be embedded in a low-dimensional state space that reflects task parameters, and that attention increases state separation in the task-related subspace. Our results demonstrate that the state space framework offers a powerful approach for modeling human brain activity elicited by complex natural tasks.
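The voxel-based state space idea can be illustrated by embedding condition-labeled voxel patterns in a low-dimensional subspace and measuring state separation there. Everything below is synthetic; the condition names and sizes are placeholders:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_vox = 500

# Synthetic voxel patterns for two task states ("attend" vs "ignore"),
# each a noisy version of its own mean pattern.
mu_a, mu_b = rng.standard_normal(n_vox), rng.standard_normal(n_vox)
attend = mu_a + 0.5 * rng.standard_normal((100, n_vox))
ignore = mu_b + 0.5 * rng.standard_normal((100, n_vox))

# Embed all timepoints in a low-dimensional state space.
Z = PCA(n_components=2).fit_transform(np.vstack([attend, ignore]))
za, zb = Z[:100], Z[100:]

# State separation: distance between state centroids in the subspace.
sep = np.linalg.norm(za.mean(0) - zb.mean(0))
print(sep > 1.0)
```

The finding described above corresponds to this separation growing when attention is engaged, within a subspace whose axes track task parameters.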

17.
J Neurosci ; 39(39): 7722-7736, 2019 09 25.
Article in English | MEDLINE | ID: mdl-31427396

ABSTRACT

An integral part of human language is the capacity to extract meaning from spoken and written words, but the precise relationship between brain representations of information perceived by listening versus reading is unclear. Prior neuroimaging studies have shown that semantic information in spoken language is represented in multiple regions in the human cerebral cortex, while amodal semantic information appears to be represented in a few broad brain regions. However, previous studies were too insensitive to determine whether semantic representations were shared at a fine level of detail rather than merely at a coarse scale. We used fMRI to record brain activity in two separate experiments while participants listened to or read several hours of the same narrative stories, and then created voxelwise encoding models to characterize semantic selectivity in each voxel and in each individual participant. We find that semantic tuning during listening and reading are highly correlated in most semantically selective regions of cortex, and models estimated using one modality accurately predict voxel responses in the other modality. These results suggest that the representation of language semantics is independent of the sensory modality through which the semantic information is received.

SIGNIFICANCE STATEMENT Humans can comprehend the meaning of words from both spoken and written language. It is therefore important to understand the relationship between the brain representations of spoken or written text. Here, we show that although the representation of semantic information in the human brain is quite complex, the semantic representations evoked by listening versus reading are almost identical. These results suggest that the representation of language semantics is independent of the sensory modality through which the semantic information is received.
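Cross-modality model transfer of the kind tested above amounts to estimating an encoding model in one modality and evaluating its predictions in the other. A sketch on synthetic data in which the tuning is shared across modalities by construction (all sizes and the noise level are illustrative):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_time, n_feat = 400, 30
w_true = rng.standard_normal(n_feat)            # shared semantic tuning

# Synthetic "listening" and "reading" runs driven by the same tuning.
X_listen = rng.standard_normal((n_time, n_feat))
X_read = rng.standard_normal((n_time, n_feat))
y_listen = X_listen @ w_true + rng.standard_normal(n_time)
y_read = X_read @ w_true + rng.standard_normal(n_time)

# Fit on one modality, predict the other (cross-modality transfer).
model = Ridge(alpha=1.0).fit(X_listen, y_listen)
r = np.corrcoef(model.predict(X_read), y_read)[0, 1]
print(r > 0.9)
```

High transfer correlation here follows from the shared weights; in the study, the analogous result is what supports modality-independent semantic representations.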


Subject(s)
Auditory Perception/physiology , Cerebral Cortex/physiology , Comprehension/physiology , Models, Neurological , Visual Perception/physiology , Acoustic Stimulation , Adult , Female , Humans , Magnetic Resonance Imaging , Male , Photic Stimulation , Reading , Semantics
18.
Neuroimage ; 197: 482-492, 2019 08 15.
Article in English | MEDLINE | ID: mdl-31075394

ABSTRACT

Predictive models for neural or fMRI data are often fit using regression methods that employ priors on the model parameters. One widely used method is ridge regression, which employs a spherical multivariate normal prior that assumes equal and independent variance for all parameters. However, a spherical prior is not always optimal or appropriate. There are many cases where expert knowledge or hypotheses about the structure of the model parameters could be used to construct a better prior. In these cases, non-spherical multivariate normal priors can be employed using a generalized form of ridge known as Tikhonov regression. Yet Tikhonov regression is only rarely used in neuroscience. In this paper we discuss the theoretical basis for Tikhonov regression, demonstrate a computationally efficient method for its application, and show several examples of how Tikhonov regression can improve predictive models for fMRI data. We also show that many earlier studies have implicitly used Tikhonov regression by linearly transforming the regressors before performing ridge regression.


Subject(s)
Brain/physiology , Computer Simulation , Magnetic Resonance Imaging , Models, Neurological , Neurosciences/methods , Algorithms , Humans
19.
Neuron ; 101(1): 178-192.e7, 2019 01 02.
Article in English | MEDLINE | ID: mdl-30497771

ABSTRACT

It has been argued that scene-selective areas in the human brain represent both the 3D structure of the local visual environment and low-level 2D features (such as spatial frequency) that provide cues for 3D structure. To evaluate the degree to which each of these hypotheses explains variance in scene-selective areas, we develop an encoding model of 3D scene structure and test it against a model of low-level 2D features. We fit the models to fMRI data recorded while subjects viewed visual scenes. The fit models reveal that scene-selective areas represent the distance to and orientation of large surfaces, at least partly independent of low-level features. Principal component analysis of the model weights reveals that the most important dimensions of 3D structure are distance and openness. Finally, reconstructions of the stimuli based on the model weights demonstrate that our model captures unprecedented detail about the local visual environment from scene-selective areas.


Subject(s)
Brain Mapping/methods , Brain/diagnostic imaging , Brain/physiology , Image Processing, Computer-Assisted/methods , Pattern Recognition, Visual/physiology , Photic Stimulation/methods , Adult , Female , Humans , Magnetic Resonance Imaging/methods , Male