Results 1-20 of 146
1.
Article in English | MEDLINE | ID: mdl-39223692

ABSTRACT

Storytelling, an ancient way for humans to share individual experiences with others, has been found to induce neural alignment among listeners. In exploring the dynamic fluctuations in listener-listener (LL) coupling throughout stories, we uncover a significant correlation between LL coupling and lagged speaker-listener (lag-SL) coupling over time. Using the analogy of neural pattern (dis)similarity as distances between participants, we term this phenomenon the "herding effect." Like a shepherd guiding a group of sheep, the more closely listeners mirror the speaker's preceding brain activity patterns (higher lag-SL similarity), the more tightly they cluster together (higher LL similarity). This herding effect is particularly pronounced in brain regions where neural alignment among listeners tracks with moment-by-moment behavioral ratings of narrative content engagement. By integrating LL and SL neural coupling, this study reveals a dynamic, multi-brain functional network between the speaker and the audience, with the unfolding narrative content playing a mediating role in network configuration.
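A minimal sketch of how the windowed similarity measures described in this abstract could be computed, assuming hypothetical arrays speaker_bold (time x voxels) and listener_bold (listeners x time x voxels) for a single region; the window width and lag are illustrative, and this is not the authors' pipeline.

# Hedged sketch (not the published code): quantify a "herding effect" by
# correlating windowed listener-listener similarity with lagged
# speaker-listener similarity over time.
import numpy as np
from itertools import combinations
from scipy.stats import pearsonr

def window_pattern(data, t, width):
    """Average spatial pattern within a time window starting at t."""
    return data[t:t + width].mean(axis=0)

def herding_timecourses(speaker, listeners, width=10, lag=3):
    n_l, n_t, _ = listeners.shape
    ll_sim, lag_sl_sim = [], []
    for t in range(lag, n_t - width):
        # Listener-listener (LL) similarity: mean pairwise pattern correlation.
        pats = [window_pattern(listeners[i], t, width) for i in range(n_l)]
        ll = np.mean([pearsonr(pats[i], pats[j])[0]
                      for i, j in combinations(range(n_l), 2)])
        # Lagged speaker-listener (lag-SL) similarity: listeners vs. the
        # speaker's pattern `lag` samples earlier.
        sp = window_pattern(speaker, t - lag, width)
        sl = np.mean([pearsonr(sp, p)[0] for p in pats])
        ll_sim.append(ll)
        lag_sl_sim.append(sl)
    # The herding effect is the correlation of the two time courses.
    return pearsonr(np.array(lag_sl_sim), np.array(ll_sim))

# r, p = herding_timecourses(speaker_bold, listener_bold)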

2.
Neuron; 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39096896

ABSTRACT

Effective communication hinges on a mutual understanding of word meaning in different contexts. We recorded brain activity using electrocorticography during spontaneous, face-to-face conversations in five pairs of epilepsy patients. We developed a model-based coupling framework that aligns brain activity in both speaker and listener to a shared embedding space from a large language model (LLM). The context-sensitive LLM embeddings allow us to track the exchange of linguistic information, word by word, from one brain to another in natural conversations. Linguistic content emerges in the speaker's brain before word articulation and rapidly re-emerges in the listener's brain after word articulation. The contextual embeddings better capture word-by-word neural alignment between speaker and listener than syntactic and articulatory models. Our findings indicate that the contextual embeddings learned by LLMs can serve as an explicit numerical model of the shared, context-rich meaning space humans use to communicate their thoughts to one another.
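A hedged sketch of the general idea of a model-based coupling framework: regress word-aligned neural activity onto LLM contextual embeddings at different lags around word onset, then compare the speaker's and listener's lag profiles. The names emb, speaker_resp, and listener_resp are hypothetical, and the regression choices are illustrative rather than the published ones.

# Hedged sketch: cross-validated encoding of word-level neural activity from
# shared LLM embeddings. `emb` is (n_words, dim); `resp` is (n_words, n_electrodes).
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

def encoding_r(emb, resp, n_splits=5):
    """Mean cross-validated correlation between predicted and actual activity."""
    r_per_elec = np.zeros(resp.shape[1])
    for train, test in KFold(n_splits, shuffle=False).split(emb):
        model = RidgeCV(alphas=np.logspace(-1, 4, 10)).fit(emb[train], resp[train])
        pred = model.predict(emb[test])
        for e in range(resp.shape[1]):
            r_per_elec[e] += np.corrcoef(pred[:, e], resp[test, e])[0, 1] / n_splits
    return r_per_elec

# Encoding performance as a function of lag would trace when linguistic content
# emerges in the speaker (pre-articulation) and listener (post-articulation), e.g.:
# speaker_curve = {lag: encoding_r(emb, speaker_resp[lag]).mean() for lag in lags}
# listener_curve = {lag: encoding_r(emb, listener_resp[lag]).mean() for lag in lags}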

3.
Sci Rep; 14(1): 16782, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39039131

ABSTRACT

It has been proposed that, when processing a stream of events, humans divide their experiences in terms of inferred latent causes (LCs) to support context-dependent learning. However, when shared structure is present across contexts, it is still unclear how the "splitting" of LCs and learning of shared structure can be simultaneously achieved. Here, we present the Latent Cause Network (LCNet), a neural network model of LC inference. Through learning, it naturally stores structure that is shared across tasks in the network weights. Additionally, it represents context-specific structure using a context module, controlled by a Bayesian nonparametric inference algorithm, which assigns a unique context vector for each inferred LC. Across three simulations, we found that LCNet could (1) extract shared structure across LCs in a function learning task while avoiding catastrophic interference, (2) capture human data on curriculum effects in schema learning, and (3) infer the underlying event structure when processing naturalistic videos of daily events. Overall, these results demonstrate a computationally feasible approach to reconciling shared structure and context-specific structure in a model of LCs that is scalable from laboratory experiment settings to naturalistic settings.
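A greatly simplified sketch of the Bayesian nonparametric context-assignment idea mentioned in the abstract, using a Chinese-restaurant-process prior and a Gaussian fit-to-prototype term. It is not LCNet itself; the parameters and likelihood are illustrative assumptions.

# Simplified latent-cause (LC) assignment: each observation joins an existing
# LC or spawns a new one, trading off LC size (CRP prior) against fit.
import numpy as np

def assign_latent_causes(observations, alpha=1.0, sigma=1.0):
    causes = []            # list of member-observation lists, one per LC
    assignments = []
    for x in observations:
        log_post = []
        for members in causes:
            prior = len(members)                      # CRP: rich get richer
            mu = np.mean(members, axis=0)             # LC prototype
            loglik = -np.sum((x - mu) ** 2) / (2 * sigma ** 2)
            log_post.append(np.log(prior) + loglik)
        # Option of a new LC, with a broad zero-mean prior over observations.
        log_post.append(np.log(alpha) - np.sum(x ** 2) / (2 * (10 * sigma) ** 2))
        k = int(np.argmax(log_post))                  # MAP assignment
        if k == len(causes):
            causes.append([x])
        else:
            causes[k].append(x)
        assignments.append(k)
    return assignments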


Subject(s)
Algorithms, Bayes Theorem, Neural Networks (Computer), Humans, Learning
4.
Eur J Neurosci; 60(4): 4624-4638, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39034499

ABSTRACT

Recent studies have shown that, during the typical resting state, echo planar imaging (EPI) time series obtained from the eye orbit area correlate with brain regions associated with oculomotor control and with lower-level visual cortex. Here, we asked whether congenitally blind (CB) individuals show similar patterns, which would suggest a hard-wired constraint on connectivity. We found that orbital EPI signals in CB individuals do correlate with activity in the motor cortex, but less so with activity in the visual cortex. However, the temporal patterns of this eye movement-related signal differed strongly between CB participants and sighted controls. Furthermore, a few CB participants showed uncoordinated orbital EPI signals between the two eyes, with each eye's signal correlated with activity in a different brain network. Our findings suggest retained circuitry linking the motor cortex and eye movements in the blind, but also moderate reorganization due to the absence of visual input and to the inability of CB individuals to control their eye movements or sense their eye positions.


Subject(s)
Blindness, Eye Movements, Humans, Blindness/physiopathology, Blindness/congenital, Eye Movements/physiology, Adult, Female, Male, Middle Aged, Motor Cortex/physiopathology, Motor Cortex/diagnostic imaging, Visual Cortex/physiopathology, Visual Cortex/diagnostic imaging, Nerve Net/physiopathology, Nerve Net/diagnostic imaging, Echo-Planar Imaging/methods, Young Adult, Brain Mapping/methods
5.
bioRxiv; 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-39005394

ABSTRACT

Recent research has used large language models (LLMs) to study the neural basis of naturalistic language processing in the human brain. LLMs have rapidly grown in complexity, leading to improved language processing capabilities. However, neuroscience research has not kept pace with the rapid progress in LLM development. Here, we used several families of transformer-based LLMs to investigate the relationship between model size and the models' ability to capture linguistic information in the human brain. Crucially, a subset of the LLMs was trained on a fixed training set, enabling us to dissociate model size from architecture and training-set size. We used electrocorticography (ECoG) to measure neural activity in epilepsy patients while they listened to a 30-minute naturalistic audio story. We fit electrode-wise encoding models using contextual embeddings extracted from each hidden layer of the LLMs to predict word-level neural signals. In line with prior work, we found that larger LLMs better capture the structure of natural language and better predict neural activity. We also found a log-linear relationship whereby, as model size increases, encoding performance peaks at relatively earlier layers. In addition, we observed variation in the best-performing layer across brain regions, corresponding to an organized language-processing hierarchy.
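A hedged sketch of a layer-wise, electrode-wise encoding analysis of the kind described here, assuming hypothetical inputs layer_embs (layer index mapped to a word-by-dimension embedding matrix) and ecog (word-by-electrode responses); the regularization and cross-validation choices are illustrative, not the authors' settings.

# Hedged sketch: per-layer encoding performance and the best layer per electrode.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

def layer_encoding_performance(layer_embs, ecog, alpha=100.0):
    perf = {}
    for layer, X in layer_embs.items():
        pred = cross_val_predict(Ridge(alpha=alpha), X, ecog, cv=5)
        # Correlation between predicted and actual activity per electrode.
        perf[layer] = np.array([np.corrcoef(pred[:, e], ecog[:, e])[0, 1]
                                for e in range(ecog.shape[1])])
    return perf

# perf = layer_encoding_performance(layer_embs, ecog)
# best_layer = {e: max(perf, key=lambda l: perf[l][e]) for e in range(ecog.shape[1])}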

6.
Nat Commun; 15(1): 5523, 2024 Jun 29.
Article in English | MEDLINE | ID: mdl-38951520

ABSTRACT

When processing language, the brain is thought to deploy specialized computations to construct meaning from complex linguistic structures. Recently, artificial neural networks based on the Transformer architecture have revolutionized the field of natural language processing. Transformers integrate contextual information across words via structured circuit computations. Prior work has focused on the internal representations ("embeddings") generated by these circuits. In this paper, we instead analyze the circuit computations directly: we deconstruct these computations into the functionally-specialized "transformations" that integrate contextual information across words. Using functional MRI data acquired while participants listened to naturalistic stories, we first verify that the transformations account for considerable variance in brain activity across the cortical language network. We then demonstrate that the emergent computations performed by individual, functionally-specialized "attention heads" differentially predict brain activity in specific cortical regions. These heads fall along gradients corresponding to different layers and context lengths in a low-dimensional cortical space.
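A hedged sketch of one way to look for the reported head-wise gradients, assuming a precomputed head-by-parcel encoding-score matrix (for example, from a ridge encoding model run separately on each attention head's "transformation") plus per-head layer and context-length properties; all names and choices below are assumptions, not the published pipeline.

# Hedged sketch: embed attention heads in a low-dimensional space by their
# cortical encoding profiles, then test whether the space organizes by layer
# and by each head's preferred context length.
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import spearmanr

def head_gradients(head_scores, head_layer, head_context_len, n_components=2):
    """head_scores: (n_heads, n_parcels); the other two are per-head properties."""
    coords = PCA(n_components=n_components).fit_transform(head_scores)
    for dim in range(n_components):
        rho_layer = spearmanr(coords[:, dim], head_layer)[0]
        rho_ctx = spearmanr(coords[:, dim], head_context_len)[0]
        print(f"PC{dim + 1}: rho(layer)={rho_layer:.2f}, rho(context)={rho_ctx:.2f}")
    return coords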


Subject(s)
Brain Mapping, Brain, Language, Magnetic Resonance Imaging, Neural Networks (Computer), Humans, Brain/physiology, Brain/diagnostic imaging, Male, Female, Adult, Young Adult, Models (Neurological), Natural Language Processing
7.
Nat Commun; 15(1): 3936, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38729961

ABSTRACT

Conversation is a primary means of social influence, but its effects on brain activity remain unknown. Previous work on conversation and social influence has emphasized public compliance, largely setting private beliefs aside. Here, we show that consensus-building conversation aligns future brain activity within groups, with alignment persisting through novel experiences participants did not discuss. Participants watched ambiguous movie clips during fMRI scanning, then conversed in groups with the goal of coming to a consensus about each clip's narrative. After conversation, participants' brains were scanned while viewing the clips again, along with novel clips from the same movies. Groups that reached consensus showed greater similarity of brain activity after conversation. Participants perceived as having high social status spoke more and signaled disbelief in others, and their groups had unequal turn-taking and lower neural alignment. By contrast, participants with central positions in their real-world social networks encouraged others to speak, facilitating greater group neural alignment. Socially central participants were also more likely to become neurally aligned to others in their groups.


Subject(s)
Brain, Consensus, Magnetic Resonance Imaging, Humans, Magnetic Resonance Imaging/methods, Female, Male, Brain/physiology, Brain/diagnostic imaging, Young Adult, Adult, Communication, Brain Mapping/methods, Adolescent
8.
Nat Commun; 15(1): 2768, 2024 Mar 30.
Article in English | MEDLINE | ID: mdl-38553456

ABSTRACT

Contextual embeddings, derived from deep language models (DLMs), provide a continuous vectorial representation of language. This embedding space differs fundamentally from the symbolic representations posited by traditional psycholinguistics. We hypothesize that language areas in the human brain, similar to DLMs, rely on a continuous embedding space to represent language. To test this hypothesis, we recorded neural activity patterns in the inferior frontal gyrus (IFG) of three participants using dense intracranial arrays while they listened to a 30-minute podcast. From these fine-grained spatiotemporal neural recordings, we derive a continuous vectorial representation for each word (i.e., a brain embedding) in each patient. Using stringent zero-shot mapping, we demonstrate that brain embeddings in the IFG and the DLM contextual embedding space have common geometric patterns. These common geometric patterns allow us to predict the brain embedding of a given left-out word in the IFG based solely on its geometric relationship to other, non-overlapping words in the podcast. Furthermore, we show that contextual embeddings capture the geometry of IFG embeddings better than static word embeddings. The continuous brain embedding space exposes a vector-based neural code for natural language processing in the human brain.
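A hedged sketch of a zero-shot mapping analysis in the spirit of the one described here: train a linear map from contextual embeddings to brain embeddings on all words but one, predict the left-out word, and score the prediction by nearest-neighbor rank. The array names are hypothetical, and the paper's exclusion of overlapping words is omitted for brevity.

# Hedged sketch: leave-one-word-out prediction of brain embeddings from
# contextual embeddings. `contextual`: (n_words, d_model); `brain`: (n_words, d_brain).
import numpy as np
from sklearn.linear_model import Ridge

def zero_shot_rank(contextual, brain, alpha=100.0):
    n = len(contextual)
    ranks = []
    for i in range(n):
        train = np.setdiff1d(np.arange(n), [i])
        model = Ridge(alpha=alpha).fit(contextual[train], brain[train])
        pred = model.predict(contextual[i:i + 1])[0]
        # Correlate the prediction with every actual brain embedding; the
        # left-out word should rank near the top if the geometry is shared.
        sims = np.array([np.corrcoef(pred, brain[j])[0, 1] for j in range(n)])
        ranks.append(int((sims > sims[i]).sum()) + 1)   # 1 = best possible rank
    return np.array(ranks)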


Subject(s)
Brain, Language, Humans, Prefrontal Cortex, Natural Language Processing
9.
bioRxiv; 2024 Jan 21.
Article in English | MEDLINE | ID: mdl-37873125

ABSTRACT

Storytelling, an ancient way for humans to share individual experiences with others, has been found to induce neural synchronization among listeners. In our exploration of the dynamic fluctuations in listener-listener (LL) coupling throughout stories, we uncover a significant correlation between LL and lagged speaker-listener (lag-SL) couplings over time. Using the analogy of neural pattern (dis)similarity as distances between participants, we term this phenomenon the "herding effect": like a shepherd guiding a group of sheep, the more closely listeners follow the speaker's prior brain activity patterns (higher lag-SL similarity), the more tightly they cluster together (higher LL similarity). This herding effect is particularly pronounced in brain regions where neural synchronization among listeners tracks with behavioral ratings of narrative engagement, highlighting the mediating role of narrative content in the observed multi-brain neural coupling dynamics. By integrating LL and SL neural couplings, this study illustrates how unfolding stories shape a dynamic multi-brain functional network and how the configuration of this network may be associated with moment-by-moment efficacy of communication.

10.
Cogn Sci; 47(10): e13343, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37867379

ABSTRACT

Event segmentation theory posits that people segment continuous experience into discrete events and that event boundaries occur when there are large transient increases in prediction error. Here, we set out to test this theory in the context of story listening, by using a deep learning language model (GPT-2) to compute the predicted probability distribution of the next word, at each point in the story. For three stories, we used the probability distributions generated by GPT-2 to compute the time series of prediction error. We also asked participants to listen to these stories while marking event boundaries. We used regression models to relate the GPT-2 measures to the human segmentation data. We found that event boundaries are associated with transient increases in Bayesian surprise but not with a simpler measure of prediction error (surprisal) that tracks, for each word in the story, how strongly that word was predicted at the previous time point. These results support the hypothesis that prediction error serves as a control mechanism governing event segmentation and point to important differences between operational definitions of prediction error.
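A minimal sketch of the two prediction-error measures contrasted in this abstract, computed from GPT-2 via the Hugging Face transformers library: surprisal of each token, and Bayesian surprise as the KL divergence between the next-word distributions after versus before observing that token. The word-to-token alignment and preprocessing used in the paper are omitted.

# Hedged sketch: token-level surprisal and Bayesian surprise from GPT-2.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def prediction_error_measures(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logp = torch.log_softmax(model(ids).logits[0], dim=-1)  # (T, vocab)
    # Surprisal of token t: -log p(token_t | tokens_<t).
    surprisal = -logp[:-1].gather(-1, ids[0, 1:, None]).squeeze(-1)
    # Bayesian surprise at token t: KL(next-word dist after t || dist before t).
    p_after = logp[1:].exp()
    bayes_surprise = (p_after * (logp[1:] - logp[:-1])).sum(-1)
    return surprisal, bayes_surprise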


Subject(s)
Language, Humans, Bayes Theorem, Probability
11.
Neural Netw; 168: 89-104, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37748394

ABSTRACT

Deep Neural Networks (DNNs) have become an important tool for modeling brain and behavior. One key area of interest has been to apply these networks to model human similarity judgments. Several previous studies have used the embeddings from the penultimate layer of vision DNNs and showed that reweighting these features improves the fit between human similarity judgments and DNNs. These studies underline the idea that the embeddings form a good basis set but lack the correct level of salience. Here we re-examined the grounds for this idea and, on the contrary, hypothesized that these embeddings not only form a good basis set but also have the correct level of salience to account for similarity judgments; the high-dimensional embedding simply needs to be pruned to select the features relevant to the domain for which a similarity space is modeled. In Study 1 we supervised DNN pruning with a subset of human similarity judgments. We found that pruning (i) improved out-of-sample prediction of human similarity judgments from DNN embeddings, (ii) produced better alignment with the WordNet hierarchy, and (iii) retained much higher classification accuracy than reweighting. Study 2 showed that pruning supervised by neurobiological data is highly effective in improving out-of-sample prediction of brain-derived representational dissimilarity matrices from DNN embeddings, at times revealing isomorphisms not otherwise observable. Using pruned DNNs, image-level heatmaps can be produced to identify image regions whose features load on dimensions coded by a brain area. Pruning supervised by human brain or behavioral data therefore identifies alignable dimensions of knowledge between DNNs and humans and constitutes an effective method for understanding the organization of knowledge in neural networks.
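A hedged sketch of supervised dimension selection in the spirit of the pruning described here, implemented as a simple greedy search; the scoring function, distance metric, and stopping rule are illustrative assumptions, not the published procedure.

# Hedged sketch: greedily keep the embedding dimensions whose similarity
# structure best matches human similarity judgments, then evaluate the kept
# dimensions on held-out items or judgments.
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist

def fit_score(emb, dims, human_sim):
    """Negative-distance similarity over selected dims vs. human judgments."""
    model_sim = -pdist(emb[:, dims], metric="euclidean")
    return spearmanr(model_sim, human_sim)[0]

def prune_dimensions(emb, human_sim, n_keep=64):
    """emb: (n_items, n_dims); human_sim: condensed (n_items choose 2) vector."""
    kept, remaining = [], list(range(emb.shape[1]))
    while len(kept) < n_keep:
        gains = [fit_score(emb, kept + [d], human_sim) for d in remaining]
        kept.append(remaining.pop(int(np.argmax(gains))))
    return kept   # indices of retained embedding dimensions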


Subject(s)
Brain, Neural Networks (Computer), Humans
12.
bioRxiv; 2023 Jun 29.
Article in English | MEDLINE | ID: mdl-37425747

ABSTRACT

Effective communication hinges on a mutual understanding of word meaning in different contexts. The embedding space learned by large language models can serve as an explicit model of the shared, context-rich meaning space humans use to communicate their thoughts. We recorded brain activity using electrocorticography during spontaneous, face-to-face conversations in five pairs of epilepsy patients. We demonstrate that the linguistic embedding space can capture the linguistic content of word-by-word neural alignment between speaker and listener. Linguistic content emerged in the speaker's brain before word articulation, and the same linguistic content rapidly reemerged in the listener's brain after word articulation. These findings establish a computational framework to study how human brains transmit their thoughts to one another in real-world contexts.

13.
Cereb Cortex; 33(12): 7830-7842, 2023 Jun 08.
Article in English | MEDLINE | ID: mdl-36939309

ABSTRACT

Word embedding representations have been shown to be effective in predicting human neural responses to linguistic stimuli. While these representations are sensitive to the textual context, they lack extratextual sources of context such as prior knowledge, thoughts, and beliefs, all of which constitute the listener's perspective. In this study, we propose conceptualizing the listener's perspective as a source that induces changes in the embedding space. We relied on functional magnetic resonance imaging data collected by Yeshurun Y, Swanson S, Simony E, Chen J, Lazaridi C, Honey CJ, Hasson U. Same story, different story: the neural representation of interpretive frameworks. Psychol Sci. 2017;28(3):307-319, in which two groups of human listeners (n = 40) listened to the same story but with different perspectives. Using a dedicated fine-tuning process, we created two modified versions of a word embedding space, corresponding to the two groups of listeners. We found that each transformed space better fit the neural responses of the corresponding group, and that the spatial distances between the spaces reflected both the interpretational differences between perspectives and the group-level neural differences. Together, our results demonstrate how aligning a continuous embedding space to a specific context can provide a novel way of modeling listeners' intrinsic perspectives.


Subject(s)
Speech Perception, Humans, Speech Perception/physiology, Auditory Perception
14.
Psychol Sci; 34(3): 326-344, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36595492

ABSTRACT

When recalling memories, we often scan information-rich continuous episodes, for example, to find our keys. How does our brain access and search through those memories? We suggest that high-level structure, marked by event boundaries, guides us through this process: In our computational model, memory scanning is sped up by skipping ahead to the next event boundary upon reaching a decision threshold. In adult Mechanical Turk workers from the United States, we used a movie (normed for event boundaries; Study 1, N = 203) to prompt memory scanning of movie segments for answers (Study 2, N = 298) and mental simulation (Study 3, N = 100) of these segments. Confirming model predictions, we found that memory-scanning times varied as a function of the number of event boundaries within a segment and the distance of the search target to the previous boundary (the key diagnostic parameter). Mental simulation times were also described by a skipping process with a higher skipping threshold than memory scanning. These findings identify event boundaries as access points to memory.
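A toy simulation of the skipping account sketched in this abstract: scanning steps through a segment and, after a fixed number of unsuccessful steps (a stand-in for reaching the decision threshold), jumps ahead to the next event boundary. All parameters are made up for illustration; this is not the authors' computational model.

# Toy simulation: scan time as a function of event boundaries and target position.
import numpy as np

def scan_time(boundaries, target, step_cost=1.0, skip_cost=2.0, patience=3):
    """boundaries: sorted positions of event boundaries within the segment;
    target: position of the searched-for moment. Returns simulated scan time."""
    t, pos, steps_since_boundary = 0.0, 0.0, 0
    while pos < target:
        pos += 1.0
        t += step_cost
        steps_since_boundary += 1
        if steps_since_boundary >= patience:
            nxt = [b for b in boundaries if b > pos]
            if nxt and nxt[0] < target:        # skip ahead to the next boundary
                pos = nxt[0]
                t += skip_cost
                steps_since_boundary = 0
    return t

# Simulated scan time grows with the number of boundaries crossed and with the
# target's distance from the previous boundary, matching the diagnostic pattern.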


Subject(s)
Episodic Memory, Adult, Humans, Mental Recall, Brain
15.
Data Brief; 46: 108788, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36506797

ABSTRACT

Whole-brain functional magnetic resonance imaging (fMRI) data from twenty healthy human participants were collected during naturalistic movie watching and free spoken recall tasks. Participants watched ten short (approximately 2 - 8 min) audiovisual movies and then verbally described what they remembered about the movies in their own words. Participants' verbal responses were audio recorded using an MR-compatible microphone. The audio recordings were transcribed and timestamped by independent coders. The neural and behavioral data were organized in the Brain Imaging Data Structure (BIDS) format and made publicly available via OpenNeuro.org. The dataset can be used to explore the neural bases of naturalistic memory and other cognitive functions including but not limited to visual/auditory perception, language comprehension, and speech generation.

16.
Brain Behav; 13(2): e2869, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36579557

ABSTRACT

INTRODUCTION: Few of us are skilled lipreaders; most struggle with the task. The neural substrates that enable comprehension of connected natural speech via lipreading are not yet well understood. METHODS: We used a data-driven approach to identify brain areas underlying the lipreading of an 8-min narrative with participants whose lipreading skills varied extensively (range 6-100%, mean = 50.7%). The participants also listened to and read the same narrative. The similarity between individual participants' brain activity during the whole narrative, within and between conditions, was estimated by a voxel-wise comparison of the Blood Oxygenation Level Dependent (BOLD) signal time courses. RESULTS: Inter-subject correlation (ISC) of the time courses revealed that lipreading, listening to, and reading the narrative were largely supported by the same brain areas in the temporal, parietal, and frontal cortices, the precuneus, and the cerebellum. Additionally, listening to and reading connected naturalistic speech engaged higher-level linguistic processing in the parietal and frontal cortices more consistently than lipreading did, probably paralleling the limited understanding obtained via lipreading. Importantly, a higher lipreading test score and a higher subjective estimate of comprehension of the lipread narrative were associated with activity in the superior and middle temporal cortex. CONCLUSIONS: Our new data illustrate that findings from prior studies using well-controlled, repetitive speech stimuli and stimulus-driven data analyses are also valid for naturalistic connected speech. Our results may point to efficient use of brain areas dealing with phonological processing in skilled lipreaders.
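A hedged sketch of a leave-one-out voxel-wise inter-subject correlation (ISC) analysis of the kind described in the Methods, assuming a hypothetical array bold of shape (subjects, timepoints, voxels) for one condition, already aligned to a common anatomical space.

# Hedged sketch: leave-one-out ISC of BOLD time courses, voxel by voxel.
import numpy as np

def leave_one_out_isc(bold):
    n_subj, _, n_vox = bold.shape
    isc = np.zeros((n_subj, n_vox))
    for s in range(n_subj):
        others = bold[np.arange(n_subj) != s].mean(axis=0)   # average of the rest
        for v in range(n_vox):
            isc[s, v] = np.corrcoef(bold[s, :, v], others[:, v])[0, 1]
    return isc   # per-subject, per-voxel time-course correlation

# Between-condition similarity (e.g., lipreading vs. listening) can be computed
# analogously by correlating one subject's time course in one condition with
# the group average from the other condition.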


Subject(s)
Lipreading, Speech Perception, Humans, Female, Brain, Auditory Perception, Cognition, Magnetic Resonance Imaging
17.
Proc Natl Acad Sci U S A; 119(51): e2209307119, 2022 Dec 20.
Article in English | MEDLINE | ID: mdl-36508677

ABSTRACT

When listening to spoken narratives, we must integrate information over multiple, concurrent timescales, building up from words to sentences to paragraphs to a coherent narrative. Recent evidence suggests that the brain relies on a chain of hierarchically organized areas with increasing temporal receptive windows to process naturalistic narratives. We hypothesized that the structure of this cortical processing hierarchy should result in an observable sequence of response lags between networks comprising the hierarchy during narrative comprehension. This study uses functional MRI to estimate the response lags between functional networks during narrative comprehension. We use intersubject cross-correlation analysis to capture network connectivity driven by the shared stimulus. We found a fixed temporal sequence of response lags-on the scale of several seconds-starting in early auditory areas, followed by language areas, the attention network, and lastly the default mode network. This gradient is consistent across eight distinct stories but absent in data acquired during rest or using a scrambled story stimulus, supporting our hypothesis that narrative construction gives rise to internetwork lags. Finally, we build a simple computational model for the neural dynamics underlying the construction of nested narrative features. Our simulations illustrate how the gradual accumulation of information within the boundaries of nested linguistic events, accompanied by increased activity at each level of the processing hierarchy, can give rise to the observed lag gradient.
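A hedged sketch of estimating an inter-network response lag with inter-subject cross-correlation, as outlined in this abstract; net_a and net_b are hypothetical (subjects x timepoints) network-average time courses, and correlating one subject against the average of the others isolates stimulus-driven coupling.

# Hedged sketch: peak lag of the inter-subject cross-correlation between two networks.
import numpy as np

def intersubject_lag(net_a, net_b, max_lag=15):
    """Returns the lag (in TRs) of peak cross-correlation, averaged over subjects."""
    n_subj, n_t = net_a.shape
    lags = np.arange(-max_lag, max_lag + 1)
    peak_lags = []
    for s in range(n_subj):
        a = net_a[s] - net_a[s].mean()
        b = net_b[np.arange(n_subj) != s].mean(axis=0)   # other subjects' average
        b = b - b.mean()
        # Correlate a(t) with b(t + lag) for each lag, using the overlapping span.
        xc = [np.corrcoef(a[max(0, -l):n_t - max(0, l)],
                          b[max(0, l):n_t - max(0, -l)])[0, 1] for l in lags]
        peak_lags.append(lags[int(np.argmax(xc))])
    return float(np.mean(peak_lags))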


Subject(s)
Brain Mapping, Speech Perception, Speech Perception/physiology, Comprehension/physiology, Brain/diagnostic imaging, Brain/physiology, Magnetic Resonance Imaging
18.
Elife; 11, 2022 Dec 15.
Article in English | MEDLINE | ID: mdl-36519530

ABSTRACT

The brain actively reshapes our understanding of past events in light of new incoming information. In the current study, we ask how the brain supports this updating process during the encoding and recall of naturalistic stimuli. One group of participants watched a movie ('The Sixth Sense') with a cinematic 'twist' at the end that dramatically changed the interpretation of previous events. Next, participants were asked to verbally recall the movie events, taking into account the new 'twist' information. Most participants updated their recall to incorporate the twist. Two additional groups recalled the movie without having to update their memories during recall: one group never saw the twist; another group was exposed to the twist prior to the beginning of the movie, and thus the twist information was incorporated both during encoding and recall. We found that providing participants with information about the twist beforehand altered neural response patterns during movie-viewing in the default mode network (DMN). Moreover, presenting participants with the twist at the end of the movie changed the neural representation of the previously-encoded information during recall in a subset of DMN regions. Further evidence for this transformation was obtained by comparing the neural activation patterns during encoding and recall and correlating them with behavioral signatures of memory updating. Our results demonstrate that neural representations of past events encoded in the DMN are dynamically integrated with new information that reshapes our understanding in natural contexts.


Subject(s)
Brain Mapping, Episodic Memory, Humans, Magnetic Resonance Imaging/methods, Brain/physiology, Mental Recall/physiology
19.
Neuroimage Rep; 2(3), 2022 Sep.
Article in English | MEDLINE | ID: mdl-36081469

ABSTRACT

We explored the potential of using real-time fMRI (rt-fMRI) neurofeedback training to bias interpretations of naturalistic narrative stimuli. Participants were randomly assigned to one of two possible conditions, each corresponding to a different interpretation of an ambiguous spoken story. While participants listened to the story in the scanner, neurofeedback was used to reward neural activity corresponding to the assigned interpretation. After scanning, final interpretations were assessed. While neurofeedback did not change story interpretations on average, participants with higher levels of decoding accuracy during the neurofeedback procedure were more likely to adopt the assigned interpretation; additional control conditions are needed to establish the role of individualized feedback in driving this result. While naturalistic stimuli introduce a unique set of challenges in providing effective and individualized neurofeedback, we believe that this technique holds promise for individualized cognitive therapy.

20.
Nat Neurosci; 25(3): 369-380, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35260860

ABSTRACT

Departing from traditional linguistic models, advances in deep learning have given rise to a new class of model: predictive (autoregressive) deep language models (DLMs). Using a self-supervised next-word prediction task, these models generate appropriate linguistic responses in a given context. In the current study, nine participants listened to a 30-min podcast while their brain responses were recorded using electrocorticography (ECoG). We provide empirical evidence that the human brain and autoregressive DLMs share three fundamental computational principles as they process the same natural narrative: (1) both are engaged in continuous next-word prediction before word onset; (2) both match their pre-onset predictions to the incoming word to calculate post-onset surprise; (3) both rely on contextual embeddings to represent words in natural contexts. Together, our findings suggest that autoregressive DLMs provide a new and biologically feasible computational framework for studying the neural basis of language.


Subject(s)
Language, Linguistics, Brain/physiology, Humans