Results 1 - 20 of 18,770

1.
Nature ; 631(8021): 610-616, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38961302

ABSTRACT

From sequences of speech sounds [1,2] or letters [3], humans can extract rich and nuanced meaning through language. This capacity is essential for human communication. Yet, despite a growing understanding of the brain areas that support linguistic and semantic processing [4-12], the derivation of linguistic meaning in neural tissue at the cellular level and over the timescale of action potentials remains largely unknown. Here we recorded from single cells in the left language-dominant prefrontal cortex as participants listened to semantically diverse sentences and naturalistic stories. By tracking their activities during natural speech processing, we discover a fine-scale cortical representation of semantic information by individual neurons. These neurons responded selectively to specific word meanings and reliably distinguished words from nonwords. Moreover, rather than responding to the words as fixed memory representations, their activities were highly dynamic, reflecting the words' meanings based on their specific sentence contexts and independent of their phonetic form. Collectively, we show how these cell ensembles accurately predicted the broad semantic categories of the words as they were heard in real time during speech and how they tracked the sentences in which they appeared. We also show how they encoded the hierarchical structure of these meaning representations and how these representations mapped onto the cell population. Together, these findings reveal a finely detailed cortical organization of semantic representations at the neuron scale in humans and begin to illuminate the cellular-level processing of meaning during language comprehension.


Subject(s)
Comprehension, Neurons, Prefrontal Cortex, Semantics, Single-Cell Analysis, Speech Perception, Adult, Aged, Female, Humans, Male, Middle Aged, Comprehension/physiology, Neurons/physiology, Phonetics, Prefrontal Cortex/physiology, Prefrontal Cortex/cytology, Speech Perception/physiology, Narration
2.
Nature ; 620(7972): 172-180, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37438534

ABSTRACT

Large language models (LLMs) have demonstrated impressive capabilities, but the bar for clinical applications is high. Attempts to assess the clinical knowledge of models typically rely on automated evaluations based on limited benchmarks. Here, to address these limitations, we present MultiMedQA, a benchmark combining six existing medical question answering datasets spanning professional medicine, research and consumer queries and a new dataset of medical questions searched online, HealthSearchQA. We propose a human evaluation framework for model answers along multiple axes including factuality, comprehension, reasoning, possible harm and bias. In addition, we evaluate Pathways Language Model [1] (PaLM, a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM [2], on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA [3], MedMCQA [4], PubMedQA [5] and Measuring Massive Multitask Language Understanding (MMLU) clinical topics [6]), including 67.6% accuracy on MedQA (US Medical Licensing Exam-style questions), surpassing the prior state of the art by more than 17%. However, human evaluation reveals key gaps. To resolve this, we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show that comprehension, knowledge recall and reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLMs for clinical applications.


Subject(s)
Benchmarking, Computer Simulation, Knowledge, Medicine, Natural Language Processing, Bias, Clinical Competence, Comprehension, Datasets as Topic, Licensure, Medicine/methods, Medicine/standards, Patient Safety, Physicians
3.
Annu Rev Neurosci ; 42: 47-65, 2019 07 08.
Article in English | MEDLINE | ID: mdl-30699049

ABSTRACT

The modern cochlear implant (CI) is the most successful neural prosthesis developed to date. CIs provide hearing to the profoundly hearing impaired and allow the acquisition of spoken language in children born deaf. Results from studies enabled by the CI have provided new insights into (a) minimal representations at the periphery for speech reception, (b) brain mechanisms for decoding speech presented in quiet and in acoustically adverse conditions, (c) the developmental neuroscience of language and hearing, and (d) the mechanisms and time courses of intramodal and cross-modal plasticity. Additionally, the results have underscored the interconnectedness of brain functions and the importance of top-down processes in perception and learning. The findings are described in this review with emphasis on the developing brain and the acquisition of hearing and spoken language.


Subject(s)
Auditory Perception/physiology, Cochlear Implants, Critical Period, Psychological, Language Development, Animals, Auditory Perceptual Disorders/etiology, Brain/growth & development, Cochlear Implantation, Comprehension, Cues, Deafness/congenital, Deafness/physiopathology, Deafness/psychology, Deafness/surgery, Equipment Design, Humans, Language Development Disorders/etiology, Language Development Disorders/prevention & control, Learning/physiology, Neuronal Plasticity, Photic Stimulation
4.
Proc Natl Acad Sci U S A ; 121(10): e2307876121, 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38422017

ABSTRACT

During real-time language comprehension, our minds rapidly decode complex meanings from sequences of words. The difficulty of doing so is known to be related to words' contextual predictability, but what cognitive processes do these predictability effects reflect? In one view, predictability effects reflect facilitation due to anticipatory processing of words that are predictable from context. This view predicts a linear effect of predictability on processing demand. In another view, predictability effects reflect the costs of probabilistic inference over sentence interpretations. This view predicts either a logarithmic or a superlogarithmic effect of predictability on processing demand, depending on whether it assumes pressures toward a uniform distribution of information over time. The empirical record is currently mixed. Here, we revisit this question at scale: We analyze six reading datasets, estimate next-word probabilities with diverse statistical language models, and model reading times using recent advances in nonlinear regression. Results support a logarithmic effect of word predictability on processing difficulty, which favors probabilistic inference as a key component of human language processing.
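For readers who want the contrast made explicit, one standard way to write the quantities at stake (our notation, not necessarily the paper's) is:

```latex
% Surprisal of word w_t given its context (estimated from a language model),
% and three candidate links from predictability to processing demand D (e.g., reading time):
\begin{align}
  s(w_t) &= -\log p(w_t \mid w_1, \dots, w_{t-1}) \\
  D_{\mathrm{linear}}(w_t)   &\propto 1 - p(w_t \mid \mathrm{context}) \\
  D_{\mathrm{log}}(w_t)      &\propto s(w_t) \\
  D_{\mathrm{superlog}}(w_t) &\propto s(w_t)^{\,k}, \qquad k > 1
\end{align}
```

The linear link corresponds to the anticipatory-facilitation view, the logarithmic link to probabilistic inference, and the superlogarithmic link to probabilistic inference with an added pressure toward uniform information density; the result reported above favors the logarithmic link.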


Asunto(s)
Comprensión , Lenguaje , Humanos , Modelos Estadísticos
5.
Proc Natl Acad Sci U S A ; 121(23): e2311425121, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38814865

ABSTRACT

Theories of language development, informed largely by studies of Western, middle-class infants, have highlighted the language that caregivers direct to children as a key driver of language learning. However, some have argued that language development unfolds similarly across environmental contexts, including those in which child-directed language is scarce. This raises the possibility that children are able to learn from other sources of language in their environments, particularly the language directed to others in their environment. We explore this hypothesis with infants in an indigenous Tseltal-speaking community in Southern Mexico who are rarely spoken to, yet have the opportunity to overhear a great deal of other-directed language by virtue of being carried on their mothers' backs. Adapting a previously established gaze-tracking method for detecting early word knowledge to our field setting, we find that Tseltal infants exhibit implicit knowledge of common nouns (Exp. 1), analogous to their US peers who are frequently spoken to. Moreover, they exhibit comprehension of Tseltal honorific terms that are exclusively used to greet adults in the community (Exp. 2), representing language that could only have been learned through overhearing. In so doing, Tseltal infants demonstrate an ability to discriminate words with similar meanings and perceptually similar referents at an earlier age than has been shown among Western children. Together, these results suggest that for some infants, learning from overhearing may be an important path toward developing language.


Subject(s)
Comprehension, Language Development, Humans, Infant, Female, Male, Comprehension/physiology, Mexico, Language, Vocabulary
6.
Proc Natl Acad Sci U S A ; 121(31): e2317653121, 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39008690

ABSTRACT

In intentional behavior, the final goal of an action is crucial in determining the entire sequence of motor acts. Neurons have been described in the inferior parietal lobule of monkeys that, besides encoding a specific motor act (e.g., grasping), have their discharge modulated by the final goal of the intended action (e.g., grasping-to-eat). Many of these "action-constrained" neurons have mirror properties, responding to the observation of the motor act they encode, provided that it is embedded in a specific action. Thanks to this mechanism, observers have an internal copy of the whole action before its execution and may, in this way, understand the agent's intention. The chained organization of motor acts has been demonstrated in schoolchildren. Here, we examined whether this organization is already present in very young children. To this purpose, we recorded EMG from the mylohyoid (MH) muscle in children aged 3 to 6 y. The results showed that preschoolers, like older children, possess the chained organization of motor acts in execution. Interestingly, in comparison to older children, they have a delayed ability to use this mechanism to infer others' intentions by observation. Finally, we found a significant negative association between the children's age and the activation of the MH muscle during the grasp-to-eat phase in the observation condition. We tentatively interpreted this as a sign of immature control of motor acts.


Asunto(s)
Intención , Humanos , Niño , Preescolar , Masculino , Femenino , Electromiografía , Comprensión/fisiología , Desempeño Psicomotor/fisiología
7.
PLoS Biol ; 21(3): e3002046, 2023 03.
Article in English | MEDLINE | ID: mdl-36947552

ABSTRACT

Understanding speech requires mapping fleeting and often ambiguous soundwaves to meaning. While humans are known to exploit their capacity to contextualize to facilitate this process, how internal knowledge is deployed online remains an open question. Here, we present a model that extracts multiple levels of information from continuous speech online. The model applies linguistic and nonlinguistic knowledge to speech processing, by periodically generating top-down predictions and incorporating bottom-up incoming evidence in a nested temporal hierarchy. We show that a nonlinguistic context level provides semantic predictions informed by sensory inputs, which are crucial for disambiguating among multiple meanings of the same word. The explicit knowledge hierarchy of the model enables a more holistic account of the neurophysiological responses to speech compared to using lexical predictions generated by a neural network language model (GPT-2). We also show that hierarchical predictions reduce peripheral processing via minimizing uncertainty and prediction error. With this proof-of-concept model, we demonstrate that the deployment of hierarchical predictions is a possible strategy for the brain to dynamically utilize structured knowledge and make sense of the speech input.
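As a loose illustration of the disambiguation idea, and not the authors' model, the toy Python sketch below shows how a context-level prior combined with bottom-up word-form evidence selects among meanings of an ambiguous word; the word "bank", the two contexts, and all probabilities are hypothetical.

```python
import numpy as np

meanings = ["river_edge", "financial_institution"]

# Top-down prediction: prior over meanings supplied by the (nonlinguistic) context level.
prior_given_context = {"fishing trip": np.array([0.9, 0.1]),
                       "loan office":  np.array([0.2, 0.8])}

# Bottom-up evidence: likelihood of hearing the word form "bank" under each meaning.
# It is identical for both meanings, which is exactly why context is needed to disambiguate.
likelihood = np.array([1.0, 1.0])

for context, prior in prior_given_context.items():
    posterior = likelihood * prior
    posterior /= posterior.sum()                      # normalize to a probability distribution
    best = meanings[int(np.argmax(posterior))]
    print(f"context={context!r}: P(meaning | 'bank') = {posterior.round(2)} -> {best}")
```

The sketch only captures the single step "context prior times acoustic evidence"; the model described in the abstract iterates such prediction-update steps across multiple levels of a nested temporal hierarchy.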


Subject(s)
Comprehension, Speech Perception, Humans, Comprehension/physiology, Speech, Speech Perception/physiology, Brain/physiology, Language
8.
Proc Natl Acad Sci U S A ; 120(47): e2306279120, 2023 Nov 21.
Article in English | MEDLINE | ID: mdl-37963247

ABSTRACT

Recent neurobiological models on language suggest that auditory sentence comprehension is supported by a coordinated temporal interplay within a left-dominant brain network, including the posterior inferior frontal gyrus (pIFG), posterior superior temporal gyrus and sulcus (pSTG/STS), and angular gyrus (AG). Here, we probed the timing and causal relevance of the interplay between these regions by means of concurrent transcranial magnetic stimulation and electroencephalography (TMS-EEG). Our TMS-EEG experiments reveal region- and time-specific causal evidence for a bidirectional information flow from left pSTG/STS to left pIFG and back during auditory sentence processing. Adapting a condition-and-perturb approach, our findings further suggest that the left pSTG/STS can be supported by the left AG in a state-dependent manner.


Asunto(s)
Lenguaje , Estimulación Magnética Transcraneal , Corteza Cerebral , Lóbulo Parietal , Comprensión/fisiología , Imagen por Resonancia Magnética , Mapeo Encefálico
9.
J Neurosci ; 44(12)2024 Mar 20.
Article in English | MEDLINE | ID: mdl-38267261

ABSTRACT

Sentence fragments strongly predicting a specific subsequent meaningful word elicit larger preword slow waves, prediction potentials (PPs), than unpredictive contexts. To test current predictive processing models, 128-channel EEG data were collected from participants of both sexes to examine (1) whether different semantic PPs are elicited in language comprehension and production and (2) whether these PPs originate from the same specific "prediction area(s)" or rather from widely distributed category-specific neuronal circuits reflecting the meaning of the predicted item. Slow waves larger after predictable than unpredictable contexts were present both before subjects heard the sentence-final word in the comprehension experiment and before they pronounced the sentence-final word in the production experiment. Crucially, cortical sources underlying the semantic PP were distributed across several cortical areas and differed between the semantic categories of the expected words. In both production and comprehension, the anticipation of animal words was reflected by sources in posterior visual areas, whereas predictable tool words were preceded by sources in the frontocentral sensorimotor cortex. For both modalities, PP size increased with higher cloze probability, further confirming that it reflects semantic prediction, and was larger when participants completed sentence fragments with shorter latencies. These results sit well with theories viewing distributed semantic category-specific circuits as the mechanistic basis of semantic prediction in the two modalities.


Asunto(s)
Semántica , Corteza Sensoriomotora , Masculino , Femenino , Humanos , Comprensión/fisiología , Lenguaje , Lectura , Electroencefalografía
10.
J Neurosci ; 44(28)2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38839302

ABSTRACT

Temporal prediction assists language comprehension. In a series of recent behavioral studies, we have shown that listeners specifically employ rhythmic modulations of prosody to estimate the duration of upcoming sentences, thereby speeding up comprehension. In the current human magnetoencephalography (MEG) study on participants of either sex, we show that the human brain achieves this function through a mechanism termed entrainment. Through entrainment, electrophysiological brain activity maintains and continues contextual rhythms beyond their offset. Our experiment combined exposure to repetitive prosodic contours with the subsequent presentation of visual sentences that either matched or mismatched the duration of the preceding contour. During exposure to prosodic contours, we observed MEG coherence with the contours, which was source-localized to right-hemispheric auditory areas. During the processing of the visual targets, activity at the frequency of the preceding contour was still detectable in the MEG; yet sources shifted to the (left) frontal cortex, in line with a functional inheritance of the rhythmic acoustic context for prediction. Strikingly, when the target sentence was shorter than expected from the preceding contour, an omission response appeared in the evoked potential record. We conclude that prosodic entrainment is a functional mechanism of temporal prediction in language comprehension. In general, acoustic rhythms appear to enable language to draw on the brain's electrophysiological mechanisms of temporal prediction.


Asunto(s)
Magnetoencefalografía , Percepción del Habla , Humanos , Masculino , Femenino , Adulto , Percepción del Habla/fisiología , Adulto Joven , Lenguaje , Comprensión/fisiología , Estimulación Acústica/métodos , Habla/fisiología , Estimulación Luminosa/métodos
11.
PLoS Biol ; 20(7): e3001713, 2022 07.
Article in English | MEDLINE | ID: mdl-35834569

ABSTRACT

Human language stands out in the natural world as a biological signal that uses a structured system to combine the meanings of small linguistic units (e.g., words) into larger constituents (e.g., phrases and sentences). However, the physical dynamics of speech (or sign) do not stand in a one-to-one relationship with the meanings listeners perceive. Instead, listeners infer meaning based on their knowledge of the language. The neural readouts of the perceptual and cognitive processes underlying these inferences are still poorly understood. In the present study, we used scalp electroencephalography (EEG) to compare the neural response to phrases (e.g., the red vase) and sentences (e.g., the vase is red), which were close in semantic meaning and had been synthesized to be physically indistinguishable. Differences in structure were well captured in the reorganization of neural phase responses in delta (approximately <2 Hz) and theta bands (approximately 2 to 7 Hz), and in power and power connectivity changes in the alpha band (approximately 7.5 to 13.5 Hz). Consistent with predictions from a computational model, sentences showed more power, more power connectivity, and more phase synchronization than phrases did. Theta-gamma phase-amplitude coupling occurred, but did not differ between the syntactic structures. Spectral-temporal response function (STRF) modeling revealed different encoding states for phrases and sentences, over and above the acoustically driven neural response. Our findings provide a comprehensive description of how the brain encodes and separates linguistic structures in the dynamics of neural responses. They imply that phase synchronization and strength of connectivity are readouts for the constituent structure of language. The results provide a novel basis for future neurophysiological research on linguistic structure representation in the brain, and, together with our simulations, support time-based binding as a mechanism of structure encoding in neural dynamics.


Asunto(s)
Lenguaje , Percepción del Habla , Comprensión/fisiología , Electroencefalografía/métodos , Humanos , Lingüística , Percepción del Habla/fisiología
12.
Cereb Cortex ; 34(2)2024 01 31.
Article in English | MEDLINE | ID: mdl-38282453

ABSTRACT

Using a syntactic priming task, we investigated the time course of syntactic encoding in Chinese sentence production and compared encoding patterns between younger and older adults. Participants alternately read sentence descriptions and overtly described pictures, while event-related potentials (ERPs) were recorded. We manipulated the abstract prime structure (active or passive) as well as the lexical overlap of the prime and the target (verb overlap or no overlap). The syntactic choice results replicated classical abstract priming and lexical boost effects in both younger and older adults. However, when production latency was taken into account, the speed benefit from syntactic repetition differed between the two age groups. Meanwhile, preferred priming facilitated production in both age groups, whereas nonpreferred priming inhibited production in the older group. For electroencephalography, an earlier effect of syntactic repetition and a later effect of lexical overlap showed a two-stage pattern of syntactic encoding. Older adults also showed a more delayed and interactive encoding pattern than younger adults, indicating a greater reliance on lexical information. These results are illustrative of the two-stage competition and residual activation models.


Asunto(s)
Comprensión , Percepción del Habla , Humanos , Anciano , Comprensión/fisiología , Percepción del Habla/fisiología , Lenguaje , Potenciales Evocados/fisiología , China
13.
Cereb Cortex ; 34(1)2024 01 14.
Article in English | MEDLINE | ID: mdl-38044462

ABSTRACT

A growing literature has shown that a binaural beat (BB), generated by the dichotic presentation of slightly mismatched pure tones, improves cognition. We recently found that BB stimulation at either beta (18 Hz) or gamma (40 Hz) frequencies enhanced auditory sentence comprehension. Here, we used electroencephalography (EEG) to characterize neural oscillations pertaining to the enhanced linguistic operations following BB stimulation. Sixty healthy young adults were randomly assigned to one of three listening groups: 18-Hz BB, 40-Hz BB, or pure-tone baseline, all embedded in music. After listening to the sound for 10 min (stimulation phase), participants underwent an auditory sentence comprehension task involving spoken sentences that contained either an object or subject relative clause (task phase). During the stimulation phase, 18-Hz BB yielded increased EEG power in a beta frequency range, while 40-Hz BB did not. During the task phase, only the 18-Hz BB resulted in significantly higher accuracy and faster response times compared with the baseline, especially on syntactically more complex object-relative sentences. The behavioral improvement by 18-Hz BB was accompanied by an attenuated beta power difference between object- and subject-relative sentences. Altogether, our findings demonstrate beta oscillations as a neural correlate of improved syntactic operation following BB stimulation.
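For readers unfamiliar with the stimulus, a minimal sketch of how an 18-Hz binaural beat can be synthesized is shown below; the carrier frequency, duration, and file name are arbitrary assumptions, not parameters taken from the study.

```python
import numpy as np
from scipy.io import wavfile

fs = 44100          # sample rate (Hz)
carrier = 240.0     # left-ear pure tone (Hz); arbitrary choice
beat = 18.0         # desired beat frequency (Hz)
dur = 10.0          # seconds

t = np.arange(int(fs * dur)) / fs
left = np.sin(2 * np.pi * carrier * t)
right = np.sin(2 * np.pi * (carrier + beat) * t)    # right ear offset by 18 Hz

# Dichotic presentation: each ear gets its own tone; the 18-Hz beat arises centrally.
stereo = np.stack([left, right], axis=1)            # column 0 = left, column 1 = right
wavfile.write("binaural_18hz.wav", fs, (0.5 * stereo * 32767).astype(np.int16))
```

Played over headphones, the two ears receive pure tones 18 Hz apart, which the auditory system combines into a perceived beat at that frequency.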


Asunto(s)
Comprensión , Electroencefalografía , Adulto Joven , Humanos , Electroencefalografía/métodos , Lenguaje , Cognición , Tiempo de Reacción , Estimulación Acústica/métodos
14.
Cereb Cortex ; 34(8)2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39098819

ABSTRACT

Acoustic, lexical, and syntactic information are processed simultaneously in the brain, requiring complex strategies to distinguish their electrophysiological activity. Capitalizing on previous work that factors out acoustic information, we could concentrate on the lexical and syntactic contributions to language processing by testing competing statistical models. We exploited electroencephalographic recordings and compared different surprisal models selectively involving lexical information, part of speech, or syntactic structures in various combinations. Electroencephalographic responses were recorded in 32 participants during listening to affirmative active declarative sentences. We compared the activation corresponding to basic syntactic structures, such as noun phrases vs. verb phrases. Lexical and syntactic processing activates different frequency bands, partially different time windows, and different networks. Moreover, surprisal models based only on the part-of-speech inventory do not explain the electrophysiological data well, while those including syntactic information do. By disentangling acoustic, lexical, and syntactic information, we demonstrated differential brain sensitivity to syntactic information. These results confirm and extend previous measures obtained with intracranial recordings, supporting our hypothesis that syntactic structures are crucial in neural language processing. This study provides a detailed understanding of how the brain processes syntactic information, highlighting the importance of syntactic surprisal in shaping neural responses during language comprehension.


Asunto(s)
Encéfalo , Electroencefalografía , Humanos , Femenino , Masculino , Electroencefalografía/métodos , Encéfalo/fisiología , Adulto , Adulto Joven , Modelos Estadísticos , Percepción del Habla/fisiología , Comprensión/fisiología , Lenguaje , Estimulación Acústica/métodos
15.
Cereb Cortex ; 34(2)2024 01 31.
Article in English | MEDLINE | ID: mdl-38314589

ABSTRACT

Sentence comprehension is highly practiced and largely automatic, but this belies the complexity of the underlying processes. We used functional neuroimaging to investigate garden-path sentences that cause difficulty during comprehension, in order to unpack the different processes used to support sentence interpretation. By investigating garden-path and other types of sentences within the same individuals, we functionally profiled different regions within the temporal and frontal cortices in the left hemisphere. The results revealed that different aspects of comprehension difficulty are handled by left posterior temporal, left anterior temporal, ventral left frontal, and dorsal left frontal cortices. The functional profiles of these regions likely lie along a spectrum of specificity to generality, including language-specific processing of linguistic representations, more general conflict resolution processes operating over linguistic representations, and processes for handling difficulty in general. These findings suggest that difficulty is not unitary and that there is a role for a variety of linguistic and non-linguistic processes in supporting comprehension.


Asunto(s)
Comprensión , Imagen por Resonancia Magnética , Humanos , Imagen por Resonancia Magnética/métodos , Lenguaje , Lingüística , Neuroimagen Funcional , Mapeo Encefálico
16.
Cereb Cortex ; 34(5)2024 May 02.
Article in English | MEDLINE | ID: mdl-38715408

ABSTRACT

Speech comprehension in noise depends on complex interactions between peripheral sensory and central cognitive systems. Despite having normal peripheral hearing, older adults show difficulties in speech comprehension. It remains unclear whether the brain's neural responses during speech perception could indicate aging. The current study examined whether individual brain activation during speech perception in different listening environments could predict age. We applied functional near-infrared spectroscopy to 93 normal-hearing human adults (20 to 70 years old) during a sentence listening task, which contained a quiet condition and 4 noisy conditions with different signal-to-noise ratios (SNR = 10, 5, 0, -5 dB). A data-driven approach, region-based brain-age predictive modeling, was adopted. We observed a significant behavioral decrease with age under the 4 noisy conditions, but not under the quiet condition. Brain activations in the SNR = 10 dB listening condition successfully predicted individual age. Moreover, we found that the bilateral visual sensory cortex, left dorsal speech pathway, left cerebellum, right temporal-parietal junction area, right homolog Wernicke's area, and right middle temporal gyrus contributed most to prediction performance. These results demonstrate that activations of regions involved in the sensory-motor mapping of sound, especially in noisy conditions, could be more sensitive measures for age prediction than external behavioral measures.
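A minimal sketch of the general brain-age prediction logic (cross-validated regression from regional activations to chronological age) is given below; the estimator, the feature layout, and the placeholder data are assumptions for illustration, not the study's pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(0)
n_subjects, n_regions = 93, 40
X = rng.normal(size=(n_subjects, n_regions))     # per-region activation values (placeholder data)
y = rng.uniform(20, 70, size=n_subjects)         # chronological ages, 20-70 years (placeholder data)

model = Ridge(alpha=1.0)                         # a simple regularized linear regressor
cv = KFold(n_splits=5, shuffle=True, random_state=0)
predicted_age = cross_val_predict(model, X, y, cv=cv)

mae = np.mean(np.abs(predicted_age - y))
r = np.corrcoef(predicted_age, y)[0, 1]
print(f"cross-validated MAE = {mae:.1f} years, r = {r:.2f}")
```

With real activation data, the cross-validated error and correlation quantify how well a listening condition's brain responses predict age; with the random placeholders above, they simply demonstrate the evaluation loop.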


Asunto(s)
Envejecimiento , Encéfalo , Comprensión , Ruido , Espectroscopía Infrarroja Corta , Percepción del Habla , Humanos , Adulto , Percepción del Habla/fisiología , Masculino , Femenino , Espectroscopía Infrarroja Corta/métodos , Persona de Mediana Edad , Adulto Joven , Anciano , Comprensión/fisiología , Encéfalo/fisiología , Encéfalo/diagnóstico por imagen , Envejecimiento/fisiología , Mapeo Encefálico/métodos , Estimulación Acústica/métodos
17.
Cereb Cortex ; 34(3)2024 03 01.
Article in English | MEDLINE | ID: mdl-38501383

ABSTRACT

A key question in research on the neurobiology of language is to what extent the language production and comprehension systems share neural infrastructure, but this question has not been addressed in the context of conversation. We utilized a public fMRI dataset where 24 participants engaged in unscripted conversations with a confederate outside the scanner, via an audio-video link. We provide evidence indicating that the two systems share neural infrastructure in the left-lateralized perisylvian language network, but diverge regarding the level of activation in regions within the network. Activity in the left inferior frontal gyrus was stronger in production compared to comprehension, while comprehension showed stronger recruitment of the left anterior middle temporal gyrus and superior temporal sulcus, compared to production. Although our results are reminiscent of the classical Broca-Wernicke model, the anterior (rather than posterior) temporal activation is a notable difference from that model. This is one of the findings that may be a consequence of the conversational setting, another being that conversational production activated what we interpret as higher-level socio-pragmatic processes. In conclusion, we present evidence for partial overlap and functional asymmetry of the neural infrastructure of production and comprehension, in the above-mentioned frontal vs temporal regions during conversation.


Asunto(s)
Comprensión , Imagen por Resonancia Magnética , Humanos , Comprensión/fisiología , Imagen por Resonancia Magnética/métodos , Comunicación , Lenguaje , Corteza Prefrontal , Mapeo Encefálico
18.
Cereb Cortex ; 34(4)2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38687241

ABSTRACT

Speech comprehension entails the neural mapping of the acoustic speech signal onto learned linguistic units. This acousto-linguistic transformation is bi-directional, whereby higher-level linguistic processes (e.g. semantics) modulate the acoustic analysis of individual linguistic units. Here, we investigated the cortical topography and linguistic modulation of the most fundamental linguistic unit, the phoneme. We presented natural speech and "phoneme quilts" (pseudo-randomly shuffled phonemes) in either a familiar (English) or unfamiliar (Korean) language to native English speakers while recording functional magnetic resonance imaging. This allowed us to dissociate the contribution of acoustic vs. linguistic processes toward phoneme analysis. We show (i) that the acoustic analysis of phonemes is modulated by linguistic analysis and (ii) that, for this modulation, both acoustic and phonetic information need to be incorporated. These results suggest that the linguistic modulation of cortical sensitivity to phoneme classes minimizes prediction error during natural speech perception, thereby aiding speech comprehension in challenging listening situations.
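A rough sketch of the quilting idea, under the assumption that phoneme boundaries are already available from a forced aligner and ignoring the segment-join smoothing a real quilting procedure would apply:

```python
import numpy as np

def phoneme_quilt(waveform, boundaries_s, fs, seed=0):
    """Pseudo-randomly shuffle phoneme-sized segments of `waveform` delimited by `boundaries_s` (seconds)."""
    idx = (np.asarray(boundaries_s) * fs).astype(int)
    segments = [waveform[a:b] for a, b in zip(idx[:-1], idx[1:])]
    order = np.random.default_rng(seed).permutation(len(segments))
    return np.concatenate([segments[i] for i in order])

# Example with a synthetic 1-second signal and made-up boundary times:
fs = 16000
speech = np.sin(2 * np.pi * 150 * np.arange(fs) / fs)
quilt = phoneme_quilt(speech, boundaries_s=[0.0, 0.2, 0.45, 0.7, 1.0], fs=fs)
```

The shuffled signal preserves phoneme-level acoustics while destroying the lexical and syntactic structure, which is what lets the study separate acoustic from linguistic contributions.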


Asunto(s)
Mapeo Encefálico , Imagen por Resonancia Magnética , Fonética , Percepción del Habla , Humanos , Percepción del Habla/fisiología , Femenino , Imagen por Resonancia Magnética/métodos , Masculino , Adulto , Adulto Joven , Lingüística , Estimulación Acústica/métodos , Comprensión/fisiología , Encéfalo/fisiología , Encéfalo/diagnóstico por imagen
19.
Cereb Cortex ; 34(2)2024 01 31.
Article in English | MEDLINE | ID: mdl-38265297

ABSTRACT

Numerous studies have been devoted to neural mechanisms of a variety of linguistic tasks (e.g. speech comprehension and production). To date, however, whether and how the neural patterns underlying different linguistic tasks are similar or differ remains elusive. In this study, we compared the neural patterns underlying 3 linguistic tasks mainly concerning speech comprehension and production. To address this, multivariate regression approaches with lesion/disconnection symptom mapping were applied to data from 216 stroke patients with damage to the left hemisphere. The results showed that lesion/disconnection patterns could predict both poststroke scores of speech comprehension and production tasks; these patterns exhibited shared regions on the temporal pole of the left hemisphere as well as unique regions contributing to the prediction for each domain. Lower scores in speech comprehension tasks were associated with lesions/abnormalities in the superior temporal gyrus and middle temporal gyrus, while lower scores in speech production tasks were associated with lesions/abnormalities in the left inferior parietal lobe and frontal lobe. These results suggested an important role of the ventral and dorsal stream pathways in speech comprehension and production (i.e. supporting the dual stream model) and highlighted the applicability of the novel multivariate disconnectome-based symptom mapping in cognitive neuroscience research.
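As a schematic of multivariate lesion-symptom mapping in general, and not this study's exact estimator or disconnection features, one could cross-validate a multivariate regression from parcel-wise lesion indicators to a behavioral score, as in the sketch below (all data are simulated placeholders).

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_patients, n_parcels = 216, 100
X = (rng.random((n_patients, n_parcels)) < 0.1).astype(float)   # 1 = parcel lesioned/disconnected
true_w = np.zeros(n_parcels)
true_w[:5] = -2.0                                               # a few simulated "critical" parcels
y = 10 + X @ true_w + rng.normal(scale=0.5, size=n_patients)    # simulated comprehension scores

pls = PLSRegression(n_components=5)                             # multivariate regression estimator
scores = cross_val_score(pls, X, y, cv=5, scoring="r2")
print("cross-validated R^2 per fold:", np.round(scores, 2))
```

Out-of-sample prediction of the behavioral score is what licenses the claim that lesion/disconnection patterns "predict" poststroke performance; the fitted weights can then be inspected to localize the contributing regions.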


Asunto(s)
Mapeo Encefálico , Accidente Cerebrovascular , Humanos , Mapeo Encefálico/métodos , Imagen por Resonancia Magnética/métodos , Lingüística , Accidente Cerebrovascular/complicaciones , Accidente Cerebrovascular/diagnóstico por imagen , Comprensión
20.
Proc Natl Acad Sci U S A ; 119(43): e2122602119, 2022 10 25.
Article in English | MEDLINE | ID: mdl-36260742

ABSTRACT

A major goal of psycholinguistic theory is to account for the cognitive constraints limiting the speed and ease of language comprehension and production. Wide-ranging evidence demonstrates a key role for linguistic expectations: A word's predictability, as measured by the information-theoretic quantity of surprisal, is a major determinant of processing difficulty. But surprisal, under standard theories, fails to predict the difficulty profile of an important class of linguistic patterns: the nested hierarchical structures made possible by recursion in human language. These nested structures are better accounted for by psycholinguistic theories of constrained working memory capacity. However, progress on theory unifying expectation-based and memory-based accounts has been limited. Here we present a unified theory of a rational trade-off between the precision of memory representations and the ease of prediction, a scaled-up computational implementation using contemporary machine learning methods, and experimental evidence in support of the theory's distinctive predictions. We show that the theory makes nuanced and distinctive predictions for difficulty patterns in nested recursive structures predicted by neither expectation-based nor memory-based theories alone. These predictions are confirmed 1) in two language comprehension experiments in English, and 2) in sentence completions in English, Spanish, and German. More generally, our framework offers computationally explicit theory and methods for understanding how memory constraints and prediction interact in human language comprehension and production.
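One generic way to write such an expectation-memory trade-off, offered as a schematic formulation rather than the authors' exact equations, is:

```latex
% Difficulty of word w_t: expected surprisal under a lossy memory representation m of the context.
% Rational trade-off: choose the memory encoding q that minimizes expected difficulty within a budget C.
\begin{align}
  D(w_t) &\approx \mathbb{E}_{m \sim q(m \mid w_1,\dots,w_{t-1})}\big[-\log p(w_t \mid m)\big] \\
  q^{*}  &= \operatorname*{arg\,min}_{q} \; \mathbb{E}[D] \quad \text{subject to} \quad \mathrm{MemoryCost}(q) \le C
\end{align}
```

When memory is perfect, the first line reduces to ordinary surprisal; when memory is constrained, predictions must be made from a degraded context representation, which is how memory limits and expectations interact in one expression.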


Subject(s)
Comprehension, Linguistics, Humans, Language, Psycholinguistics, Memory, Short-Term