ABSTRACT
Speech perception is thought to rely on a cortical feedforward serial transformation of acoustic into linguistic representations. Using intracranial recordings across the entire human auditory cortex, electrocortical stimulation, and surgical ablation, we show that cortical processing across areas is not consistent with a serial hierarchical organization. Instead, response latency and receptive field analyses demonstrate parallel and distinct information processing in the primary and nonprimary auditory cortices. This functional dissociation was also observed causally: stimulation of the primary auditory cortex evoked auditory hallucinations but did not distort or interfere with speech perception, whereas stimulation of nonprimary cortex in the superior temporal gyrus produced the opposite effects. Ablation of the primary auditory cortex did not affect speech perception. These results establish a distributed functional organization of parallel information processing throughout the human auditory cortex and demonstrate an essential, independent role for nonprimary auditory cortex in speech processing.
Subjects
Auditory Cortex/physiology; Speech/physiology; Audiometry, Pure-Tone; Electrodes; Electronic Data Processing; Humans; Phonetics; Pitch Perception; Reaction Time/physiology; Temporal Lobe/physiology
ABSTRACT
Understanding the neural basis of speech perception requires that we study the human brain both at the scale of the fundamental computational unit of neurons and in their organization across the depth of cortex. Here we used high-density Neuropixels arrays1-3 to record from 685 neurons across cortical layers at nine sites in a high-level auditory region that is critical for speech, the superior temporal gyrus4,5, while participants listened to spoken sentences. Single neurons encoded a wide range of speech sound cues, including features of consonants and vowels, relative vocal pitch, onsets, amplitude envelope and sequence statistics. Each cross-laminar recording site exhibited dominant tuning to a primary speech feature while also containing a substantial proportion of neurons that encoded other features, contributing to heterogeneous selectivity. Spatially, neurons at similar cortical depths tended to encode similar speech features. Activity across all cortical layers was predictive of high-frequency field potentials (electrocorticography), providing a neuronal origin for macroelectrode recordings from the cortical surface. Together, these results establish single-neuron tuning across the cortical laminae as an important dimension of speech encoding in human superior temporal gyrus.
Subjects
Auditory Cortex; Neurons; Speech Perception; Temporal Lobe; Humans; Acoustic Stimulation; Auditory Cortex/cytology; Auditory Cortex/physiology; Neurons/physiology; Phonetics; Speech; Speech Perception/physiology; Temporal Lobe/cytology; Temporal Lobe/physiology; Cues; Electrodes
ABSTRACT
Humans are capable of generating extraordinarily diverse articulatory movement combinations to produce meaningful speech. This ability to orchestrate specific phonetic sequences, together with their syllabification and inflection over subsecond timescales, allows us to produce thousands of word sounds and is a core component of language1,2. The fundamental cellular units and constructs by which we plan and produce words during speech, however, remain largely unknown. Here, using acute ultrahigh-density Neuropixels recordings capable of sampling across the cortical column in humans, we discover neurons in the language-dominant prefrontal cortex that encoded detailed information about the phonetic arrangement and composition of planned words during the production of natural speech. These neurons represented the specific order and structure of articulatory events before utterance and reflected the segmentation of phonetic sequences into distinct syllables. They also accurately predicted the phonetic, syllabic and morphological components of upcoming words and showed a temporally ordered dynamic. Collectively, we show how these mixtures of cells are broadly organized along the cortical column and how their activity patterns transition from articulation planning to production. We also demonstrate how these cells reliably track the detailed composition of consonant and vowel sounds during perception and how they distinguish processes specifically related to speaking from those related to listening. Together, these findings reveal a remarkably structured organization and encoding cascade of phonetic representations by prefrontal neurons in humans and demonstrate a cellular process that can support the production of speech.
Subjects
Neurons; Phonetics; Prefrontal Cortex; Speech; Humans; Movement; Neurons/physiology; Speech/physiology; Speech Perception/physiology; Prefrontal Cortex/cytology; Prefrontal Cortex/physiology
ABSTRACT
From sequences of speech sounds1,2 or letters3, humans can extract rich and nuanced meaning through language. This capacity is essential for human communication. Yet, despite a growing understanding of the brain areas that support linguistic and semantic processing4-12, the derivation of linguistic meaning in neural tissue at the cellular level and over the timescale of action potentials remains largely unknown. Here we recorded from single cells in the left language-dominant prefrontal cortex as participants listened to semantically diverse sentences and naturalistic stories. By tracking their activities during natural speech processing, we discover a fine-scale cortical representation of semantic information by individual neurons. These neurons responded selectively to specific word meanings and reliably distinguished words from nonwords. Moreover, rather than responding to the words as fixed memory representations, their activities were highly dynamic, reflecting the words' meanings based on their specific sentence contexts and independent of their phonetic form. Collectively, we show how these cell ensembles accurately predicted the broad semantic categories of the words as they were heard in real time during speech and how they tracked the sentences in which they appeared. We also show how they encoded the hierarchical structure of these meaning representations and how these representations mapped onto the cell population. Together, these findings reveal a finely detailed cortical organization of semantic representations at the neuron scale in humans and begin to illuminate the cellular-level processing of meaning during language comprehension.
Subjects
Comprehension; Neurons; Prefrontal Cortex; Semantics; Single-Cell Analysis; Speech Perception; Adult; Aged; Female; Humans; Male; Middle Aged; Comprehension/physiology; Neurons/physiology; Phonetics; Prefrontal Cortex/physiology; Prefrontal Cortex/cytology; Speech Perception/physiology; Narration
ABSTRACT
Languages disfavor word forms containing sequences of similar or identical consonants, due to the biomechanical and cognitive difficulties posed by patterns of this sort. However, the specific evolutionary processes responsible for this phenomenon are not fully understood. Words containing sequences of identical consonants may be less likely to arise than those without; processes of word form mutation may be more likely to remove than create sequences of identical consonants in word forms; finally, words containing identical consonants may die out more frequently than those without. Phylogenetic analyses of the evolution of homologous word forms indicate that words with identical consonants arise less frequently than those without. However, words with identical consonants do not die out more frequently than those without. Further analyses reveal that forms with identical consonants are replaced in basic meaning functions more frequently than words without. Taken together, results suggest that the underrepresentation of sequences of identical consonants is overwhelmingly a by-product of constraints on word form coinage, though processes related to word usage also serve to ensure that such patterns are infrequent in more salient vocabulary items. These findings clarify aspects of processes of lexical evolution and competition that take place during language change, optimizing communicative systems.
Subjects
Language; Phylogeny; Humans; Biological Evolution; Phonetics; Vocabulary
ABSTRACT
Humans can easily tune in to one talker in a multitalker environment while still picking up bits of background speech; however, it remains unclear how we perceive speech that is masked and to what degree non-target speech is processed. Some models suggest that perception can be achieved through glimpses, which are spectrotemporal regions where a talker has more energy than the background. Other models, however, require the recovery of the masked regions. To clarify this issue, we directly recorded from primary and non-primary auditory cortex (AC) in neurosurgical patients as they attended to one talker in multitalker speech and trained temporal response function models to predict high-gamma neural activity from glimpsed and masked stimulus features. We found that glimpsed speech is encoded at the level of phonetic features for target and non-target talkers, with enhanced encoding of target speech in non-primary AC. In contrast, encoding of masked phonetic features was found only for the target, with a greater response latency and distinct anatomical organization compared to glimpsed phonetic features. These findings suggest separate mechanisms for encoding glimpsed and masked speech and provide neural evidence for the glimpsing model of speech perception.
Subjects
Speech Perception; Speech; Humans; Speech/physiology; Acoustic Stimulation; Phonetics; Speech Perception/physiology; Reaction Time
ABSTRACT
Humans may retrieve words from memory by exploring and exploiting in "semantic space" similar to how nonhuman animals forage for resources in physical space. This has been studied using the verbal fluency test (VFT), in which participants generate words belonging to a semantic or phonetic category in a limited time. People produce bursts of related items during VFT, referred to as "clustering" and "switching." The strategic foraging model posits that cognitive search behavior is guided by a monitoring process which detects relevant declines in performance and then triggers the searcher to seek a new patch or cluster in memory after the current patch has been depleted. An alternative body of research proposes that this behavior can be explained by an undirected rather than strategic search process, such as random walks with or without random jumps to new parts of semantic space. This study contributes to this theoretical debate by testing for neural evidence of strategically timed switches during memory search. Thirty participants performed category and letter VFT during functional MRI. Responses were classified as cluster or switch events based on computational metrics of similarity and participant evaluations. Results showed greater hippocampal and posterior cerebellar activation during switching than clustering, even while controlling for interresponse times and linguistic distance. Furthermore, these regions exhibited ramping activity which increased during within-patch search leading up to switches. Findings support the strategic foraging model, clarifying how neural switch processes may guide memory search in a manner akin to foraging in patchy spatial environments.
Subjects
Phonetics; Semantics; Animals; Humans; Verbal Behavior/physiology; Neuropsychological Tests
ABSTRACT
Learning to process speech in a foreign language involves learning new representations for mapping the auditory signal to linguistic structure. Behavioral experiments suggest that even listeners who are highly proficient in a non-native language experience interference from representations of their native language. However, much of the evidence for such interference comes from tasks that may inadvertently increase the salience of native language competitors. Here we tested for neural evidence of proficiency and native language interference in a naturalistic story listening task. We studied electroencephalography responses of 39 native speakers of Dutch (14 male) to an English short story, spoken by a native speaker of either American English or Dutch. We modeled brain responses with multivariate temporal response functions, using acoustic and language models. We found evidence for activation of Dutch language statistics when listening to English, but only when it was spoken with a Dutch accent. This suggests that a naturalistic, monolingual setting decreases the interference from native language representations, whereas an accent in the listener's own native language may increase native language interference by increasing the salience of the native language and activating native language phonetic and lexical representations. Brain responses suggest that such interference stems from words from the native language competing with the foreign language in a single word recognition system, rather than being activated in a parallel lexicon. We further found that secondary acoustic representations of speech (after 200 ms latency) decreased with increasing proficiency. This may reflect improved acoustic-phonetic models in more proficient listeners.
Significance Statement: Behavioral experiments suggest that native language knowledge interferes with foreign language listening, but such effects may be sensitive to task manipulations, as tasks that increase metalinguistic awareness may also increase native language interference. This highlights the need for studying non-native speech processing using naturalistic tasks. We measured neural responses unobtrusively while participants listened for comprehension and characterized the influence of proficiency at multiple levels of representation. We found that salience of the native language, as manipulated through speaker accent, affected activation of native language representations: significant evidence for activation of native language (Dutch) categories was obtained only when the speaker had a Dutch accent, whereas no significant interference was found for a speaker with a native (American) accent.
Subjects
Speech Perception; Speech; Male; Humans; Language; Phonetics; Learning; Brain; Speech Perception/physiology
ABSTRACT
Auditory feedback of one's own speech is used to monitor and adaptively control fluent speech production. A new study in PLOS Biology using electrocorticography (ECoG) in listeners whose speech was artificially delayed identifies regions involved in monitoring speech production.
Subjects
Speech Perception; Speech; Brain; Brain Mapping; Humans; Phonetics
ABSTRACT
The ability to map speech sounds to corresponding letters is critical for establishing proficient reading. People vary in this phonological processing ability, which has been hypothesized to result from variation in hemispheric asymmetries within brain regions that support language. A cerebral lateralization hypothesis predicts that more asymmetric brain structures facilitate the development of foundational reading skills like phonological processing. That is, structural asymmetries are predicted to linearly increase with ability. In contrast, a canalization hypothesis predicts that asymmetries constrain behavioral performance within a normal range. That is, structural asymmetries are predicted to quadratically relate to phonological processing, with average phonological processing occurring in people with the most asymmetric structures. These predictions were examined in relatively large samples of children (N = 424) and adults (N = 300), using a topological asymmetry analysis of T1-weighted brain images and a decoding measure of phonological processing. There was limited evidence of structural asymmetry and phonological decoding associations in classic language-related brain regions. However, and in modest support of the cerebral lateralization hypothesis, small to medium effect sizes were observed where phonological decoding accuracy increased with the magnitude of the largest structural asymmetry across left hemisphere cortical regions, but not right hemisphere cortical regions, for both the adult and pediatric samples. In support of the canalization hypothesis, small to medium effect sizes were observed where phonological decoding in the normal range was associated with increased asymmetries in specific cortical regions for both the adult and pediatric samples, which included performance monitoring and motor planning brain regions that contribute to oral and written language functions. 
Thus, the relevance of each hypothesis to phonological decoding may depend on the scale of brain organization.
Subjects
Language; Phonetics; Adult; Brain; Brain Mapping; Cerebral Cortex; Child; Functional Laterality; Humans; Magnetic Resonance Imaging; Reading
ABSTRACT
Progressive apraxia of speech (PAOS) is a neurodegenerative motor-speech disorder that most commonly arises from a four-repeat tauopathy. Recent studies have established that progressive apraxia of speech is not a homogeneous disease; rather, there are distinct subtypes: the phonetic subtype is characterized by distorted sound substitutions, the prosodic subtype by slow and segmented speech, and the mixed subtype by a combination of both with no predominance of either. There is some evidence that cross-sectional patterns of neurodegeneration differ across subtypes, although it is unknown whether longitudinal patterns of neurodegeneration differ. We examined longitudinal patterns of atrophy on MRI, hypometabolism on 18F-fluorodeoxyglucose-PET and tau uptake on flortaucipir-PET in a large cohort of subjects with PAOS who had been followed for many years. Ninety-one subjects with PAOS (51 phonetic, 40 prosodic) were recruited by the Neurodegenerative Research Group. Of these, 54 (27 phonetic, 27 prosodic) returned for annual follow-up, with up to seven longitudinal visits (total visits analysed = 217). Volumes, metabolism and flortaucipir uptake were measured for subcortical and cortical regions, for all scans. Bayesian hierarchical models were used to model longitudinal change across imaging modalities, with PAOS subtypes compared at baseline, at 4 years from baseline, and in terms of rates of change. The phonetic group showed smaller volumes and worse metabolism in Broca's area and the striatum at baseline and after 4 years, and faster rates of change in these regions, compared with the prosodic group. There was also evidence of faster spread of hypometabolism and flortaucipir uptake into the temporal and parietal lobes in the phonetic group. In contrast, the prosodic group showed smaller cerebellar dentate, midbrain, substantia nigra and thalamus volumes at baseline and after 4 years, as well as faster rates of atrophy, than the phonetic group.
Greater hypometabolism and flortaucipir uptake were also observed in the cerebellar dentate and substantia nigra in the prosodic group. Mixed findings were observed in the supplementary motor area and precentral cortex, with no clear differences observed across phonetic and prosodic groups. These findings support different patterns of disease spread in PAOS subtypes, with corticostriatal patterns in the phonetic subtype and brainstem and thalamic patterns in the prosodic subtype, providing insight into the pathophysiology and heterogeneity of PAOS.
Subjects
Apraxias; Carbolines; Positron-Emission Tomography; Humans; Male; Female; Aged; Apraxias/diagnostic imaging; Apraxias/metabolism; Positron-Emission Tomography/methods; Middle Aged; Longitudinal Studies; Magnetic Resonance Imaging; Brain/metabolism; Brain/diagnostic imaging; Brain/pathology; Atrophy/pathology; Fluorodeoxyglucose F18; Phonetics; Aged, 80 and over; tau Proteins/metabolism
ABSTRACT
The brain networks for the first (L1) and second (L2) languages are dynamically formed in the bilingual brain. This study delves into the neural mechanisms associated with logographic-logographic bilingualism, where both languages employ visually complex and conceptually rich logographic scripts. Using functional magnetic resonance imaging, we examined the brain activity of Chinese-Japanese bilinguals and Japanese-Chinese bilinguals as they engaged in rhyming tasks with Chinese characters and Japanese Kanji. Results showed that Japanese-Chinese bilinguals processed both languages using common brain areas, demonstrating an assimilation pattern, whereas Chinese-Japanese bilinguals recruited additional neural regions in the left lateral prefrontal cortex for processing Japanese Kanji, reflecting their accommodation to the higher phonological complexity of L2. In addition, Japanese speakers relied more on the phonological processing route, while Chinese speakers favored visual form analysis for both languages, indicating differing neural strategy preferences between the 2 bilingual groups. Moreover, multivariate pattern analysis demonstrated that, despite the considerable neural overlap, each bilingual group formed distinguishable neural representations for each language. These findings highlight the brain's capacity for neural adaptability and specificity when processing complex logographic languages, enriching our understanding of the neural underpinnings supporting bilingual language processing.
Subjects
Brain Mapping; Brain; Magnetic Resonance Imaging; Multilingualism; Humans; Male; Female; Young Adult; Brain/physiology; Brain/diagnostic imaging; Adult; Phonetics; Reading; Language; Japan
ABSTRACT
Speech comprehension entails the neural mapping of the acoustic speech signal onto learned linguistic units. This acousto-linguistic transformation is bi-directional, whereby higher-level linguistic processes (e.g. semantics) modulate the acoustic analysis of individual linguistic units. Here, we investigated the cortical topography and linguistic modulation of the most fundamental linguistic unit, the phoneme. We presented natural speech and "phoneme quilts" (pseudo-randomly shuffled phonemes) in either a familiar (English) or unfamiliar (Korean) language to native English speakers while recording functional magnetic resonance imaging. This allowed us to dissociate the contribution of acoustic vs. linguistic processes toward phoneme analysis. We show that (i) the acoustic analysis of phonemes is modulated by linguistic analysis and (ii) for this modulation, both acoustic and phonetic information need to be incorporated. These results suggest that the linguistic modulation of cortical sensitivity to phoneme classes minimizes prediction error during natural speech perception, thereby aiding speech comprehension in challenging listening situations.
Subjects
Brain Mapping; Magnetic Resonance Imaging; Phonetics; Speech Perception; Humans; Speech Perception/physiology; Female; Magnetic Resonance Imaging/methods; Male; Adult; Young Adult; Linguistics; Acoustic Stimulation/methods; Comprehension/physiology; Brain/physiology; Brain/diagnostic imaging
ABSTRACT
Evidence suggests that the articulatory motor system contributes to speech perception in a context-dependent manner. This study tested 2 hypotheses using magnetoencephalography: (i) the motor cortex is involved in phonological processing, and (ii) it aids in compensating for speech-in-noise challenges. A total of 32 young adults performed a phonological discrimination task under 3 noise conditions while their brain activity was recorded using magnetoencephalography. We observed simultaneous activation in the left ventral primary motor cortex and bilateral posterior-superior temporal gyrus when participants correctly identified pairs of syllables. This activation was significantly more pronounced for phonologically different than identical syllable pairs. Notably, phonological differences were resolved more quickly in the left ventral primary motor cortex than in the left posterior-superior temporal gyrus. Conversely, the noise level did not modulate the activity in frontal motor regions and the involvement of the left ventral primary motor cortex in phonological discrimination was comparable across all noise conditions. Our results show that the ventral primary motor cortex is crucial for phonological processing but not for compensation in challenging listening conditions. Simultaneous activation of left ventral primary motor cortex and bilateral posterior-superior temporal gyrus supports an interactive model of speech perception, where auditory and motor regions shape perception. The ventral primary motor cortex may be involved in a predictive coding mechanism that influences auditory-phonetic processing.
Subjects
Magnetoencephalography; Motor Cortex; Phonetics; Speech Perception; Humans; Male; Female; Motor Cortex/physiology; Young Adult; Speech Perception/physiology; Adult; Functional Laterality/physiology; Discrimination, Psychological/physiology; Acoustic Stimulation; Brain Mapping; Noise
ABSTRACT
Which sounds composed the first spoken languages? Archetypal sounds are not phylogenetically or archeologically recoverable, but comparative linguistics and primatology provide an alternative approach. Labial articulations are the most common speech sound, being virtually universal across the world's languages. Of all labials, the plosive 'p' sound, as in 'Pablo Picasso', transcribed /p/, is the most predominant voiceless sound globally and one of the first sounds to emerge in human infant canonical babbling. Global omnipresence and ontogenetic precocity imply that /p/-like sounds could predate the first major linguistic diversification event(s) in humans. Indeed, great ape vocal data support this view, namely, the only cultural sound shared across all great ape genera is articulatorily homologous to a rolling or trilled /p/, the 'raspberry'. /p/-like labial sounds represent an 'articulatory attractor' among living hominids and are likely among the oldest phonological features to have ever emerged in linguistic systems.
Subjects
Hominidae; Speech; Animals; Humans; Language; Phonetics
ABSTRACT
The automatic activation of letter-speech sound (L-SS) associations is a vital step in typical reading acquisition. However, the contribution of L-SS integration during nonalphabetic native and alphabetic second language (L2) reading remains unclear. This study explored whether L-SS integration plays a similar role in a nonalphabetic language as in alphabetic languages and its contribution to L2 reading among native Japanese-speaking adults with varying English proficiency. A priming paradigm in Japanese and English was performed by presenting visual letters or symbols, followed by auditory sounds. We compared behavioral and event-related responses elicited by congruent letter-sound pairs, incongruent pairs, and a baseline condition (symbol-sound pairs). The behavioral experiment revealed shorter reaction times in the congruent condition for the Japanese and English tasks, suggesting a facilitation effect of congruency. The ERP experiment showed an increased early N1 response to Japanese congruent pairs compared to corresponding incongruent stimuli at the left frontotemporal electrodes. Interestingly, advanced English learners exhibited greater activity in bilateral but predominantly right-lateralized frontotemporal regions for the congruent condition within the N1 time window. Moreover, an enhanced P2 response to congruent pairs was observed in intermediate English learners. These findings indicate that, despite deviations from native language processing, advanced speakers may successfully integrate letters and sounds during English reading, whereas intermediate learners may encounter difficulty in achieving L-SS integration when reading L2. Furthermore, our results suggest that L2 proficiency may affect the level of automaticity in L-SS integration, with the right P2 congruency effect playing a compensatory role for intermediate learners.
Subjects
Electroencephalography; Evoked Potentials; Multilingualism; Reading; Humans; Male; Female; Young Adult; Adult; Evoked Potentials/physiology; Reaction Time/physiology; Acoustic Stimulation; Learning/physiology; Speech Perception/physiology; Phonetics; Pattern Recognition, Visual/physiology; Japan; Photic Stimulation; East Asian People
ABSTRACT
BACKGROUND: Word-finding difficulty is prevalent but poorly understood in persons with relapsing-remitting multiple sclerosis (RRMS). OBJECTIVE: The objective was to investigate our hypothesis that phonological processing ability is below expectations and related to word-finding difficulty in patients with RRMS. METHOD: Data were analyzed from patients with RRMS (n = 50) on patient-reported word-finding difficulty (PR-WFD) and objective performance on Wechsler Individual Achievement Test, Fourth Edition (WIAT-4) Phonemic Proficiency (PP; analysis of phonemes within words), Word Reading (WR; proxy of premorbid literacy and verbal ability), and Sentence Repetition (SR; auditory processing of word-level information). RESULTS: Performance (mean (95% confidence interval)) was reliably lower than normative expectations for PP (-0.41 (-0.69, -0.13)) but not for WR (0.02 (-0.21, 0.25)) or SR (0.08 (-0.15, 0.31)). Within-subjects performance was worse on PP than on both WR (t(49) = 4.00, p < 0.001, d = 0.47) and SR (t(49) = 3.76, p < 0.001, d = 0.54). Worse PR-WFD was specifically related to lower PP (F(2,47) = 6.24, p = 0.004, η² = 0.21); PP performance was worse at PR-WFD Often (n = 13; -1.16 (-1.49, -0.83)) than at Sometimes (n = 17; -0.14 (-0.68, 0.41)) or Rarely (n = 20; -0.16 (-0.58, 0.27)). PR-WFD was unrelated to WR or SR (ps > 0.25). CONCLUSION: Phonological processing was below expectations and specifically linked to word-finding difficulty in RRMS. Findings are consistent with early disease-related cortical changes within the posterior superior temporal/supramarginal region. Results inform our developing model of multiple sclerosis-related word-finding difficulty.
Subjects
Multiple Sclerosis, Relapsing-Remitting; Humans; Female; Male; Adult; Middle Aged; Multiple Sclerosis, Relapsing-Remitting/physiopathology; Phonetics; Reading; Speech Perception/physiology
ABSTRACT
How does cognitive inhibition influence speaking? The Stroop effect is a classic demonstration of the interference between reading and color naming. We used a novel variant of the Stroop task to measure whether this interference impacts not only the response speed, but also the acoustic properties of speech. Speakers named the color of words in three categories: congruent (e.g., red written in red), color-incongruent (e.g., green written in red), and vowel-incongruent - those with partial phonological overlap with their color (e.g., rid written in red, grain in green, and blow in blue). Our primary aim was to identify any effect of the distractor vowel on the acoustics of the target vowel. Participants were no slower to respond on vowel-incongruent trials, but formant trajectories tended to show a bias away from the distractor vowel, consistent with a phenomenon of acoustic inhibition that increases contrast between confusable alternatives.
Subjects
Inhibition, Psychological; Reaction Time; Speech; Stroop Test; Humans; Male; Speech/physiology; Female; Reaction Time/physiology; Adult; Young Adult; Reading; Phonetics; Attention/physiology
ABSTRACT
Separable input and output phonological working memory (WM) capacities have been proposed, with the input capacity supporting speech recognition and the output capacity supporting production. We examined the role of input vs. output phonological WM in narrative production, examining speech rate and pronoun ratio - two measures with prior evidence of a relation to phonological WM. For speech rate, a case series approach with individuals with aphasia found no significant independent contribution of input or output phonological WM capacity after controlling for single-word production. For pronoun ratio, there was some suggestion of a role for input phonological WM. Thus, neither finding supported a specific role for an output phonological buffer in speech production. In contrast, two cases demonstrating dissociations between input and output phonological WM capacities provided suggestive evidence of predicted differences in narrative production, though follow-up research is needed. Implications for case series vs. case study approaches are discussed.
Subjects
Memory, Short-Term; Narration; Speech; Humans; Memory, Short-Term/physiology; Female; Male; Middle Aged; Aged; Speech/physiology; Aphasia/physiopathology; Aphasia/psychology; Phonetics; Adult; Neuropsychological Tests
ABSTRACT
The exploration of naming error patterns in aphasia provides insights into the cognitive processes underlying naming performance. We investigated how semantic and phonological abilities correlate and how they influence naming performance in aphasia. Data from 296 individuals with aphasia, drawn from the Moss Aphasia Psycholinguistics Project Database, were analyzed using a structural equation model. The model incorporated latent variables for semantics and phonology and manifest variables for naming accuracy and error patterns. There was a moderate positive correlation between semantics and phonology after controlling for overall aphasia severity. Both semantic and phonological abilities influenced naming accuracy. Semantic abilities were negatively related to semantic, mixed, and unrelated errors, as well as to no-responses. Interestingly, phonology positively affected semantic errors. Additionally, phonological abilities were negatively related to both phonological and neologism errors. These results highlight the role of semantic and phonological skills in naming performance in aphasia and reveal a relationship between these cognitive processes.