ABSTRACT
Spoken language comprehension is a fundamental component of our cognitive skills. We are quite proficient at deciphering words from the auditory input despite the fact that the speech we hear is often masked by noise such as background babble originating from talkers other than the one we are attending to. To perceive spoken language as intended, we rely on prior linguistic knowledge and context. Prior knowledge includes all sounds and words that are familiar to a listener and depends on linguistic experience. For bilinguals, the phonetic and lexical repertoire encompasses two languages, and the degree of overlap between word forms across languages affects the degree to which they influence one another during auditory word recognition. To support spoken word recognition, listeners often rely on semantic information (i.e., the words we hear are usually related in a meaningful way). Although the number of multilinguals across the globe is increasing, little is known about how crosslinguistic effects (i.e., word overlap) interact with semantic context and affect the flexible neural systems that support accurate word recognition. The current multi-echo functional magnetic resonance imaging (fMRI) study addresses this question by examining how prime-target word pair semantic relationships interact with the target word's form similarity (cognate status) to the translation equivalent in the dominant language (L1) during accurate word recognition of a non-dominant (L2) language. We tested 26 early-proficient Spanish-Basque (L1-L2) bilinguals. When L2 targets matching L1 translation-equivalent phonological word forms were preceded by unrelated semantic contexts that drive lexical competition, a flexible language control (fronto-parietal-subcortical) network was upregulated, whereas when they were preceded by related semantic contexts that reduce lexical competition, it was downregulated. 
We conclude that an interplay between semantic and crosslinguistic effects regulates flexible control mechanisms of speech processing to facilitate L2 word recognition in noise.
Subjects
Cerebral Cortex/physiology , Multilingualism , Nerve Net/physiology , Psycholinguistics , Recognition (Psychology)/physiology , Speech Perception/physiology , Adult , Brain Mapping , Cerebral Cortex/diagnostic imaging , Female , Humans , Magnetic Resonance Imaging , Male , Nerve Net/diagnostic imaging , Semantics , Young Adult
ABSTRACT
Human speech perception rapidly adapts to maintain comprehension under adverse listening conditions. For example, with exposure listeners can adapt to heavily accented speech produced by a non-native speaker. Outside the domain of speech perception, adaptive changes in sensory and motor processing have been attributed to cerebellar functions. The present functional magnetic resonance imaging study investigates whether adaptation in speech perception also involves the cerebellum. Acoustic stimuli were distorted using a vocoding plus spectral-shift manipulation and presented in a word recognition task. Regions in the cerebellum that showed differences before versus after adaptation were identified, and the relationship between activity during adaptation and subsequent behavioral improvements was examined. These analyses implicated the right Crus I region of the cerebellum in adaptive changes in speech perception. A functional correlation analysis with the right Crus I as a seed region probed for cerebral cortical regions with covarying hemodynamic responses during the adaptation period. The results provided evidence of a functional network between the cerebellum and language-related regions in the temporal and parietal lobes of the cerebral cortex. Consistent with known cerebellar contributions to sensorimotor adaptation, cerebro-cerebellar interactions may support supervised learning mechanisms that rely on sensory prediction error signals in speech perception.
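The seed-based functional correlation analysis described in this abstract can be sketched in a few lines: z-score the seed region's mean time course, z-score each voxel's time course, and average their products. The sketch below is illustrative only, using synthetic time courses rather than real data; the function name `seed_connectivity` and all variables are hypothetical and not taken from the study.

```python
import numpy as np

def seed_connectivity(seed_ts, voxel_ts):
    """Correlate a seed ROI's mean time course with every voxel's time course.

    seed_ts  : (T,) array, e.g. the mean signal of a right Crus I seed
    voxel_ts : (T, V) array of V voxel time courses
    returns  : (V,) array of Pearson correlations
    """
    z_seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
    z_vox = (voxel_ts - voxel_ts.mean(axis=0)) / voxel_ts.std(axis=0)
    return (z_seed[:, None] * z_vox).mean(axis=0)  # mean of z-score products = Pearson r

# Synthetic demo: one voxel tracks the seed, the other is pure noise.
rng = np.random.default_rng(0)
T = 200
seed = rng.standard_normal(T)
coupled = seed + 0.5 * rng.standard_normal(T)
noise = rng.standard_normal(T)
r = seed_connectivity(seed, np.column_stack([coupled, noise]))
print(r)  # coupled voxel correlates strongly; noise voxel does not
```

In practice the resulting correlation map would be thresholded and cluster-corrected to identify cerebral cortical regions covarying with the cerebellar seed.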
Subjects
Physiological Adaptation/physiology , Psychological Adaptation/physiology , Cerebellum/physiology , Speech Perception/physiology , Acoustic Stimulation/methods , Brain Mapping , Cerebrovascular Circulation/physiology , Evoked Potentials , Female , Humans , Language Tests , Magnetic Resonance Imaging , Male , Neuropsychological Tests , Oxygen/blood , Physiological Pattern Recognition/physiology , Sound Spectrography , Speech , Young Adult
ABSTRACT
Over the past few decades, research into the function of the cerebellum has expanded far beyond the motor domain. A growing number of studies are probing the role of specific cerebellar subregions, such as Crus I and Crus II, in higher-order cognitive functions including receptive language processing. In the current fMRI study, we show evidence for the cerebellum's sensitivity to variation in two well-studied psycholinguistic properties of words, lexical frequency and phonological neighborhood density, during passive, continuous listening to a podcast. To determine whether, and how, activity in the cerebellum correlates with these lexical properties, we modeled each word separately using an amplitude-modulated regressor, time-locked to the onset of each word. At the group level, significant effects of both lexical properties landed in the expected cerebellar subregions: Crus I and Crus II. The BOLD signal correlated with variation in each lexical property, consistent with both language-specific and domain-general mechanisms. Activation patterns at the individual level also showed that the effects of phonological neighborhood density and lexical frequency landed in Crus I and Crus II as the most probable sites, though activation was also seen in other lobules (especially for frequency). Although the exact cerebellar mechanisms used during speech and language processing are not yet evident, these findings highlight the cerebellum's role in word-level processing during continuous listening.
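An amplitude-modulated regressor of the kind described above is, in essence, a train of word-onset impulses scaled by the (mean-centred) lexical property and convolved with a haemodynamic response function. Below is a minimal sketch, assuming an SPM-style double-gamma HRF and made-up onsets and log-frequency values; the study's actual modeling software and parameters are not specified here.

```python
import numpy as np
from math import gamma

def hrf(t):
    """Double-gamma haemodynamic response (SPM-like: peak ~6 s, undershoot ~16 s)."""
    return t**5 * np.exp(-t) / gamma(6) - t**15 * np.exp(-t) / (6 * gamma(16))

def am_regressor(onsets, values, n_scans, tr=1.0, dt=0.1):
    """Amplitude-modulated regressor: impulses at word onsets, scaled by the
    mean-centred lexical property, convolved with the HRF, sampled once per TR."""
    values = np.asarray(values, float)
    values = values - values.mean()        # mean-centre so the modulation is
                                           # separable from the mean onset response
    t_hi = np.arange(0, n_scans * tr, dt)  # high-resolution time grid
    stick = np.zeros_like(t_hi)
    for onset, v in zip(onsets, values):
        stick[int(round(onset / dt))] += v  # scaled impulse at each word onset
    h = hrf(np.arange(0, 32, dt))           # 32 s HRF kernel
    conv = np.convolve(stick, h)[: len(t_hi)]
    return conv[:: int(round(tr / dt))]     # down-sample to one value per scan

# Hypothetical word onsets (s) and log lexical frequencies (illustrative values).
onsets = [2.0, 5.5, 9.0, 14.0]
log_freq = [3.1, 1.2, 2.0, 4.4]
reg = am_regressor(onsets, log_freq, n_scans=30)
print(reg.shape)  # (30,)
```

Each lexical property (frequency, neighborhood density) would get its own regressor of this form, entered into the GLM alongside an unmodulated onset regressor.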
ABSTRACT
Listeners' perception of acoustically presented speech is constrained by many different sources of information that arise from other sensory modalities and from more abstract, higher-level language context. An open question is how perceptual processes are influenced by and interact with these other sources of information. In this study, we use fMRI to examine the effect of a prior sentence fragment's meaning on the categorization of two possible target words that differ in an acoustic-phonetic feature of the initial consonant, voice onset time (VOT). Specifically, we manipulate the bias of the sentence context (biased, neutral) and the target type (ambiguous, unambiguous). Our results show that an interaction between these two factors emerged in a cluster in temporal cortex encompassing the left middle temporal gyrus and the superior temporal gyrus. The locus and pattern of these interactions support an interactive view of speech processing and suggest that the quality of the input and the potential bias of the context interact to modulate neural activation patterns.
Subjects
Brain Mapping , Phonetics , Semantics , Temporal Lobe/physiology , Acoustic Stimulation , Adult , Analysis of Variance , Attention/physiology , Female , Functional Laterality , Humans , Computer-Assisted Image Processing , Magnetic Resonance Imaging , Male , Oxygen , Reaction Time/physiology , Temporal Lobe/blood supply , Young Adult
ABSTRACT
PURPOSE: This study investigates whether crosslinguistic effects on auditory word recognition are modulated by the quality of the auditory signal (clear and noisy). METHOD: In an online experiment, a group of Spanish-English bilingual listeners performed an auditory lexical decision task, in their second language, English. Words and pseudowords were either presented in the clear or were embedded in white auditory noise. Target words were varied in the degree to which they overlapped in their phonological form with their translation equivalents and were categorized according to their overlap as cognates (form and meaning) or noncognates (meaning only). In order to test for effects of crosslinguistic competition, the phonological neighborhood density of the targets' translations was also manipulated. RESULTS: The results show that crosslinguistic effects are impacted by noise; when the translation had a high neighborhood density, performance was worse for cognates than for noncognates, especially in noise. CONCLUSIONS: The findings suggest that noise increases lexical competition across languages, as it does within a language, and that the crosslinguistic phonological overlap for cognates compared with noncognates can further increase the pool of competitors by co-activating crosslinguistic lexical candidates. The results are discussed within the context of the bilingual word recognition literature and models of language and bilingual lexical processing.
Subjects
Multilingualism , Humans , Language , Linguistics , Noise , Translations
ABSTRACT
Purpose: Morse code as a form of communication became widely used for telegraphy, radio and maritime communication, and military operations, and remains popular with ham radio operators. Some skilled users of Morse code are able to comprehend a full sentence as they listen to it, while others must first transcribe the sentence into its written letter sequence. Morse thus provides an interesting opportunity to examine comprehension differences in the context of skilled acoustic perception. Measures of comprehension and short-term memory show a strong correlation across multiple forms of communication. This study tests whether this relationship holds for Morse and investigates its underlying basis. Our analyses examine Morse and speech immediate serial recall, focusing on established markers of echoic storage, phonological-articulatory coding, and lexical-semantic support. We show a relationship between Morse short-term memory and Morse comprehension that is not explained by Morse perceptual fluency. In addition, we find that poorer serial recall for Morse compared to speech is primarily due to poorer item memory for Morse, indicating differences in lexical-semantic support. Interestingly, individual differences in speech item memory are also predictive of individual differences in Morse comprehension. Conclusions: We point to a psycholinguistic framework to account for these results, concluding that Morse functions like "reading for the ears" (Maier et al., 2004) and that underlying differences in the integration of phonological and lexical-semantic knowledge impact both short-term memory and comprehension. The results provide insight into individual differences in the comprehension of degraded speech and strategies that build comprehension through listening experience. Supplemental Material: https://doi.org/10.23641/asha.16451868
Subjects
Short-Term Memory , Speech Perception , Comprehension , Humans , Mental Recall , Speech
ABSTRACT
Speaking involves coordination of multiple neuromotor systems, including respiration, phonation, and articulation. Developing non-invasive imaging methods to study how the brain controls these systems is critical for understanding the neurobiology of speech production. Recent models and animal research suggest that cortical and subcortical regions beyond the primary motor cortex (M1) help orchestrate the neuromotor control needed for speaking. Using contrasts between speech conditions with controlled respiratory behavior, this fMRI study investigates articulatory gestures involving the tongue, lips, and velum (i.e., alveolars versus bilabials, and nasals versus orals) and phonatory gestures (i.e., voiced versus whispered speech). Multivariate pattern analysis (MVPA) was used to decode articulatory gestures in M1, the cerebellum, and the basal ganglia. Furthermore, apart from confirming the role of a mid-M1 region in phonation, we found that a dorsal M1 region linked to respiratory control showed significant differences for voiced compared to whispered speech despite matched lung volume observations. This region was also functionally connected to tongue and lip M1 seed regions, underscoring its importance in the coordination of speech. Our study confirms and extends current knowledge of the neural mechanisms underlying neuromotor speech control and holds promise for the non-invasive study of neural dysfunction in motor-speech disorders.
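Decoding articulatory gestures with MVPA amounts to training a classifier on voxel activity patterns from an ROI and testing whether it can tell the gesture conditions apart on held-out data. The toy sketch below uses a nearest-centroid, correlation-based classifier on synthetic patterns; it is a minimal stand-in for the study's actual MVPA pipeline, whose classifier and cross-validation scheme are not specified here, and all names and data are hypothetical.

```python
import numpy as np

def classify(train_X, train_y, test_X):
    """Nearest-centroid decoder with correlation as the similarity measure.

    train_X : (n_trials, n_voxels) training patterns
    train_y : list of condition labels, one per training trial
    test_X  : (n_test, n_voxels) patterns to classify
    """
    labels = sorted(set(train_y))
    y = np.array(train_y)
    centroids = {lab: train_X[y == lab].mean(axis=0) for lab in labels}
    preds = []
    for x in test_X:
        # Assign each test pattern to the condition whose centroid it best matches.
        sims = {lab: np.corrcoef(x, c)[0, 1] for lab, c in centroids.items()}
        preds.append(max(sims, key=sims.get))
    return preds

# Synthetic voxel patterns for two articulatory gestures (alveolar vs. bilabial).
rng = np.random.default_rng(1)
alv = rng.standard_normal(50)   # hypothetical "true" pattern for alveolars
bil = rng.standard_normal(50)   # hypothetical "true" pattern for bilabials
X = np.vstack([alv + 0.7 * rng.standard_normal((20, 50)),
               bil + 0.7 * rng.standard_normal((20, 50))])
y = ["alveolar"] * 20 + ["bilabial"] * 20
preds = classify(X, y, np.vstack([alv, bil]))
print(preds)
```

Above-chance cross-validated accuracy in a region is then taken as evidence that the region's activity patterns carry information about the gesture distinction.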
Subjects
Brain Mapping , Cerebral Cortex/diagnostic imaging , Cerebral Cortex/physiology , Magnetic Resonance Imaging , Phonation , Speech , Adult , Connectome , Female , Humans , Computer-Assisted Image Processing , Male , Biological Models , Motor Cortex/physiology , Reproducibility of Results , Young Adult
ABSTRACT
In spoken word recognition, subphonemic variation influences lexical activation, with sounds near a category boundary increasing phonetic competition as well as lexical competition. The current study investigated the interplay of these factors using a visual world task in which participants were instructed to look at a picture of an auditory target (e.g., peacock). Eyetracking data indicated that participants were slowed when a voiced onset competitor (e.g., beaker) was also displayed, and this effect was amplified when acoustic-phonetic competition was increased. Simultaneously collected fMRI data showed that several brain regions were sensitive to the presence of the onset competitor, including the supramarginal, middle temporal, and inferior frontal gyri, and functional connectivity analyses revealed that the coordinated activity of left frontal regions depends on both acoustic-phonetic and lexical factors. Taken together, the results suggest a role for frontal brain structures in resolving lexical competition, particularly as atypical acoustic-phonetic information maps onto the lexicon.
ABSTRACT
This study examines cross-modality effects of a semantically biased written sentence context on the perception of an acoustically ambiguous word target, identifying neural areas sensitive to interactions between sentential bias and phonetic ambiguity. Of interest is whether the locus or nature of the interactions resembles those previously demonstrated for auditory-only effects. fMRI results show significant interaction effects in the right mid-middle temporal gyrus (RmMTG) and bilateral anterior superior temporal gyri (aSTG), regions along the ventral language comprehension stream that map sound onto meaning. These regions are more anterior than those previously identified for auditory-only effects; however, the same cross-over interaction pattern emerged, implying that similar underlying computations are at play. The findings suggest that the mechanisms that integrate information across modalities and across sentence and phonetic levels of processing recruit amodal areas where reading and spoken lexical and semantic access converge. Taken together, the results support interactive accounts of speech and language processing.
Subjects
Phonetics , Reading , Semantics , Speech Acoustics , Temporal Lobe/physiology , Adult , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Speech Perception
ABSTRACT
Research has implicated the left inferior frontal gyrus (LIFG) in mapping acoustic-phonetic input to sound category representations, both in native speech perception and non-native phonetic category learning. At issue is whether this sensitivity reflects access to phonetic category information per se or to explicit category labels, the latter often being required by experimental procedures. The current study employed an incidental learning paradigm designed to increase sensitivity to a difficult non-native phonetic contrast without inducing explicit awareness of the categorical nature of the stimuli. Functional MRI scans revealed frontal sensitivity to phonetic category structure both before and after learning. Additionally, individuals who succeeded most on the learning task showed the largest increases in frontal recruitment after learning. Overall, results suggest that processing novel phonetic category information entails a reliance on frontal brain regions, even in the absence of explicit category labels.
Subjects
Brain/physiology , Language , Phonetics , Speech Perception/physiology , Verbal Learning/physiology , Acoustics , Adult , Brain/diagnostic imaging , Female , Frontal Lobe/diagnostic imaging , Frontal Lobe/physiology , Humans , Magnetic Resonance Imaging/methods , Male , Prefrontal Cortex/diagnostic imaging , Prefrontal Cortex/physiology , Sound
ABSTRACT
When listeners encounter speech under adverse listening conditions, adaptive adjustments in perception can improve comprehension over time. In some cases, these adaptive changes require the presence of external information that disambiguates the distorted speech signals, whereas in other cases mere exposure is sufficient. Both external (e.g., written feedback) and internal (e.g., prior word knowledge) sources of information can be used to generate predictions about the correct mapping of a distorted speech signal. We hypothesize that these predictions provide a basis for determining the discrepancy between the expected and actual speech signal that can be used to guide adaptive changes in perception. This study provides the first empirical investigation that manipulates external and internal factors through (a) the availability of explicit external disambiguating information via the presence or absence of postresponse orthographic information paired with a repetition of the degraded stimulus, and (b) the accuracy of internally generated predictions; an acoustic distortion is introduced either abruptly or incrementally. The results demonstrate that the impact of external information on adaptive plasticity is contingent upon whether the intelligibility of the stimuli permits accurate internally generated predictions during exposure. External information sources enhance adaptive plasticity only when input signals are severely degraded and cannot reliably access internal predictions. This is consistent with a computational framework for adaptive plasticity in which error-driven supervised learning relies on the ability to compute sensory prediction error signals from both internal and external sources of information.
Subjects
Physiological Adaptation/physiology , Speech Perception/physiology , Adult , Female , Humans , Male , Speech Intelligibility , Young Adult
ABSTRACT
Prior research has shown that the perception of degraded speech is influenced by within-sentence meaning and recruits one or more components of a frontal-temporal-parietal network. The goal of the current study is to examine whether the overall conceptual meaning of a sentence, made up of one set of words, influences the perception of a second, acoustically degraded sentence made up of a different set of words. Using functional magnetic resonance imaging (fMRI), we presented an acoustically clear sentence followed by an acoustically degraded sentence and manipulated the semantic relationship between them: Related in meaning (but consisting of different content words), Unrelated in meaning, or Same. Results showed that listeners' word recognition accuracy for the acoustically degraded sentences was significantly higher when the target sentence was preceded by a conceptually related compared to a conceptually unrelated sentence. Sensitivity to conceptual relationships was associated with enhanced activity in middle and inferior frontal, temporal, and parietal areas. In addition, the left middle frontal gyrus (LMFG), left inferior frontal gyrus (LIFG), and left middle temporal gyrus (LMTG) showed activity that correlated with individual performance in the Related condition. The superior temporal gyrus (STG) showed increased activation in the Same condition, suggesting that it is sensitive to perceptual similarity rather than the integration of meaning between the sentence pairs. A fronto-temporo-parietal network appears to consolidate information sources across multiple levels of language (acoustic, lexical, syntactic, semantic) to build, and ultimately integrate, conceptual information across sentences and facilitate the perception of a degraded speech signal. However, the sources of information available differentially recruit specific regions and modulate their activity within this network.
Implications of these findings for the functional architecture of the network are considered.
Subjects
Brain/physiology , Comprehension/physiology , Language , Magnetic Resonance Imaging , Speech Perception/physiology , Speech/physiology , Adolescent , Adult , Brain/diagnostic imaging , Brain Mapping , Female , Humans , Computer-Assisted Image Processing , Male , Young Adult
ABSTRACT
Adult speech perception reflects the long-term regularities of the native language, but it is also flexible such that it accommodates and adapts to adverse listening conditions and short-term deviations from native-language norms. The purpose of this article is to examine how the broader neuroscience literature can inform and advance research efforts in understanding the neural basis of flexibility and adaptive plasticity in speech perception. Specifically, we highlight the potential role of learning algorithms that rely on prediction error signals and discuss specific neural structures that are likely to contribute to such learning. To this end, we review behavioral studies, computational accounts, and neuroimaging findings related to adaptive plasticity in speech perception. Already, a few studies have alluded to a potential role of these mechanisms in adaptive plasticity in speech perception. Furthermore, we consider research topics in neuroscience that offer insight into how perception can be adaptively tuned to short-term deviations while balancing the need to maintain stability in the perception of learned long-term regularities. Consideration of the application and limitations of these algorithms in characterizing flexible speech perception under adverse conditions promises to inform theoretical models of speech.
ABSTRACT
The current study explored how factors of acoustic-phonetic and lexical competition affect access to the lexical-semantic network during spoken word recognition. An auditory semantic priming lexical decision task was presented to subjects while they were in the MR scanner. Prime-target pairs consisted of prime words with the initial voiceless stop consonants /p/, /t/, and /k/ followed by word and nonword targets. To examine the neural consequences of lexical and sound structure competition, primes either had voiced minimal pair competitors or they did not, and they were either acoustically modified to be poorer exemplars of the voiceless phonetic category or not. Neural activation associated with semantic priming (Unrelated-Related conditions) revealed a bilateral fronto-temporo-parietal network. Within this network, clusters in the left insula/inferior frontal gyrus (IFG), left superior temporal gyrus (STG), and left posterior middle temporal gyrus (pMTG) showed sensitivity to lexical competition. The pMTG also demonstrated sensitivity to acoustic modification, and the insula/IFG showed an interaction between lexical competition and acoustic modification. These findings suggest that the posterior lexical-semantic network is modulated by both acoustic-phonetic and lexical structure, and that the resolution of these two sources of competition recruits frontal structures.
Subjects
Brain Mapping , Cerebral Cortex/blood supply , Cerebral Cortex/physiology , Phonetics , Semantics , Acoustic Stimulation , Analysis of Variance , Female , Humans , Computer-Assisted Image Processing , Magnetic Resonance Imaging , Male , Oxygen/blood , Reaction Time , Young Adult
ABSTRACT
Neuropsychological findings together with recent advances in neuroanatomical and neuroimaging techniques have spurred the investigation of cerebellar contributions to cognition. One cognitive process that has been the focus of much research is working memory, in particular its verbal component. Influenced by Baddeley's cognitive theory of working memory, cerebellar activation during verbal working memory tasks has been predominantly attributed to the cerebellum's involvement in an articulatory rehearsal network. Recent neuroimaging and neuropsychological findings are inconsistent with a simple motor view of the cerebellum's function in verbal working memory. The present article examines these findings and their implications for an articulatory rehearsal proposal of cerebellar function. Moving beyond cognitive theory, we propose two alternative explanations for cerebellar involvement in verbal working memory: Error-driven adjustment and internal timing. These general theories of cerebellar function have been successfully adapted from the motor literature to explain cognitive functions of the cerebellum. We argue that these theories may also provide a useful framework to understand the non-motor contributions of the cerebellum to verbal working memory.