Results 1 - 20 of 97
1.
PLoS Biol ; 19(2): e3001142, 2021 02.
Article in English | MEDLINE | ID: mdl-33635855

ABSTRACT

Rhythmic sensory or electrical stimulation will produce rhythmic brain responses. These rhythmic responses are often interpreted as endogenous neural oscillations aligned (or "entrained") to the stimulus rhythm. However, stimulus-aligned brain responses can also be explained as a sequence of evoked responses, which only appear regular due to the rhythmicity of the stimulus, without necessarily involving underlying neural oscillations. To distinguish evoked responses from true oscillatory activity, we tested whether rhythmic stimulation produces oscillatory responses which continue after the end of the stimulus. Such sustained effects provide evidence for true involvement of neural oscillations. In Experiment 1, we found that rhythmic intelligible, but not unintelligible, speech produces oscillatory responses in magnetoencephalography (MEG) which outlast the stimulus at parietal sensors. In Experiment 2, we found that transcranial alternating current stimulation (tACS) leads to rhythmic fluctuations in speech perception outcomes after the end of electrical stimulation. We further report that the phase relation between electroencephalography (EEG) responses and rhythmic intelligible speech can predict the tACS phase that leads to most accurate speech perception. Together, we provide fundamental results for several lines of research, including neural entrainment and tACS, and reveal endogenous neural oscillations as a key underlying principle for speech perception.
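One way to operationalise the "sustained effects" logic described above is to ask whether phase-consistent activity at the stimulation rate remains after stimulus offset. The Python sketch below computes inter-trial phase coherence at an assumed stimulation rate in a simulated post-offset window and compares it against a circular-shift null distribution; the sampling rate, stimulation rate, window length, and data are illustrative assumptions, not the study's analysis.

# Minimal sketch (not the authors' pipeline): test whether activity at the
# stimulus rate persists after rhythmic stimulation ends. Variable names
# (post_offset, stim_rate, fs) and values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
fs = 250.0          # sampling rate (Hz), assumed
stim_rate = 3.0     # stimulation rate (Hz), assumed
n_trials, n_samples = 60, 500             # 2 s post-offset window, assumed
t = np.arange(n_samples) / fs

# Simulated post-offset data: weak 3 Hz component plus per-trial noise
post_offset = 0.2 * np.sin(2 * np.pi * stim_rate * t) + rng.standard_normal((n_trials, n_samples))

def itc_at(freq, data, fs):
    """Inter-trial phase coherence at one frequency via a Fourier projection."""
    t = np.arange(data.shape[1]) / fs
    proj = data @ np.exp(-2j * np.pi * freq * t)   # complex amplitude per trial
    return np.abs(np.mean(proj / np.abs(proj)))    # length of the mean unit phasor

observed = itc_at(stim_rate, post_offset, fs)

# Null distribution: circularly shift each trial by a random lag, destroying
# consistent phase alignment while preserving each trial's spectrum.
null = []
for _ in range(1000):
    shifted = np.array([np.roll(trial, rng.integers(n_samples)) for trial in post_offset])
    null.append(itc_at(stim_rate, shifted, fs))

p = np.mean(np.array(null) >= observed)
print(f"ITC at {stim_rate:.1f} Hz after offset: {observed:.3f}, permutation p = {p:.3f}")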


Subject(s)
Brain/physiology, Speech Perception/physiology, Adult, Biological Clocks, Electroencephalography, Female, Humans, Magnetoencephalography, Male, Middle Aged, Transcranial Direct Current Stimulation
2.
J Neurosci ; 42(31): 6108-6120, 2022 08 03.
Article in English | MEDLINE | ID: mdl-35760528

ABSTRACT

Speech perception in noisy environments is enhanced by seeing facial movements of communication partners. However, the neural mechanisms by which audio and visual speech are combined are not fully understood. We explored phase-locking to auditory and visual signals in MEG recordings from 14 human participants (6 females, 8 males) who reported words from single spoken sentences. We manipulated the acoustic clarity and visual speech signals such that critical speech information was present in auditory, visual, or both modalities. MEG coherence analysis revealed that both auditory and visual speech envelopes (auditory amplitude modulations and lip aperture changes) were phase-locked to 2-6 Hz brain responses in auditory and visual cortex, consistent with entrainment to syllable-rate components. Partial coherence analysis was used to separate neural responses to correlated audio-visual signals and showed non-zero phase-locking to the auditory envelope in occipital cortex during audio-visual (AV) speech. Furthermore, phase-locking to auditory signals in visual cortex was enhanced for AV speech compared with audio-only speech that was matched for intelligibility. Conversely, auditory regions of the superior temporal gyrus did not show above-chance partial coherence with visual speech signals during AV conditions but did show partial coherence in visual-only conditions. Hence, visual speech enabled stronger phase-locking to auditory signals in visual areas, whereas phase-locking of visual speech in auditory regions only occurred during silent lip-reading. Differences in these cross-modal interactions between auditory and visual speech signals are interpreted in line with cross-modal predictive mechanisms during speech perception.

SIGNIFICANCE STATEMENT: Verbal communication in noisy environments is challenging, especially for hearing-impaired individuals. Seeing facial movements of communication partners improves speech perception when auditory signals are degraded or absent. The neural mechanisms supporting lip-reading or audio-visual benefit are not fully understood. Using MEG recordings and partial coherence analysis, we show that speech information is used differently in brain regions that respond to auditory and visual speech. While visual areas use visual speech to improve phase-locking to auditory speech signals, auditory areas do not show phase-locking to visual speech unless auditory speech is absent and visual speech is used to substitute for missing auditory signals. These findings highlight brain processes that combine visual and auditory signals to support speech understanding.
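The coherence and partial coherence computations described above can be sketched briefly. The following Python example uses simulated signals; the sampling rate, segment length, signal mixing weights, and the 2-6 Hz band averaging are illustrative assumptions, not the study's pipeline. It estimates coherence between a stand-in brain signal and an auditory envelope, then partials a correlated lip-aperture signal out of both spectra.

# Minimal sketch (assumed signals and parameters, not the study's code):
# coherence between a brain signal and the auditory envelope, and partial
# coherence after removing the correlated visual (lip) signal.
import numpy as np
from scipy.signal import csd, welch

rng = np.random.default_rng(1)
fs, n = 200.0, 200 * 120                               # 120 s of data, assumed
audio = rng.standard_normal(n)                         # auditory envelope (stand-in)
lips = 0.6 * audio + 0.8 * rng.standard_normal(n)      # correlated visual signal
brain = 0.5 * audio + 0.3 * lips + rng.standard_normal(n)

kw = dict(fs=fs, nperseg=1024)
f, Sxy = csd(brain, audio, **kw)
_, Sxz = csd(brain, lips, **kw)
_, Szy = csd(lips, audio, **kw)
_, Sxx = welch(brain, **kw)
_, Syy = welch(audio, **kw)
_, Szz = welch(lips, **kw)

coherence = np.abs(Sxy) ** 2 / (Sxx * Syy)

# Partial out the lip signal from the cross- and auto-spectra
Sxy_z = Sxy - Sxz * Szy / Szz
Sxx_z = Sxx - np.abs(Sxz) ** 2 / Szz
Syy_z = Syy - np.abs(Szy) ** 2 / Szz
partial_coherence = np.abs(Sxy_z) ** 2 / (Sxx_z * Syy_z)

band = (f >= 2) & (f <= 6)   # syllable-rate band reported in the abstract
print("2-6 Hz coherence:", float(coherence[band].mean()))
print("2-6 Hz partial coherence:", float(partial_coherence[band].mean()))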


Subject(s)
Auditory Cortex, Speech Perception, Visual Cortex, Acoustic Stimulation, Auditory Cortex/physiology, Auditory Perception, Female, Humans, Lipreading, Male, Speech/physiology, Speech Perception/physiology, Visual Cortex/physiology, Visual Perception/physiology
3.
Article in English | MEDLINE | ID: mdl-37929612

ABSTRACT

BACKGROUND: The use of telepractice in aphasia research and therapy is increasing in frequency. Teleassessment in aphasia has been demonstrated to be reliable. However, neuropsychological and clinical language comprehension assessments are not always readily translatable to an online environment, and people with severe language comprehension or cognitive impairments have sometimes been considered unsuitable for teleassessment.
AIM: This project aimed to produce a battery of language comprehension teleassessments at the single word, sentence and discourse level suitable for individuals with moderate-severe language comprehension impairments.
METHODS: Assessment development prioritised response consistency and clinical flexibility during testing. Teleassessments were delivered in PowerPoint over Zoom using screen sharing and remote control functions. The assessments were evaluated in 14 people with aphasia and 9 neurotypical control participants. Modifiable assessment templates are available here: https://osf.io/r6wfm/.
MAIN CONTRIBUTIONS: People with aphasia were able to engage in language comprehension teleassessment with limited carer support. Only one assessment could not be completed for technical reasons. Statistical analysis revealed above-chance performance in 141/151 completed assessments.
CONCLUSIONS: People with aphasia, including people with moderate-severe comprehension impairments, are able to engage with teleassessment. Successful teleassessment can be supported by retaining clinical flexibility and maintaining consistent task demands.
WHAT THIS PAPER ADDS: What is already known on the subject: Teleassessment for aphasia is reliable, but assessment of auditory comprehension is difficult to adapt to the online environment. There has been limited evaluation of the ability of people with severe aphasia to engage in auditory comprehension teleassessment. What this paper adds to existing knowledge: Auditory comprehension assessment can be adapted for videoconferencing administration while maintaining clinical flexibility to support people with severe aphasia. What are the potential or actual clinical implications of this work? Teleassessment is time- and cost-effective and can be designed to support inclusion of severely impaired individuals.
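For a concrete sense of how above-chance performance on a single assessment might be checked, the short Python sketch below runs a one-sided binomial test. The item count, score, and chance level are made-up values for illustration, not figures from the study.

# Illustrative sketch only (not the study's exact analysis): test whether
# one comprehension assessment score exceeds chance with a binomial test.
from scipy.stats import binomtest

n_items = 30        # assumed number of items in one assessment
n_correct = 22      # assumed score
chance = 0.25       # assumed chance level, e.g. 4-alternative picture matching

result = binomtest(n_correct, n_items, chance, alternative="greater")
print(f"{n_correct}/{n_items} correct vs chance {chance}: p = {result.pvalue:.4f}")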

4.
J Neurosci ; 41(32): 6919-6932, 2021 08 11.
Article in English | MEDLINE | ID: mdl-34210777

ABSTRACT

Human listeners achieve quick and effortless speech comprehension through computations of conditional probability using Bayes' rule. However, the neural implementation of Bayesian perceptual inference remains unclear. Competitive-selection accounts (e.g., TRACE) propose that word recognition is achieved through direct inhibitory connections between units representing candidate words that share segments (e.g., hygiene and hijack share /haidʒ/). Manipulations that increase lexical uncertainty should increase neural responses associated with word recognition when words cannot be uniquely identified. In contrast, predictive-selection accounts (e.g., Predictive Coding) propose that spoken word recognition involves comparing heard and predicted speech sounds and using prediction error to update lexical representations. Increased lexical uncertainty in words such as hygiene and hijack will increase prediction error, and hence neural activity, only at later time points when different segments are predicted. We collected MEG data from male and female listeners to test these two Bayesian mechanisms and used a competitor priming manipulation to change the prior probability of specific words. Lexical decision responses showed delayed recognition of target words (hygiene) following presentation of a neighboring prime word (hijack) several minutes earlier. However, this effect was not observed with pseudoword primes (higent) or targets (hijure). Crucially, MEG responses in the superior temporal gyrus (STG) showed greater neural responses for word-primed words after the point at which they were uniquely identified (after /haidʒ/ in hygiene) but not before, while similar changes were again absent for pseudowords. These findings are consistent with accounts of spoken word recognition in which neural computations of prediction error play a central role.

SIGNIFICANCE STATEMENT: Effective speech perception is critical to daily life and involves computations that combine speech signals with prior knowledge of spoken words (i.e., Bayesian perceptual inference). This study specifies the neural mechanisms that support spoken word recognition by testing two distinct implementations of Bayesian perceptual inference. Most established theories propose direct competition between lexical units such that inhibition of irrelevant candidates leads to selection of critical words. Our results instead support predictive-selection theories (e.g., Predictive Coding): by comparing heard and predicted speech sounds, neural computations of prediction error can help listeners continuously update lexical probabilities, allowing for more rapid word identification.
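The contrast drawn above, between lexical uncertainty and segment-by-segment prediction error, can be illustrated with a toy incremental recognition model. The Python sketch below is not the paper's model: the mini-lexicon, segment labels, and prior probabilities are invented, and it simply shows that prediction error peaks after the point at which competing candidates such as hygiene and hijack diverge.

# Toy illustration (not the study's model): incremental Bayesian update over
# candidate words and the segment-by-segment prediction error it implies.
# Words, segment labels, and priors are made up for the example.
lexicon = {"hygiene": ["h", "ai", "dZ", "i:", "n"],
           "hijack":  ["h", "ai", "dZ", "ae", "k"],
           "hire":    ["h", "ai", "@"]}
prior = {"hygiene": 0.4, "hijack": 0.4, "hire": 0.2}   # competitor priming would raise "hijack"

def step(posterior, position, heard):
    """One update: keep only candidates consistent with the heard segment."""
    predicted = sum(p for w, p in posterior.items()
                    if position < len(lexicon[w]) and lexicon[w][position] == heard)
    prediction_error = 1.0 - predicted                 # probability mass that failed to predict the input
    new = {w: (p if position < len(lexicon[w]) and lexicon[w][position] == heard else 0.0)
           for w, p in posterior.items()}
    total = sum(new.values()) or 1.0
    return {w: p / total for w, p in new.items()}, prediction_error

posterior = dict(prior)
for i, seg in enumerate(lexicon["hygiene"]):
    posterior, pe = step(posterior, i, seg)
    print(f"segment {seg!r}: prediction error = {pe:.2f}, posterior = "
          + ", ".join(f"{w}:{p:.2f}" for w, p in posterior.items()))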


Subject(s)
Recognition, Psychology/physiology, Speech Perception/physiology, Temporal Lobe/physiology, Adult, Bayes Theorem, Comprehension/physiology, Female, Humans, Magnetoencephalography, Male, Middle Aged, Young Adult
5.
Proc Natl Acad Sci U S A ; 116(36): 17723-17728, 2019 09 03.
Article in English | MEDLINE | ID: mdl-31427523

ABSTRACT

Reading involves transforming arbitrary visual symbols into sounds and meanings. This study interrogated the neural representations in ventral occipitotemporal cortex (vOT) that support this transformation process. Twenty-four adults learned to read 2 sets of 24 novel words that shared phonemes and semantic categories but were written in different artificial orthographies. Following 2 wk of training, participants read the trained words while neural activity was measured with functional MRI. Representational similarity analysis on item pairs from the same orthography revealed that right vOT and posterior regions of left vOT were sensitive to basic visual similarity. Left vOT encoded letter identity and representations became more invariant to position along a posterior-to-anterior hierarchy. Item pairs that shared sounds or meanings, but were written in different orthographies with no letters in common, evoked similar neural patterns in anterior left vOT. These results reveal a hierarchical, posterior-to-anterior gradient in vOT, in which representations of letters become increasingly invariant to position and are transformed to convey spoken language information.
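The representational similarity analysis logic used here can be sketched compactly: build a neural representational dissimilarity matrix (RDM) from item-wise activity patterns and correlate it with a model RDM that predicts similarity for items sharing meaning. The Python example below uses simulated voxel patterns; the item counts, noise level, and grouping are illustrative assumptions, not the study's data or code.

# Minimal RSA sketch (simulated data, not the study's pipeline): correlate a
# neural dissimilarity matrix from voxel patterns with a model predicting
# similarity for items that share meaning across orthographies.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_items, n_voxels = 12, 50
meaning = np.repeat(np.arange(6), 2)              # items 2k and 2k+1 share a meaning

# Simulated patterns: a shared component for items with the same meaning
base = rng.standard_normal((6, n_voxels))
patterns = base[meaning] + 0.8 * rng.standard_normal((n_items, n_voxels))

neural_rdm = pdist(patterns, metric="correlation")                       # 1 - Pearson r per item pair
model_rdm = pdist(meaning[:, None].astype(float), metric="cityblock") > 0  # 0 = same meaning, 1 = different

rho, p = spearmanr(neural_rdm, model_rdm.astype(float))
print(f"Spearman correlation between neural and model RDMs: rho = {rho:.2f}, p = {p:.3f}")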


Subject(s)
Language, Magnetic Resonance Imaging, Occipital Lobe, Reading, Verbal Learning/physiology, Adolescent, Adult, Female, Humans, Male, Occipital Lobe/diagnostic imaging, Occipital Lobe/physiology
6.
Psychol Sci ; 32(4): 471-484, 2021 04.
Article in English | MEDLINE | ID: mdl-33634711

ABSTRACT

There is profound and long-standing debate over the role of explicit instruction in reading acquisition. In this research, we investigated the impact of teaching regularities in the writing system explicitly rather than relying on learners to discover these regularities through text experience alone. Over 10 days, 48 adults learned to read novel words printed in two artificial writing systems. One group learned spelling-to-sound and spelling-to-meaning regularities solely through experience with the novel words, whereas the other group received a brief session of explicit instruction on these regularities before training commenced. Results showed that virtually all participants who received instruction performed at ceiling on tests that probed generalization of underlying regularities. In contrast, despite up to 18 hr of training on the novel words, less than 25% of discovery learners performed on par with those who received instruction. These findings illustrate the dramatic impact of teaching method on outcomes during reading acquisition.


Subject(s)
Learning, Reading, Adult, Generalization, Psychological, Humans, Language, Writing
7.
J Cogn Neurosci ; 32(2): 226-240, 2020 02.
Article in English | MEDLINE | ID: mdl-31659922

ABSTRACT

Several recent studies have used transcranial alternating current stimulation (tACS) to demonstrate a causal role of neural oscillatory activity in speech processing. In particular, it has been shown that the ability to understand speech in a multi-speaker scenario or background noise depends on the timing of speech presentation relative to simultaneously applied tACS. However, it is possible that tACS did not change actual speech perception but rather auditory stream segregation. In this study, we tested whether the phase relation between tACS and the rhythm of degraded words, presented in silence, modulates word report accuracy. We found strong evidence for a tACS-induced modulation of speech perception, but only if the stimulation was applied bilaterally using ring electrodes (not for unilateral left hemisphere stimulation with square electrodes). These results were only obtained when data were analyzed using a statistical approach that was identified as optimal in a previous simulation study. The effect was driven by a phasic disruption of word report scores. Our results suggest a causal role of neural entrainment for speech perception and emphasize the importance of optimizing stimulation protocols and statistical approaches for brain stimulation research.


Subject(s)
Cerebral Cortex/physiology, Speech Perception/physiology, Transcranial Direct Current Stimulation, Adult, Female, Humans, Male, Placebos, Psychomotor Performance/physiology, Time Factors, Young Adult
8.
J Cogn Neurosci ; 32(3): 403-425, 2020 03.
Article in English | MEDLINE | ID: mdl-31682564

ABSTRACT

Semantically ambiguous words challenge speech comprehension, particularly when listeners must select a less frequent (subordinate) meaning at disambiguation. Using combined magnetoencephalography (MEG) and EEG, we measured neural responses associated with distinct cognitive operations during semantic ambiguity resolution in spoken sentences: (i) initial activation and selection of meanings in response to an ambiguous word and (ii) sentence reinterpretation in response to subsequent disambiguation to a subordinate meaning. Ambiguous words elicited an increased neural response approximately 400-800 msec after their acoustic offset compared with unambiguous control words in left frontotemporal MEG sensors, corresponding to sources in bilateral frontotemporal brain regions. This response may reflect increased demands on processes by which multiple alternative meanings are activated and maintained until later selection. Disambiguating words heard after an ambiguous word were associated with marginally increased neural activity over bilateral temporal MEG sensors and a central cluster of EEG electrodes, which localized to similar bilateral frontal and left temporal regions. This later neural response may reflect effortful semantic integration or elicitation of prediction errors that guide reinterpretation of previously selected word meanings. Across participants, the amplitude of the ambiguity response showed a marginal positive correlation with comprehension scores, suggesting that sentence comprehension benefits from additional processing around the time of an ambiguous word. Better comprehenders may have increased availability of subordinate meanings, perhaps due to higher quality lexical representations and reflected in a positive correlation between vocabulary size and comprehension success.


Subject(s)
Brain/physiology, Comprehension/physiology, Semantics, Speech Perception/physiology, Adult, Electroencephalography, Female, Humans, Magnetoencephalography, Male, Vocabulary, Young Adult
9.
J Neurosci ; 38(27): 6076-6089, 2018 07 04.
Article in English | MEDLINE | ID: mdl-29891730

ABSTRACT

Humans use prior expectations to improve perception, especially of sensory signals that are degraded or ambiguous. However, if sensory input deviates from prior expectations, then correct perception depends on adjusting or rejecting prior expectations. Failure to adjust or reject the prior leads to perceptual illusions, especially if there is partial overlap (and thus partial mismatch) between expectations and input. With speech, "slips of the ear" occur when expectations lead to misperception. For instance, an entomologist might be more likely to hear "The ants are my friends" for "The answer, my friend" (in the Bob Dylan song Blowin' in the Wind). Here, we contrast two mechanisms by which prior expectations may lead to misperception of degraded speech. First, clear representations of the common sounds in the prior and input (i.e., expected sounds) may lead to incorrect confirmation of the prior. Second, insufficient representations of sounds that deviate between prior and input (i.e., prediction errors) could lead to deception. We used crossmodal predictions from written words that partially match degraded speech to compare neural responses when male and female human listeners were deceived into accepting the prior or correctly rejected it. Combined behavioral and multivariate representational similarity analysis of fMRI data shows that veridical perception of degraded speech is signaled by representations of prediction error in the left superior temporal sulcus. Instead of using top-down processes to support perception of expected sensory input, our findings suggest that the strength of neural prediction error representations distinguishes correct perception from misperception.

SIGNIFICANCE STATEMENT: Misperceiving spoken words is an everyday experience, with outcomes that range from shared amusement to serious miscommunication. For hearing-impaired individuals, frequent misperception can lead to social withdrawal and isolation, with severe consequences for wellbeing. In this work, we specify the neural mechanisms by which prior expectations, which are so often helpful for perception, can lead to misperception of degraded sensory signals. Most descriptive theories of illusory perception explain misperception as arising from a clear sensory representation of features or sounds that are in common between prior expectations and sensory input. Our work instead provides support for a complementary proposal: that misperception occurs when there is an insufficient sensory representation of the deviation between expectations and sensory signals.


Subject(s)
Brain/physiology, Illusions/physiology, Motivation/physiology, Speech Perception/physiology, Adolescent, Adult, Brain Mapping/methods, Female, Humans, Magnetic Resonance Imaging, Male, Young Adult
10.
J Neurosci ; 38(11): 2844-2853, 2018 03 14.
Article in English | MEDLINE | ID: mdl-29440556

ABSTRACT

Auditory signals arrive at the ear as a mixture that the brain must decompose into distinct sources based to a large extent on acoustic properties of the sounds. An important question concerns whether listeners have voluntary control over how many sources they perceive. This has been studied using pure high (H) and low (L) tones presented in the repeating pattern HLH-HLH-, which can form a bistable percept heard either as an integrated whole (HLH-) or as segregated into high (H-H-) and low (-L-) sequences. Although instructing listeners to try to integrate or segregate sounds affects reports of what they hear, this could reflect a response bias rather than a perceptual effect. We had human listeners (15 males, 12 females) continuously report their perception of such sequences and recorded neural activity using MEG. During neutral listening, a classifier trained on patterns of neural activity distinguished between periods of integrated and segregated perception. In other conditions, participants tried to influence their perception by allocating attention either to the whole sequence or to a subset of the sounds. They reported hearing the desired percept for a greater proportion of time than when listening neutrally. Critically, neural activity supported these reports; stimulus-locked brain responses in auditory cortex were more likely to resemble the signature of segregation when participants tried to hear segregation than when attempting to perceive integration. These results indicate that listeners can influence how many sound sources they perceive, as reflected in neural responses that track both the input and its perceptual organization.

SIGNIFICANCE STATEMENT: Can we consciously influence our perception of the external world? We address this question using sound sequences that can be heard either as coming from a single source or as two distinct auditory streams. Listeners reported spontaneous changes in their perception between these two interpretations while we recorded neural activity to identify signatures of such integration and segregation. They also indicated that they could, to some extent, choose between these alternatives. This claim was supported by corresponding changes in responses in auditory cortex. By linking neural and behavioral correlates of perception, we demonstrate that the number of objects that we perceive can depend not only on the physical attributes of our environment, but also on how we intend to experience it.
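A minimal sketch of the decoding logic described above, using simulated sensor data rather than the study's MEG recordings: a classifier is trained on labelled neutral-listening epochs and then applied to epochs from another condition. The feature dimensions, effect sizes, and condition names are assumptions for illustration only.

# Illustrative sketch (simulated features, not the study's decoder): train a
# classifier on neutral-listening periods labelled integrated vs. segregated,
# then score how often the "segregated" signature appears in another condition.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_epochs, n_sensors = 200, 30
labels = rng.integers(0, 2, n_epochs)                 # 0 = integrated, 1 = segregated
X = rng.standard_normal((n_epochs, n_sensors)) + 0.7 * labels[:, None]  # separable by construction

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, labels, cv=5).mean()
print(f"Cross-validated decoding accuracy (neutral listening): {acc:.2f}")

# Apply the trained classifier to epochs from an assumed "attend to segregation" condition
clf.fit(X, labels)
attend_segregated = rng.standard_normal((50, n_sensors)) + 0.7   # assumed shift toward the segregated pattern
print("Proportion classified as segregated:", clf.predict(attend_segregated).mean())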


Subject(s)
Auditory Perception/physiology, Intention, Acoustic Stimulation, Adolescent, Adult, Attention/physiology, Auditory Cortex/physiology, Electroencephalography, Female, Humans, Magnetoencephalography, Male, Sound, Young Adult
11.
Neuroimage ; 202: 116175, 2019 11 15.
Article in English | MEDLINE | ID: mdl-31499178

ABSTRACT

Research on whether perception or other processes depend on the phase of neural oscillations is rapidly gaining popularity. However, it is unknown which methods are optimally suited to evaluate the hypothesized phase effect. Using a simulation approach, we here test the ability of different methods to detect such an effect on dichotomous (e.g., "hit" vs "miss") and continuous (e.g., scalp potentials) response variables. We manipulated parameters that characterise the phase effect or define the experimental approach to test for this effect. For each parameter combination and response variable, we identified an optimal method. We found that methods regressing single-trial responses on circular (sine and cosine) predictors perform best for all of the simulated parameters, regardless of the nature of the response variable (dichotomous or continuous). In sum, our study lays a foundation for optimized experimental designs and analyses in future studies investigating the role of phase for neural and behavioural responses. We provide MATLAB code for the statistical methods tested.
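The best-performing approach, regressing single-trial responses on sine and cosine of phase, can be illustrated for a dichotomous outcome. The paper provides MATLAB code for the methods tested; the Python sketch below is only an illustrative analogue with simulated trials, and the effect size and preferred phase are made-up values.

# Illustrative Python analogue (the paper itself provides MATLAB code): regress
# single-trial hit/miss outcomes on sine and cosine of the oscillatory phase,
# so that a reliable joint effect of the two predictors indicates phasic modulation.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n_trials = 400
phase = rng.uniform(0, 2 * np.pi, n_trials)
# Simulated phasic effect: hit probability peaks at phase = pi/3 (assumed values)
p_hit = 0.5 + 0.15 * np.cos(phase - np.pi / 3)
hits = rng.binomial(1, p_hit)

X = sm.add_constant(np.column_stack([np.sin(phase), np.cos(phase)]))
fit = sm.Logit(hits, X).fit(disp=0)

amplitude = np.hypot(fit.params[1], fit.params[2])     # strength of the phase effect
preferred = np.arctan2(fit.params[1], fit.params[2])   # phase of best performance
print(f"modulation amplitude = {amplitude:.2f}, preferred phase = {preferred:.2f} rad")
print("p-values (const, sin, cos):", np.round(fit.pvalues, 4))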


Subject(s)
Brain/physiology, Models, Neurological, Neurons/physiology, Perception/physiology, Computer Simulation, Data Interpretation, Statistical, Electroencephalography, Humans, Magnetoencephalography, Transcranial Direct Current Stimulation
12.
PLoS Biol ; 14(11): e1002577, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27846209

ABSTRACT

Successful perception depends on combining sensory input with prior knowledge. However, the underlying mechanism by which these two sources of information are combined is unknown. In speech perception, as in other domains, two functionally distinct coding schemes have been proposed for how expectations influence representation of sensory evidence. Traditional models suggest that expected features of the speech input are enhanced or sharpened via interactive activation (Sharpened Signals). Conversely, Predictive Coding suggests that expected features are suppressed so that unexpected features of the speech input (Prediction Errors) are processed further. The present work is aimed at distinguishing between these two accounts of how prior knowledge influences speech perception. By combining behavioural, univariate, and multivariate fMRI measures of how sensory detail and prior expectations influence speech perception with computational modelling, we provide evidence in favour of Prediction Error computations. Increased sensory detail and informative expectations have additive behavioural and univariate neural effects because they both improve the accuracy of word report and reduce the BOLD signal in lateral temporal lobe regions. However, sensory detail and informative expectations have interacting effects on speech representations shown by multivariate fMRI in the posterior superior temporal sulcus. When prior knowledge was absent, increased sensory detail enhanced the amount of speech information measured in superior temporal multivoxel patterns, but with informative expectations, increased sensory detail reduced the amount of measured information. Computational simulations of Sharpened Signals and Prediction Errors during speech perception could both explain these behavioural and univariate fMRI observations. However, the multivariate fMRI observations were uniquely simulated by a Prediction Error and not a Sharpened Signal model. The interaction between prior expectation and sensory detail provides evidence for a Predictive Coding account of speech perception. Our work establishes methods that can be used to distinguish representations of Prediction Error and Sharpened Signals in other perceptual domains.
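To make the two coding schemes concrete, the toy Python sketch below implements the definitions contrasted above: sharpening enhances expected features, whereas prediction error subtracts them, so a matching expectation leaves little residual error when sensory detail is high. This is a definition-level illustration with made-up feature vectors and weights, not the paper's actual simulations.

# Toy contrast of the two coding schemes in the abstract (not the paper's
# simulations): what each scheme represents as sensory detail varies.
import numpy as np

rng = np.random.default_rng(5)
n_features = 20
heard = rng.random(n_features)                     # feature vector of the spoken word (made up)

def encode(sensory_detail, prior):
    """Return (sharpened, prediction_error) representations of the input.
    sensory_detail in (0, 1]; prior is the expected feature vector (or zeros)."""
    noise = (1.0 - sensory_detail) * rng.standard_normal(n_features)
    sensory_input = heard + noise
    sharpened = sensory_input * (1.0 + prior)      # expected features are enhanced
    prediction_error = sensory_input - prior       # expected features are subtracted
    return sharpened, prediction_error

# With an informative (matching) expectation, the residual prediction error
# approaches zero as sensory detail increases, whereas the sharpened signal
# retains the enhanced feature pattern.
for detail in (0.3, 0.9):
    sharp, pe = encode(detail, prior=heard)
    print(f"detail={detail}: |sharpened| = {np.linalg.norm(sharp):.2f}, "
          f"|prediction error| = {np.linalg.norm(pe):.2f}")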


Subject(s)
Magnetic Resonance Imaging/methods, Speech Perception, Behavior, Humans, Models, Theoretical, Multivariate Analysis, Temporal Lobe/physiology
13.
Proc Natl Acad Sci U S A ; 113(12): E1747-56, 2016 Mar 22.
Article in English | MEDLINE | ID: mdl-26957596

ABSTRACT

Human perception is shaped by past experience on multiple timescales. Sudden and dramatic changes in perception occur when prior knowledge or expectations match stimulus content. These immediate effects contrast with the longer-term, more gradual improvements that are characteristic of perceptual learning. Despite extensive investigation of these two experience-dependent phenomena, there is considerable debate about whether they result from common or dissociable neural mechanisms. Here we test single- and dual-mechanism accounts of experience-dependent changes in perception using concurrent magnetoencephalographic and EEG recordings of neural responses evoked by degraded speech. When speech clarity was enhanced by prior knowledge obtained from matching text, we observed reduced neural activity in a peri-auditory region of the superior temporal gyrus (STG). Critically, longer-term improvements in the accuracy of speech recognition following perceptual learning resulted in reduced activity in a nearly identical STG region. Moreover, short-term neural changes caused by prior knowledge and longer-term neural changes arising from perceptual learning were correlated across subjects with the magnitude of learning-induced changes in recognition accuracy. These experience-dependent effects on neural processing could be dissociated from the neural effect of hearing physically clearer speech, which similarly enhanced perception but increased rather than decreased STG responses. Hence, the observed neural effects of prior knowledge and perceptual learning cannot be attributed to epiphenomenal changes in listening effort that accompany enhanced perception. Instead, our results support a predictive coding account of speech perception; computational simulations show how a single mechanism, minimization of prediction error, can drive immediate perceptual effects of prior knowledge and longer-term perceptual learning of degraded speech.


Subject(s)
Models, Neurological, Phonetics, Speech Intelligibility, Speech Perception/physiology, Temporal Lobe/physiology, Adolescent, Adult, Brain Mapping, Computer Simulation, Electroencephalography, Female, Humans, Learning/physiology, Magnetoencephalography, Male, Multimodal Imaging, Time Factors, Young Adult
14.
J Cogn Neurosci ; 29(5): 919-936, 2017 May.
Article in English | MEDLINE | ID: mdl-28129061

ABSTRACT

Speech perception and comprehension are often challenged by the need to recognize speech sounds that are degraded or ambiguous. Here, we explore the cognitive and neural mechanisms involved in resolving ambiguity in the identity of speech sounds using syllables that contain ambiguous phonetic segments (e.g., intermediate sounds between /b/ and /g/ as in "blade" and "glade"). We used an audio-morphing procedure to create a large set of natural sounding minimal pairs that contain phonetically ambiguous onset or offset consonants (differing in place, manner, or voicing). These ambiguous segments occurred in different lexical contexts (i.e., in words or pseudowords, such as blade-glade or blem-glem) and in different phonological environments (i.e., with neighboring syllables that differed in lexical status, such as blouse-glouse). These stimuli allowed us to explore the impact of phonetic ambiguity on the speed and accuracy of lexical decision responses (Experiment 1), semantic categorization responses (Experiment 2), and the magnitude of BOLD fMRI responses during attentive comprehension (Experiment 3). For both behavioral and neural measures, observed effects of phonetic ambiguity were influenced by lexical context leading to slower responses and increased activity in the left inferior frontal gyrus for high-ambiguity syllables that distinguish pairs of words, but not for equivalent pseudowords. These findings suggest lexical involvement in the resolution of phonetic ambiguity. Implications for speech perception and the role of inferior frontal regions are discussed.


Subject(s)
Brain Mapping/methods, Comprehension/physiology, Phonetics, Prefrontal Cortex/physiology, Psycholinguistics, Speech Perception/physiology, Adult, Female, Humans, Magnetic Resonance Imaging, Male, Prefrontal Cortex/diagnostic imaging, Recognition, Psychology/physiology, Young Adult
15.
Cogn Psychol ; 98: 73-101, 2017 11.
Article in English | MEDLINE | ID: mdl-28881224

ABSTRACT

Speech carries accent information relevant to determining the speaker's linguistic and social background. A series of web-based experiments demonstrate that accent cues can modulate access to word meaning. In Experiments 1-3, British participants were more likely to retrieve the American dominant meaning (e.g., hat meaning of "bonnet") in a word association task if they heard the words in an American than a British accent. In addition, results from a speeded semantic decision task (Experiment 4) and sentence comprehension task (Experiment 5) confirm that accent modulates on-line meaning retrieval such that comprehension of ambiguous words is easier when the relevant word meaning is dominant in the speaker's dialect. Critically, neutral-accent speech items, created by morphing British- and American-accented recordings, were interpreted in a similar way to accented words when embedded in a context of accented words (Experiment 2). This finding indicates that listeners do not use accent to guide meaning retrieval on a word-by-word basis; instead they use accent information to determine the dialectic identity of a speaker and then use their experience of that dialect to guide meaning access for all words spoken by that person. These results motivate a speaker-model account of spoken word recognition in which comprehenders determine key characteristics of their interlocutor and use this knowledge to guide word meaning access.


Subject(s)
Recognition, Psychology, Speech Perception/physiology, Speech/physiology, Adult, Comprehension, Female, Humans, Male, United Kingdom, United States
16.
Cereb Cortex ; 25(12): 4772-88, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26157026

ABSTRACT

How humans extract the identity of speech sounds from highly variable acoustic signals remains unclear. Here, we use searchlight representational similarity analysis (RSA) to localize and characterize neural representations of syllables at different levels of the hierarchically organized temporo-frontal pathways for speech perception. We asked participants to listen to spoken syllables that differed considerably in their surface acoustic form by changing speaker and degrading surface acoustics using noise-vocoding and sine wave synthesis, while we recorded neural responses with functional magnetic resonance imaging. We found evidence for a graded hierarchy of abstraction across the brain. At the peak of the hierarchy, neural representations in somatomotor cortex encoded syllable identity but not surface acoustic form; at the base of the hierarchy, primary auditory cortex showed the reverse. In contrast, bilateral temporal cortex exhibited an intermediate response, encoding both syllable identity and the surface acoustic form of speech. Regions of somatomotor cortex associated with encoding syllable identity in perception were also engaged when producing the same syllables in a separate session. These findings are consistent with a hierarchical account of how variable acoustic signals are transformed into abstract representations of the identity of speech sounds.


Subject(s)
Frontal Lobe/physiology, Sensorimotor Cortex/physiology, Speech Perception/physiology, Speech, Temporal Lobe/physiology, Adolescent, Adult, Auditory Threshold, Data Interpretation, Statistical, Female, Humans, Male, Multivariate Analysis, Neural Pathways/physiology, Noise, Young Adult
17.
J Cogn Neurosci ; 27(9): 1738-51, 2015 Sep.
Article in English | MEDLINE | ID: mdl-25848683

ABSTRACT

Visual word recognition is often described as automatic, but the functional locus of top-down effects is still a matter of debate. Do task demands modulate how information is retrieved, or only how it is used? We used EEG/MEG recordings to assess whether, when, and how task contexts modify early retrieval of specific psycholinguistic information in occipitotemporal cortex, an area likely to contribute to early stages of visual word processing. Using a parametric approach, we analyzed the spatiotemporal response patterns of occipitotemporal cortex for orthographic, lexical, and semantic variables in three psycholinguistic tasks: silent reading, lexical decision, and semantic decision. Task modulation of word frequency and imageability effects occurred simultaneously in ventral occipitotemporal regions (in the vicinity of the putative visual word form area) around 160 msec, following task effects on orthographic typicality around 100 msec. Frequency and typicality also produced task-independent effects in anterior temporal lobe regions after 200 msec. The early task modulation for several specific psycholinguistic variables indicates that occipitotemporal areas integrate perceptual input with prior knowledge in a task-dependent manner. Still, later task-independent effects in anterior temporal lobes suggest that word recognition eventually leads to retrieval of semantic information irrespective of task demands. We conclude that even a highly overlearned visual task like word recognition should be described as flexible rather than automatic.


Subject(s)
Brain/physiology, Pattern Recognition, Visual/physiology, Reading, Adult, Brain Mapping, Electroencephalography, Female, Humans, Language Tests, Magnetoencephalography, Male, Neuropsychological Tests, Photic Stimulation, Psycholinguistics, Time Factors
18.
Cogn Psychol ; 79: 1-39, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25898155

ABSTRACT

The extraction of general knowledge from individual episodes is critical if we are to learn new knowledge or abilities. Here we uncover some of the key cognitive mechanisms that characterise this process in the domain of language learning. In five experiments adult participants learned new morphological units embedded in fictitious words created by attaching new affixes (e.g., -afe) to familiar word stems (e.g., "sleepafe is a participant in a study about the effects of sleep"). Participants' ability to generalise semantic knowledge about the affixes was tested using tasks requiring the comprehension and production of novel words containing a trained affix (e.g., sailafe). We manipulated the delay between training and test (Experiment 1), the number of unique exemplars provided for each affix during training (Experiment 2), and the consistency of the form-to-meaning mapping of the affixes (Experiments 3-5). In a task where speeded online language processing is required (semantic priming), generalisation was achieved only after a memory consolidation opportunity following training, and only if the training included a sufficient number of unique exemplars. Semantic inconsistency disrupted speeded generalisation unless consolidation was allowed to operate on one of the two affix-meanings before introducing inconsistencies. In contrast, in tasks that required slow, deliberate reasoning, generalisation could be achieved largely irrespective of the above constraints. These findings point to two different mechanisms of generalisation that have different cognitive demands and rely on different types of memory representations.


Subject(s)
Generalization, Psychological, Language Development, Adult, Comprehension, Female, Humans, Male, Memory, Time Factors, Young Adult
19.
J Cogn Neurosci ; 26(9): 2128-54, 2014 Sep.
Article in English | MEDLINE | ID: mdl-24666161

ABSTRACT

Understanding the neural systems that underpin reading acquisition is key if neuroscientific findings are to inform educational practice. We provide a unique window into these systems by teaching 19 adults to read 24 novel words written in unfamiliar letters and to name 24 novel objects while in an MRI scanner. Behavioral performance on trained items was equivalent for the two stimulus types. However, componential letter-sound associations were extracted when learning to read, as shown by correct reading of untrained words, whereas object-name associations were holistic and arbitrary. Activity in bilateral anterior fusiform gyri was greater during object name learning than learning to read, and ROI analyses indicated that left mid-fusiform activity was predictive of success in object name learning but not in learning to read. In contrast, activity in bilateral parietal cortices was predictive of success for both stimulus types but was greater during learning and recall of written word pronunciations relative to object names. We argue that mid-to-anterior fusiform gyri preferentially process whole items and contribute to learning their spoken form associations, processes that are required for skilled reading. In contrast, parietal cortices preferentially process componential visual-verbal mappings, a process that is crucial for early reading development.


Subject(s)
Brain Mapping, Cerebral Cortex/physiology, Names, Reading, Verbal Learning/physiology, Vocabulary, Adolescent, Adult, Association Learning/physiology, Carbamide Peroxide, Cerebral Cortex/blood supply, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Mental Recall/physiology, Peroxides/blood, Reaction Time/physiology, Time Factors, Urea/analogs & derivatives, Urea/blood, Young Adult
20.
Neuroimage ; 99: 419-33, 2014 Oct 01.
Article in English | MEDLINE | ID: mdl-24904992

ABSTRACT

It has been suggested that differential neural activity in imaging studies is most informative if it is independent of response time (RT) differences. However, others view RT as a behavioural index of key cognitive processes, which is likely linked to underlying neural activity. Here, we reconcile these views using the effort and engagement framework developed by Taylor, Rastle, and Davis (2013) and data from the domain of reading aloud. We propose that differences in neural engagement should be independent of RT, whereas differences in neural effort should co-vary with RT. We illustrate these different mechanisms using data from an fMRI study of neural activity during reading aloud of regular words, irregular words, and pseudowords. In line with our proposals, activation revealed by contrasts designed to tap differences in neural engagement (e.g., words are meaningful and therefore engage semantic representations more than pseudowords) survived correction for RT, whereas activation for contrasts designed to tap differences in neural effort (e.g., it is more difficult to generate the pronunciation of pseudowords than words) correlated with RT. However, even for contrasts designed to tap neural effort, activity remained after factoring out the RT-BOLD response correlation. This may reveal unpredicted differences in neural engagement (e.g., learning phonological forms for pseudowords > words) that could further the development of cognitive models of reading aloud. Our framework provides a theoretically well-grounded and easily implemented method for analysing and interpreting RT effects in neuroimaging studies of cognitive processes.
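The framework's distinction can be checked in practice by asking whether a condition effect on a BOLD summary survives adding trial RT as a covariate. The Python sketch below does this on simulated data in which activity tracks RT (an "effort"-like pattern), so the condition effect weakens once RT is modelled; all variable names and numbers are illustrative assumptions, not the study's fMRI model.

# Minimal sketch (simulated data, not the study's fMRI model): test whether a
# condition difference in a BOLD summary survives adding RT as a covariate,
# as the engagement/effort framework predicts only for "engagement" contrasts.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 200
is_pseudoword = rng.integers(0, 2, n)                       # 0 = word, 1 = pseudoword
rt = 600 + 80 * is_pseudoword + rng.normal(0, 50, n)        # pseudowords read more slowly (assumed)
bold = 1.0 + 0.004 * rt + rng.normal(0, 0.3, n)             # "effort"-like: activity tracks RT

X_condition = sm.add_constant(is_pseudoword.astype(float))
X_with_rt = sm.add_constant(np.column_stack([is_pseudoword, rt - rt.mean()]))

print("condition effect, RT not modelled: p =",
      round(sm.OLS(bold, X_condition).fit().pvalues[1], 4))
print("condition effect, RT included:    p =",
      round(sm.OLS(bold, X_with_rt).fit().pvalues[1], 4))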


Subject(s)
Brain/physiology, Magnetic Resonance Imaging/methods, Reaction Time/physiology, Adolescent, Adult, Brain Mapping, Female, Humans, Image Processing, Computer-Assisted, Learning/physiology, Male, Prefrontal Cortex/physiology, Reading, Young Adult