Results 1 - 14 of 14
1.
Psychon Bull Rev ; 2023 Oct 17.
Article in English | MEDLINE | ID: mdl-37848661

ABSTRACT

Simulation accounts of speech perception posit that speech is covertly imitated to support perception in a top-down manner. Behaviourally, covert imitation is measured through the stimulus-response compatibility (SRC) task. In each trial of a speech SRC task, participants produce a target speech sound whilst perceiving a speech distractor that either matches the target (compatible condition) or does not (incompatible condition). The degree to which the distractor is covertly imitated is captured by the automatic imitation effect, computed as the difference in response times (RTs) between compatible and incompatible trials. Simulation accounts disagree on whether covert imitation is enhanced when speech perception is challenging or instead when the speech signal is most familiar to the speaker. To test these accounts, we conducted three experiments in which participants completed SRC tasks with native and non-native sounds. Experiment 1 uncovered larger automatic imitation effects in an SRC task with non-native sounds than with native sounds. Experiment 2 replicated the finding online, demonstrating its robustness and the applicability of speech SRC tasks online. Experiment 3 intermixed native and non-native sounds within a single SRC task to disentangle effects of perceiving non-native sounds from confounding effects of producing non-native speech actions. This last experiment confirmed that automatic imitation is enhanced for non-native speech distractors, supporting a compensatory function of covert imitation in speech perception. The experiment also uncovered a separate effect of producing non-native speech actions on enhancing automatic imitation effects.
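
The dependent measure described above can be made concrete with a short sketch. The following Python snippet (not the authors' analysis code; the trial values and column names are invented for illustration) shows how an automatic imitation effect would be computed from trial-level response times as the incompatible-minus-compatible RT difference.

```python
import pandas as pd

# Hypothetical trial-level data from a speech SRC task: one row per trial,
# with participant ID, compatibility condition, and response time in ms.
trials = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "condition": ["compatible", "incompatible"] * 4,
    "rt_ms": [452, 497, 461, 510, 430, 468, 425, 474],
})

# Mean RT per participant and condition.
mean_rts = trials.groupby(["participant", "condition"])["rt_ms"].mean().unstack()

# Automatic imitation effect: incompatible minus compatible RT.
# Larger values indicate stronger covert imitation of the distractor.
mean_rts["imitation_effect_ms"] = mean_rts["incompatible"] - mean_rts["compatible"]
print(mean_rts)
```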

2.
Psychon Bull Rev ; 26(5): 1711-1718, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31197755

ABSTRACT

The observation-execution links underlying automatic-imitation processes are suggested to result from associative sensorimotor experience of performing and watching the same actions. Past research supporting the associative sequence learning (ASL) model has demonstrated that sensorimotor training modulates automatic imitation of perceptually transparent manual actions, but ASL has been criticized for not being able to account for opaque actions, such as orofacial movements that include visual speech. To investigate whether the observation-execution links underlying opaque actions are as flexible as has been demonstrated for transparent actions, we tested whether sensorimotor training modulated the automatic imitation of visual speech. Automatic imitation was defined as a facilitation in response times for syllable articulation (ba or da) when in the presence of a compatible visual speech distractor, relative to when in the presence of an incompatible distractor. Participants received either mirror (say /ba/ when the speaker silently says /ba/, and likewise for /da/) or countermirror (say /da/ when the speaker silently says /ba/, and vice versa) training, and automatic imitation was measured before and after training. The automatic-imitation effect was enhanced following mirror training and reduced following countermirror training, suggesting that sensorimotor learning plays a critical role in linking speech perception and production, and that the links between these two systems remain flexible in adulthood. Additionally, as compared to manual movements, automatic imitation of speech was susceptible to mirror training but was relatively resilient to countermirror training. We propose that social factors and the multimodal nature of speech might account for the resilience to countermirror training of sensorimotor associations of speech actions.
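
As a sketch of how a training effect of this kind could be quantified, the Python snippet below compares hypothetical automatic-imitation effects measured before and after one type of training with a paired t-test. The numbers are invented and the test choice is an assumption, not a description of the published analysis.

```python
import numpy as np
from scipy import stats

# Hypothetical automatic-imitation effects (ms) for one training group,
# measured before and after sensorimotor training; one value per participant.
pre_training = np.array([38, 41, 29, 45, 33, 40, 36, 44])
post_training = np.array([52, 49, 41, 60, 47, 55, 50, 58])

# Paired comparison of the imitation effect before vs after training.
t_stat, p_value = stats.ttest_rel(post_training, pre_training)
print(f"mean change = {np.mean(post_training - pre_training):.1f} ms, "
      f"t = {t_stat:.2f}, p = {p_value:.3f}")
```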


Subject(s)
Attention/physiology , Imitative Behavior/physiology , Learning/physiology , Speech Perception/physiology , Speech/physiology , Visual Perception/physiology , Adult , Female , Humans , Male
3.
Phonetica ; 76(2-3): 142-162, 2019.
Article in English | MEDLINE | ID: mdl-31112959

ABSTRACT

As a result of complex international migration patterns, listeners in large urban centres such as London, UK, likely encounter large amounts of variation in spoken language. However, although dealing with variation is crucial to communication, relatively little is known about how the ability to do this develops. Still less is known about how this might be affected by language background. The current study investigates whether early experience with variation, specifically growing up bilingually in London, affects accent categorization. Sixty children (30 monolingual, 30 bilingual) aged 5-7 years were tested on their ability to comprehend and categorize talkers in 2 out of 3 accents: a home variety, an unfamiliar regional variety, and an unfamiliar foreign-accented variety. All children demonstrated high, above-chance performance in the comprehension task, but language background significantly affected their ability to categorize talkers. Bilinguals were able to categorize talkers in all accent conditions, whereas monolingual children were only able to categorize talkers in the home-foreign accent condition. Overall, the results are consistent with an approach in which gradient representations of accent variation emerge alongside an understanding of how variation is used meaningfully within a child's environment.


Subject(s)
Child Language , Comprehension , Multilingualism , Speech Perception , Child , Child, Preschool , Female , Humans , Linguistics , Logistic Models , London , Male , Speech Intelligibility
4.
Neuroimage ; 159: 18-31, 2017 10 01.
Article in English | MEDLINE | ID: mdl-28669904

ABSTRACT

Sensorimotor transformation (ST) may be a critical process in mapping perceived speech input onto non-native (L2) phonemes, in support of subsequent speech production. Yet, little is known concerning the role of ST with respect to L2 speech, particularly where learned L2 phones (e.g., vowels) must be produced in more complex lexical contexts (e.g., multi-syllabic words). Here, we charted the behavioral and neural outcomes of producing trained L2 vowels at word level, using a speech imitation paradigm and functional MRI. We asked whether participants would be able to faithfully imitate trained L2 vowels when they occurred in non-words of varying complexity (one or three syllables). Moreover, we related individual differences in imitation success during training to BOLD activation during ST (i.e., pre-imitation listening), and during later imitation. We predicted that superior temporal and peri-Sylvian speech regions would show increased activation as a function of item complexity and non-nativeness of vowels, during ST. We further anticipated that pre-scan acoustic learning performance would predict BOLD activation for non-native (vs. native) speech during ST and imitation. We found individual differences in imitation success for training on the non-native vowel tokens in isolation; these were preserved in a subsequent task, during imitation of mono- and trisyllabic words containing those vowels. fMRI data revealed a widespread network involved in ST, modulated by both vowel nativeness and utterance complexity: superior temporal activation increased monotonically with complexity, showing greater activation for non-native than native vowels when presented in isolation and in trisyllables, but not in monosyllables. Individual differences analyses showed that learning versus lack of improvement on the non-native vowel during pre-scan training predicted increased ST activation for non-native compared with native items, at insular cortex, pre-SMA/SMA, and cerebellum. Our results hold implications for the importance of ST as a process underlying successful imitation of non-native speech.
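
The individual-differences logic (pre-scan learning predicting later BOLD activation) can be illustrated with a minimal Python sketch. The values, the region-of-interest contrast, and the use of a simple linear regression are assumptions for illustration rather than the study's actual modelling pipeline.

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant values: improvement in vowel imitation during
# pre-scan training, and the non-native-minus-native BOLD contrast (arbitrary
# units) extracted from an ST-related region of interest.
learning_improvement = np.array([0.02, 0.10, 0.15, 0.05, 0.22, 0.18, 0.08, 0.30])
roi_contrast = np.array([0.10, 0.25, 0.31, 0.12, 0.45, 0.38, 0.20, 0.52])

# Does learning predict the size of the non-native > native activation contrast?
result = stats.linregress(learning_improvement, roi_contrast)
print(f"slope = {result.slope:.2f}, r = {result.rvalue:.2f}, p = {result.pvalue:.4f}")
```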


Subject(s)
Brain/physiology , Learning/physiology , Multilingualism , Speech/physiology , Adolescent , Adult , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Young Adult
5.
Cereb Cortex ; 27(5): 3064-3079, 2017 05 01.
Article in English | MEDLINE | ID: mdl-28334401

ABSTRACT

Imitating speech necessitates the transformation from sensory targets to vocal tract motor output, yet little is known about the representational basis of this process in the human brain. Here, we address this question by using real-time MR imaging (rtMRI) of the vocal tract and functional MRI (fMRI) of the brain in a speech imitation paradigm. Participants trained on imitating a native vowel and a similar nonnative vowel that required lip rounding. Later, participants imitated these vowels and an untrained vowel pair during separate fMRI and rtMRI runs. Univariate fMRI analyses revealed that regions including left inferior frontal gyrus were more active during sensorimotor transformation (ST) and production of nonnative vowels, compared with native vowels; further, ST for nonnative vowels activated somatomotor cortex bilaterally, compared with ST of native vowels. Using representational similarity analysis (RSA) with test models constructed from participants' vocal tract images and from stimulus formant distances, searchlight analyses of fMRI data showed that either type of model could be represented in somatomotor, temporal, cerebellar, and hippocampal neural activation patterns during ST. We thus provide the first evidence of widespread and robust cortical and subcortical neural representation of vocal tract and/or formant parameters, during prearticulatory ST.
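
The core computation behind an RSA searchlight can be sketched briefly. The Python snippet below builds a neural and a model representational dissimilarity matrix (RDM) for a handful of conditions and rank-correlates them; the data are random placeholders, and a real searchlight would repeat this at every sphere location rather than once.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical data: activation patterns for 4 vowel conditions across 50 voxels
# in one searchlight sphere, plus 2 model features per condition (e.g., formants).
neural_patterns = rng.normal(size=(4, 50))
model_features = np.array([[300, 2300], [350, 2100], [400, 800], [450, 1000]], float)

# Representational dissimilarity matrices (condensed form): pairwise distances
# between conditions in neural pattern space and in model feature space.
neural_rdm = pdist(neural_patterns, metric="correlation")
model_rdm = pdist(model_features, metric="euclidean")

# RSA statistic: rank correlation between the two RDMs.
rho, _ = spearmanr(neural_rdm, model_rdm)
print(f"model-neural RDM correlation: rho = {rho:.2f}")
```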


Subject(s)
Brain Mapping , Larynx/diagnostic imaging , Lip/diagnostic imaging , Sensorimotor Cortex/physiology , Speech/physiology , Tongue/diagnostic imaging , Adult , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Oxygen/blood , Palate, Soft/diagnostic imaging , Sensorimotor Cortex/diagnostic imaging , Speech Acoustics , Young Adult
6.
J Deaf Stud Deaf Educ ; 21(1): 70-82, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26405209

ABSTRACT

Short-term linguistic accommodation has been observed in a number of spoken language studies. The first of its kind in sign language research, this study aims to investigate the effects of regional varieties in contact and lexical accommodation in British Sign Language (BSL). Twenty-five participants were recruited from Belfast, Glasgow, Manchester, and Newcastle and paired with the same conversational partner. Participants completed a "spot-the-difference" task which elicited a considerable amount of contrasting regionally specific sign data in the participant-confederate dyads. Accommodation was observed during the task, with younger signers accommodating more than older signers. The results are interpreted with reference to the relationship between language contact and lexical accommodation in BSL, and we discuss how further studies could help us better understand how contact and accommodation contribute to language change more generally.


Subject(s)
Hearing Loss/rehabilitation , Persons With Hearing Impairments/rehabilitation , Residence Characteristics , Sign Language , Adolescent , Adult , Comprehension , Empirical Research , Female , Humans , Male , Middle Aged , Social Environment , United Kingdom , Young Adult
7.
Child Dev ; 85(5): 1965-80, 2014.
Article in English | MEDLINE | ID: mdl-25123987

ABSTRACT

The majority of bilingual speech research has focused on simultaneous bilinguals. Yet, in immigrant communities, children are often initially exposed to their family language (L1), before becoming gradually immersed in the host country's language (L2). This is typically referred to as sequential bilingualism. Using a longitudinal design, this study explored the perception and production of the English voicing contrast in 55 children (40 Sylheti-English sequential bilinguals and 15 English monolinguals). Children were tested twice: when they were in nursery (52-month-olds) and 1 year later. Sequential bilinguals' perception and production of English plosives were initially driven by their experience with their L1, but after starting school, changed to match that of their monolingual peers.


Subject(s)
Multilingualism , Speech Perception/physiology , Speech Production Measurement , Speech , Child , Child, Preschool , Female , Humans , Longitudinal Studies , Male , Time Factors
8.
J Acoust Soc Am ; 126(2): 866-77, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19640051

ABSTRACT

This study investigated whether individuals with small and large native-language (L1) vowel inventories learn second-language (L2) vowel systems differently, in order to better understand how L1 categories interfere with new vowel learning. Listener groups whose L1 was Spanish (5 vowels) or German (18 vowels) were given five sessions of high-variability auditory training for English vowels, after having been matched on their pre-test English vowel identification accuracy. Listeners were tested before and after training in terms of their identification accuracy for English vowels, the assimilation of these vowels into their L1 vowel categories, and their best exemplars for English vowels (i.e., perceptual vowel space maps). The results demonstrated that Germans improved more than Spanish speakers, despite the Germans' more crowded L1 vowel space. A subsequent experiment demonstrated that Spanish listeners were able to improve as much as the German group after an additional ten sessions of training, and that both groups were able to retain this learning. The findings suggest that a larger vowel category inventory may facilitate new learning, and support a hypothesis that auditory training improves identification by making the application of existing categories to L2 phonemes more automatic and efficient.


Subject(s)
Language , Learning , Multilingualism , Phonetics , Adult , Analysis of Variance , Cluster Analysis , Humans , Language Tests , Recognition, Psychology , Retention, Psychology , Speech , Speech Perception , Time Factors , Young Adult
9.
J Exp Psychol Hum Percept Perform ; 35(2): 520-9, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19331505

ABSTRACT

This study aimed to determine the relative processing cost associated with comprehension of an unfamiliar native accent under adverse listening conditions. Two sentence verification experiments were conducted in which listeners heard sentences at various signal-to-noise ratios. In Experiment 1, these sentences were spoken in a familiar or an unfamiliar native accent or in two familiar native accents. In Experiment 2, they were spoken in a familiar or unfamiliar native accent or in a nonnative accent. The results indicated that the differences between the native accents influenced the speed of language processing under adverse listening conditions and that this processing speed was modulated by the relative familiarity of the listener with the native accent. Furthermore, the results showed that the processing cost associated with the nonnative accent was larger than for the unfamiliar native accent.
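
Since both experiments present sentences at various signal-to-noise ratios, a brief sketch of how speech and noise are mixed at a target SNR may be useful. The function below is a generic illustration in Python with synthetic signals, not the materials-preparation code used in the study.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db` (dB),
    then return the mixture. Both inputs are 1-D sample arrays of equal length."""
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    # Required noise gain: snr_db = 10 * log10(P_speech / (gain**2 * P_noise))
    gain = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + gain * noise

# Illustrative use with synthetic signals (an experiment would load recordings).
rng = np.random.default_rng(1)
speech = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)  # 1 s "speech" tone
noise = rng.normal(size=16000)                               # white-noise masker
mixture = mix_at_snr(speech, noise, snr_db=-3.0)             # a -3 dB SNR trial
```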


Subject(s)
Comprehension , Field Dependence-Independence , Phonetics , Recognition, Psychology , Speech Perception , Adult , Analysis of Variance , Female , Humans , Male , Noise , Perceptual Masking , Reference Values , Speech Discrimination Tests , Speech Intelligibility , Young Adult
10.
J Exp Psychol Hum Percept Perform ; 34(5): 1305-16, 2008 Oct.
Article in English | MEDLINE | ID: mdl-18823213

ABSTRACT

The present study investigated the perception and production of English /w/ and /v/ by native speakers of Sinhala, German, and Dutch, with the aim of examining how their native language phonetic processing affected the acquisition of these phonemes. Subjects performed a battery of tests that assessed their identification accuracy for natural recordings, their degree of spoken accent, their relative use of place and manner cues, the assimilation of these phonemes into native-language categories, and their perceptual maps (i.e., multidimensional scaling solutions) for these phonemes. Most Sinhala speakers had near-chance identification accuracy, Germans ranged from chance to 100% correct, and Dutch speakers had uniformly high accuracy. The results suggest that these learning differences were caused more by perceptual interference than by category assimilation; Sinhala and German speakers both have a single native-language phoneme that is similar to English /w/ and /v/, but the auditory sensitivities of Sinhala speakers make it harder for them to discern the acoustic cues that are critical to /w/-/v/ categorization.
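
The perceptual maps mentioned above are typically obtained by multidimensional scaling of a dissimilarity matrix. The Python sketch below shows the general idea with scikit-learn and an invented 4 x 4 dissimilarity matrix; it is not the scaling solution reported in the paper.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical perceptual dissimilarities among four stimuli (/w/, /v/, and two
# intermediate tokens), e.g., derived from pairwise discrimination or ratings.
labels = ["w", "w-v_1", "w-v_2", "v"]
dissimilarity = np.array([
    [0.0, 0.3, 0.6, 0.9],
    [0.3, 0.0, 0.4, 0.7],
    [0.6, 0.4, 0.0, 0.3],
    [0.9, 0.7, 0.3, 0.0],
])

# Two-dimensional MDS solution: a "perceptual map" of the stimuli.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)
for label, (x, y) in zip(labels, coords):
    print(f"{label}: ({x:.2f}, {y:.2f})")
```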


Subject(s)
Learning , Multilingualism , Phonetics , Speech Acoustics , Speech Perception , Adolescent , Adult , Aged , Humans , Middle Aged , Psycholinguistics
11.
J Acoust Soc Am ; 121(6): 3814-26, 2007 Jun.
Article in English | MEDLINE | ID: mdl-17552729

ABSTRACT

This study investigated changes in vowel production and perception among university students from the north of England, as individuals adapt their accent from regional to educated norms. Subjects were tested in their production and perception at regular intervals over a period of 2 years: before beginning university, 3 months later, and at the end of their first and second years at university. At each testing session, subjects were recorded reading a set of experimental words and a short passage. Subjects also completed two perceptual tasks; they chose best exemplar locations for vowels embedded in either northern or southern English accented carrier sentences and identified words in noise spoken with either a northern or southern English accent. The results demonstrated that subjects at a late stage in their language development, early adulthood, changed their spoken accent after attending university. There were no reliable changes in perception over time, but there was evidence for a between-subjects link between production and perception; subjects chose similar vowels to the ones they produced, and subjects who had a more southern English accent were better at identifying southern English speech in noise.


Subject(s)
Phonation , Phonetics , Speech Perception/physiology , Adult , England , Humans , Perception , Students , Universities
12.
J Acoust Soc Am ; 122(5): 2842-54, 2007 Nov.
Article in English | MEDLINE | ID: mdl-18189574

ABSTRACT

This study examined whether individuals with a wide range of first-language vowel systems (Spanish, French, German, and Norwegian) differ fundamentally in the cues that they use when they learn the English vowel system (e.g., formant movement and duration). All subjects: (1) identified natural English vowels in quiet; (2) identified English vowels in noise that had been signal processed to flatten formant movement or equate duration; (3) perceptually mapped best exemplars for first- and second-language synthetic vowels in a five-dimensional vowel space that included formant movement and duration; and (4) rated how natural English vowels assimilated into their L1 vowel categories. The results demonstrated that individuals with larger and more complex first-language vowel systems (German and Norwegian) were more accurate at recognizing English vowels than were individuals with smaller first-language systems (Spanish and French). However, there were no fundamental differences in what these individuals learned. That is, all groups used formant movement and duration to recognize English vowels, and learned new aspects of the English vowel system rather than simply assimilating vowels into existing first-language categories. The results suggest that there is a surprising degree of uniformity in the ways that individuals with different language backgrounds perceive second language vowels.


Subject(s)
Language , Learning , Phonetics , Speech Acoustics , Speech Perception , Adult , Humans , Middle Aged , Noise
13.
J Acoust Soc Am ; 120(6): 3998-4006, 2006 Dec.
Article in English | MEDLINE | ID: mdl-17225426

ABSTRACT

Previous work has demonstrated that normal-hearing individuals use fine-grained phonetic variation, such as formant movement and duration, when recognizing English vowels. The present study investigated whether these cues are used by adult postlingually deafened cochlear implant users, and normal-hearing individuals listening to noise-vocoder simulations of cochlear implant processing. In Experiment 1, subjects gave forced-choice identification judgments for recordings of vowels that were signal processed to remove formant movement and/or equate vowel duration. In Experiment 2, a goodness-optimization procedure was used to create perceptual vowel space maps (i.e., best exemplars within a vowel quadrilateral) that included F1, F2, formant movement, and duration. The results demonstrated that both cochlear implant users and normal-hearing individuals use formant movement and duration cues when recognizing English vowels. Moreover, both listener groups used these cues to the same extent, suggesting that postlingually deafened cochlear implant users have category representations for vowels that are similar to those of normal-hearing individuals.
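
Noise-vocoder simulations of the kind referred to above follow a standard recipe: band-pass filter the speech, extract each band's amplitude envelope, and use the envelopes to modulate band-limited noise. The Python sketch below implements that generic recipe; the channel count, frequency range, and filter settings are illustrative assumptions rather than the study's exact parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(signal, fs, n_channels=8, f_lo=100.0, f_hi=7000.0):
    """Simple noise-excited vocoder: band-pass filter the input, extract each
    band's amplitude envelope, and use it to modulate band-limited noise."""
    rng = np.random.default_rng(0)
    # Logarithmically spaced channel edges between f_lo and f_hi.
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    # Low-pass filter for envelope smoothing (~160 Hz cutoff is a common choice).
    env_sos = butter(2, 160.0, btype="low", fs=fs, output="sos")
    output = np.zeros_like(signal)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, signal)
        envelope = sosfiltfilt(env_sos, np.abs(band))        # rectify and smooth
        noise = sosfiltfilt(band_sos, rng.normal(size=signal.size))
        output += np.clip(envelope, 0.0, None) * noise       # modulate the noise
    return output

# Illustrative use on a synthetic vowel-like signal (a study would use recordings).
fs = 16000
t = np.arange(fs) / fs
vowel_like = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)
vocoded = noise_vocode(vowel_like, fs)
```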


Subject(s)
Cochlear Implants , Noise , Phonation , Phonetics , Recognition, Psychology , Speech Perception , Adult , Aged , Deafness/therapy , Female , Humans , Male , Middle Aged , Time Factors
14.
J Acoust Soc Am ; 115(1): 352-61, 2004 Jan.
Article in English | MEDLINE | ID: mdl-14759027

ABSTRACT

Two experiments investigated whether listeners change their vowel categorization decisions to adjust to different accents of British English. Listeners from different regions of England gave goodness ratings on synthesized vowels embedded in natural carrier sentences that were spoken with either a northern or southern English accent. A computer minimization algorithm adjusted F1, F2, F3, and duration on successive trials according to listeners' goodness ratings, until the best exemplar of each vowel was found. The results demonstrated that most listeners adjusted their vowel categorization decisions based on the accent of the carrier sentence. The patterns of perceptual normalization were affected by individual differences in language background (e.g., whether the individuals grew up in the north or south of England), and were linked to the changes in production that speakers typically make due to sociolinguistic factors when living in multidialectal environments.
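
The adaptive best-exemplar procedure can be sketched with a derivative-free optimizer over F1, F2, F3, and duration. In the Python snippet below the listener is replaced by an invented goodness function peaking at an arbitrary target vowel; in the actual experiment the ratings come from participants, and the specific minimization algorithm used in the study is not assumed here.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in for the listener: rates how good a synthesized vowel sounds given its
# F1, F2, F3 (Hz) and duration (ms). Here it is a synthetic peak around a target.
TARGET = np.array([380.0, 2500.0, 3100.0, 120.0])   # hypothetical best exemplar
SCALE = np.array([150.0, 500.0, 600.0, 60.0])

def goodness_rating(params):
    return 7.0 * np.exp(-0.5 * np.sum(((params - TARGET) / SCALE) ** 2))

# Search the vowel space for the parameters that maximize the rating;
# Nelder-Mead is a reasonable derivative-free choice for rating-driven search.
start = np.array([450.0, 1800.0, 2800.0, 180.0])     # initial synthetic vowel
result = minimize(lambda p: -goodness_rating(p), start, method="Nelder-Mead")
print("best exemplar (F1, F2, F3, duration):", np.round(result.x, 1))
```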


Subject(s)
Attention , Language , Phonetics , Social Environment , Speech Acoustics , Speech Perception , Adult , England , Female , Humans , Individuality , Male , Sound Spectrography , Speech Discrimination Tests