Results 1 - 20 of 34
1.
J Acoust Soc Am; 148(1): 253, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32752786

ABSTRACT

The present study investigated how single-talker and babble maskers affect auditory and lexical processing during native (L1) and non-native (L2) speech recognition. Electroencephalogram (EEG) recordings were made while L1 and L2 (Korean) English speakers listened to sentences in the presence of single-talker and babble maskers that were colocated or spatially separated from the target. The predictability of the sentences was manipulated to measure lexical-semantic processing (N400), and selective auditory processing of the target was assessed using neural tracking measures. The results demonstrate that intelligible single-talker maskers cause listeners to attend more to the semantic content of the targets (i.e., greater context-related N400 changes) than when targets are in babble, and that listeners track the acoustics of the target less accurately with single-talker maskers. L1 and L2 listeners both modulated their processing in this way, although L2 listeners had more difficulty with the materials overall (i.e., lower behavioral accuracy, less context-related N400 variation, more listening effort). The results demonstrate that auditory and lexical processing can be simultaneously assessed within a naturalistic speech listening task, and listeners can adjust lexical processing to more strongly track the meaning of a sentence in order to help ignore competing lexical content.
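As a rough illustration of the neural tracking measures mentioned above, the sketch below reconstructs a speech envelope from lagged EEG with ridge regression and scores tracking as the correlation between reconstructed and actual envelopes. It runs on simulated data and is an assumed, generic version of this class of method, not the authors' actual pipeline.

```python
# Minimal sketch of an envelope-tracking measure (simulated data only).
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels, max_lag = 2000, 8, 16       # samples, EEG channels, lags

# Toy "speech envelope": smoothed rectified noise.
envelope = np.convolve(np.abs(rng.standard_normal(n_samples)),
                       np.ones(8) / 8, mode="same")

# Toy EEG: delayed copies of the envelope buried in noise.
eeg = np.stack([np.roll(envelope, k + 1) for k in range(n_channels)], axis=1)
eeg += 2.0 * rng.standard_normal(eeg.shape)

# Lagged design matrix and a ridge-regression "decoder" (stimulus reconstruction).
X = np.hstack([np.roll(eeg, lag, axis=0) for lag in range(max_lag)])
X[:max_lag] = 0.0                                   # drop wrap-around samples
lam = 1e2
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)

reconstruction = X @ w
tracking_accuracy = np.corrcoef(reconstruction, envelope)[0, 1]
print(f"envelope tracking accuracy (r) = {tracking_accuracy:.2f}")
```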


Subjects
Speech Perception, Speech, Electroencephalography, Evoked Potentials, Female, Humans, Language, Male, Perceptual Masking
2.
Behav Res Methods; 52(2): 561-571, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31012064

ABSTRACT

Research into non-native (L2) speech perception has increased the need for specialized experimental materials. The Non-Native Speech Recognition (NNSR) sentences are a new large-scale set of speech recognition materials for research with L2 speakers of English at CEFR level B1 (North, Ortega, & Sheehan, 2010) and above. The set comprises 439 triplets of sentences in three related conditions: semantically predictable, neutral, and anomalous. The sentences were created by combining a strongly or weakly contextually constrained sentence frame with a congruent or anomalous final keyword, and they were matched on a number of factors during development, to maintain consistency across conditions. This article describes the development process of the NNSR sentences, along with results of speech-in-noise intelligibility testing for L2 and native English speakers. Suggestions for the sentences' application in a range of investigations and experimental designs are also discussed.


Subjects
Speech Perception, Adolescent, Female, Humans, Language, Male, Speech, Young Adult
3.
J Acoust Soc Am; 139(4): 1799, 2016 Apr.
Article in English | MEDLINE | ID: mdl-27106328

ABSTRACT

Cross-language differences in speech perception have traditionally been linked to phonological categories, but it has become increasingly clear that language experience has effects beginning at early stages of perception, which blurs the accepted distinctions between general and speech-specific processing. The present experiments explored this distinction by playing stimuli to English and Japanese speakers that manipulated the acoustic form of English /r/ and /l/, in order to determine how acoustically natural and phonologically identifiable a stimulus must be for cross-language discrimination differences to emerge. Discrimination differences were found for stimuli that did not sound subjectively like speech or /r/ and /l/, but overall they were strongly linked to phonological categorization. The results thus support the view that phonological categories are an important source of cross-language differences, but also show that these differences can extend to stimuli that do not clearly sound like speech.


Subjects
Discrimination (Psychology), Phonetics, Speech Acoustics, Speech Perception, Acoustic Stimulation, Acoustics, Adolescent, Adult, Speech Audiometry, Humans, Middle Aged, Sound Spectrography, Young Adult
4.
J Neurophysiol; 112(4): 792-801, 2014 Aug 15.
Article in English | MEDLINE | ID: mdl-24805076

ABSTRACT

Research on mammals predicts that the anterior striatum is a central component of human motor learning. However, because vocalizations in most mammals are innate, much of the neurobiology of human vocal learning has been inferred from studies on songbirds. Essential for song learning is a pathway, the homolog of mammalian cortical-basal ganglia "loops," which includes the avian striatum. The present functional magnetic resonance imaging (fMRI) study investigated adult human vocal learning, a skill that persists throughout life, albeit imperfectly given that late-acquired languages are spoken with an accent. Monolingual adult participants were scanned while repeating novel non-native words. After training on the pronunciation of half the words for 1 wk, participants underwent a second scan. During scanning there was no external feedback on performance. Activity declined sharply in left and right anterior striatum, both within and between scanning sessions, and this change was independent of training and performance. This indicates that adult speakers rapidly adapt to the novel articulatory movements, possibly by using motor sequences from their native speech to approximate those required for the novel speech sounds. Improved accuracy correlated only with activity in motor-sensory perisylvian cortex. We propose that future studies on vocal learning, using different behavioral and pharmacological manipulations, will provide insights into adult striatal plasticity and its potential for modification in both educational and clinical contexts.


Subjects
Corpus Striatum/physiology, Language, Learning/physiology, Adult, Brain Mapping, Physiological Feedback, Female, Humans, Linguistics, Magnetic Resonance Imaging, Male
5.
Hum Brain Mapp; 35(5): 1930-43, 2014 May.
Article in English | MEDLINE | ID: mdl-23723184

ABSTRACT

Modern neuroimaging techniques have advanced our understanding of the distributed anatomy of speech production, beyond that inferred from clinico-pathological correlations. However, much remains unknown about functional interactions between anatomically distinct components of this speech production network. One reason for this is the need to separate spatially overlapping neural signals supporting diverse cortical functions. We took three separate human functional magnetic resonance imaging (fMRI) datasets (two speech production, one "rest"). In each we decomposed the neural activity within the left posterior perisylvian speech region into discrete components. This decomposition robustly identified two overlapping spatio-temporal components, one centered on the left posterior superior temporal gyrus (pSTG), the other on the adjacent ventral anterior parietal lobe (vAPL). The pSTG was functionally connected with bilateral superior temporal and inferior frontal regions, whereas the vAPL was connected with other parietal regions, lateral and medial. Surprisingly, the components displayed spatial anti-correlation, in which the negative functional connectivity of each component overlapped with the other component's positive functional connectivity, suggesting that these two systems operate separately and possibly in competition. The speech tasks reliably modulated activity in both pSTG and vAPL suggesting they are involved in speech production, but their activity patterns dissociate in response to different speech demands. These components were also identified in subjects at "rest" and not engaged in overt speech production. These findings indicate that the neural architecture underlying speech production involves parallel distinct components that converge within posterior peri-sylvian cortex, explaining, in part, why this region is so important for speech production.
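Component decompositions of the kind described above are often implemented with spatial independent component analysis. The toy sketch below (simulated voxel time series, scikit-learn's FastICA) shows the general idea of separating two overlapping spatial components with distinct time courses; the abstract does not specify the authors' exact method, so treat this only as an assumed illustration.

```python
# Illustrative spatial-ICA decomposition of simulated fMRI-like data.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n_timepoints, n_voxels = 200, 500

# Two overlapping "spatial maps" with distinct time courses, plus noise.
map_a = np.zeros(n_voxels)
map_a[:300] = 1.0                      # e.g., a pSTG-like region
map_b = np.zeros(n_voxels)
map_b[200:] = 1.0                      # e.g., an adjacent vAPL-like region
t = np.arange(n_timepoints)
ts_a, ts_b = np.sin(t / 7.0), np.sign(np.sin(t / 23.0))
data = np.outer(ts_a, map_a) + np.outer(ts_b, map_b)
data = data + 0.5 * rng.standard_normal(data.shape)   # timepoints x voxels

# Spatial ICA: treat voxels as samples so the components are spatial maps.
ica = FastICA(n_components=2, random_state=0)
spatial_maps = ica.fit_transform(data.T)               # voxels x components
time_courses = ica.mixing_                             # timepoints x components

# How distinct are the two recovered time courses?
print(np.corrcoef(time_courses.T).round(2))
```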


Subjects
Brain Mapping, Parietal Lobe/physiology, Speech/physiology, Acoustic Stimulation, Adult, Female, Functional Laterality, Humans, Computer-Assisted Image Processing, Language, Magnetic Resonance Imaging, Male, Middle Aged, Oxygen/blood, Parietal Lobe/blood supply, Young Adult
6.
Brain; 136(Pt 6): 1901-12, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23715097

ABSTRACT

In this study, we used magnetoencephalography and a mismatch paradigm to investigate speech processing in stroke patients with auditory comprehension deficits and age-matched control subjects. We probed connectivity within and between the two temporal lobes in response to phonemic (different word) and acoustic (same word) oddballs using dynamic causal modelling. We found stronger modulation of self-connections as a function of phonemic differences for control subjects versus aphasics in left primary auditory cortex and bilateral superior temporal gyrus. The patients showed stronger modulation of connections from right primary auditory cortex to right superior temporal gyrus (feed-forward) and from left primary auditory cortex to right primary auditory cortex (interhemispheric). This differential connectivity can be explained on the basis of a predictive coding theory which suggests increased prediction error and decreased sensitivity to phonemic boundaries in the aphasics' speech network in both hemispheres. Within the aphasics, we also found behavioural correlates with connection strengths: a negative correlation between phonemic perception and an inter-hemispheric connection (left superior temporal gyrus to right superior temporal gyrus), and positive correlation between semantic performance and a feedback connection (right superior temporal gyrus to right primary auditory cortex). Our results suggest that aphasics with impaired speech comprehension have less veridical speech representations in both temporal lobes, and rely more on the right hemisphere auditory regions, particularly right superior temporal gyrus, for processing speech. Despite this presumed compensatory shift in network connectivity, the patients remain significantly impaired.


Subjects
Acoustic Stimulation/methods, Aphasia/physiopathology, Auditory Cortex/physiology, Auditory Perception/physiology, Functional Laterality/physiology, Stroke/physiopathology, Adult, Aged, Aged 80 and over, Aphasia/epidemiology, Female, Humans, Magnetoencephalography/methods, Male, Middle Aged, Stroke/epidemiology
7.
eNeuro; 11(8), 2024 Aug.
Article in English | MEDLINE | ID: mdl-39095091

ABSTRACT

Adults heard recordings of two spatially separated speakers reading newspaper and magazine articles. They were asked to listen to one of them and ignore the other, and EEG was recorded to assess their neural processing. Machine learning extracted neural sources that tracked the target and distractor speakers at three levels: the acoustic envelope of speech (delta- and theta-band modulations), lexical frequency for individual words, and the contextual predictability of individual words estimated by GPT-4 and earlier lexical models. To provide a broader view of speech perception, half of the subjects completed a simultaneous visual task, and the listeners included both native and non-native English speakers. Distinct neural components were extracted for these levels of auditory and lexical processing, demonstrating that native English speakers had greater target-distractor separation compared with non-native English speakers on most measures, and that lexical processing was reduced by the visual task. Moreover, there was a novel interaction of lexical predictability and frequency with auditory processing; acoustic tracking was stronger for lexically harder words, suggesting that people listened harder to the acoustics when needed for lexical selection. This demonstrates that speech perception is not simply a feedforward process from acoustic processing to the lexicon. Rather, the adaptable context-sensitive processing long known to occur at a lexical level has broader consequences for perception, coupling with the acoustic tracking of individual speakers in noise.
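The contextual predictability estimates described above can be illustrated with word-level surprisal from a causal language model. The sketch below uses the small open GPT-2 model from Hugging Face transformers as a stand-in (GPT-4 itself is not openly available) and computes -log p(token | left context) per token; it is an assumed, simplified analogue of the measure, not the study's actual implementation.

```python
# Word-level surprisal from a small open causal LM (GPT-2 as a stand-in).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

sentence = "The children walked to school in the morning"
enc = tok(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits                    # (1, seq_len, vocab_size)

# Surprisal of each token given its left context: -log p(token | context).
log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
targets = enc.input_ids[:, 1:]
surprisal = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)

for token_id, s in zip(targets[0], surprisal[0]):
    print(f"{tok.decode(int(token_id)):>12s}  {s.item():5.2f} nats")
```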


Subjects
Electroencephalography, Noise, Speech Perception, Humans, Speech Perception/physiology, Female, Male, Adult, Young Adult, Electroencephalography/methods, Speech Acoustics, Language, Machine Learning
8.
J Speech Lang Hear Res; 66(9): 3399-3412, 2023 Sep 13.
Article in English | MEDLINE | ID: mdl-37672785

ABSTRACT

PURPOSE: The aim of this study was to develop and validate a large Korean sentence set with varying degrees of semantic predictability that can be used for testing speech recognition and lexical processing. METHOD: Sentences differing in the degree of final-word predictability (predictable, neutral, and anomalous) were created with words selected to be suitable for both native and nonnative speakers of Korean. Semantic predictability was evaluated through a series of cloze tests in which native (n = 56) and nonnative (n = 19) speakers of Korean participated. This study also used a computer language model to evaluate final-word predictabilities; this is a novel approach that the current study adopted to reduce human effort in validating a large number of sentences, which produced results comparable to those of the cloze tests. In a speech recognition task, the sentences were presented to native (n = 23) and nonnative (n = 21) speakers of Korean in speech-shaped noise at two levels of noise. RESULTS: The results of the speech-in-noise experiment demonstrated that the intelligibility of the sentences was similar to that of related English corpora. That is, intelligibility was significantly different depending on the semantic condition, and the sentences had the right degree of difficulty for assessing intelligibility differences depending on noise levels and language experience. CONCLUSIONS: This corpus (1,021 sentences in total) adds to the target languages available in speech research and will allow researchers to investigate a range of issues in speech perception in Korean. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.24045582.
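For readers unfamiliar with cloze testing, the cloze probability validated here is simply the proportion of respondents who complete a sentence frame with the target final word. A minimal, hypothetical scoring helper (illustrative responses only, not the study's data):

```python
# Hypothetical cloze-probability scoring for one sentence frame.
from collections import Counter

def cloze_probability(responses: list[str], target: str) -> float:
    """Proportion of cloze responses that match the target final word."""
    counts = Counter(r.strip().lower() for r in responses)
    return counts[target.lower()] / len(responses)

# Five made-up completions of one sentence frame.
responses = ["bread", "bread", "rice", "bread", "noodles"]
print(cloze_probability(responses, "bread"))   # 0.6
```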


Subjects
Semantics, Speech Perception, Humans, Speech, Language, Republic of Korea
9.
JASA Express Lett; 3(1): 015202, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36725541

ABSTRACT

Japanese adults and Spanish-Catalan children received auditory phonetic training for English vowels using a novel paradigm, a version of the common children's card game Concentration. Individuals played a computer-based game in which they turned over pairs of cards to match spoken words, drawn from sets of vowel minimal pairs. The training was effective for adults, improving vowel recognition in a game that did not explicitly require identification. Children likewise improved over time on the memory card game, but not on the generalisation task. This gamified training method can serve as a platform for examining development and perceptual learning.
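The training paradigm amounts to a matching game over recordings of vowel minimal pairs. The toy sketch below shows the core deck-building and match-checking logic under an assumed word list; it is a schematic of the game idea, not the authors' software.

```python
# Toy Concentration-style deck and match check for minimal-pair training.
import random

# Hypothetical vowel minimal pairs; each spoken word appears on two cards.
minimal_pairs = [("ship", "sheep"), ("pen", "pan"), ("full", "fool")]
words = [w for pair in minimal_pairs for w in pair]
deck = words * 2
random.shuffle(deck)

def play_turn(deck: list[str], first: int, second: int) -> bool:
    """'Play' the recordings at two card positions and report whether they match."""
    return first != second and deck[first] == deck[second]

# Example turn: the player flips the cards at positions 0 and 5.
print(deck[0], deck[5], "->", "match" if play_turn(deck, 0, 5) else "no match")
```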


Subjects
Speech Perception, Humans, Adult, Child, Language, Learning, Phonetics, Generalization (Psychology)
10.
J Acoust Soc Am; 130(3): 1653-62, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21895102

ABSTRACT

Previous work has shown that the intelligibility of speech in noise is degraded if the speaker and listener differ in accent, in particular when there is a disparity between native (L1) and nonnative (L2) accents. This study investigated how this talker-listener interaction is modulated by L2 experience and accent similarity. L1 Southern British English, L1 French listeners with varying L2 English experience, and French-English bilinguals were tested on the recognition of English sentences mixed in speech-shaped noise that was spoken with a range of accents (French, Korean, Northern Irish, and Southern British English). The results demonstrated clear interactions of accent and experience, with the least experienced French speakers being most accurate with French-accented English, but more experienced listeners being most accurate with L1 Southern British English accents. An acoustic similarity metric was applied to the speech productions of the talkers and the listeners, and significant correlations were obtained between accent similarity and sentence intelligibility for pairs of individuals. Overall, the results suggest that L2 experience affects talker-listener accent interactions, altering both the intelligibility of different accents and the selectivity of accent processing.


Subjects
Multilingualism, Noise/adverse effects, Perceptual Masking, Recognition (Psychology), Speech Acoustics, Speech Intelligibility, Speech Perception, Acoustic Stimulation, Adolescent, Adult, Speech Audiometry, Female, Humans, Male, Middle Aged, Young Adult
11.
J Acoust Soc Am; 130(5): EL297-303, 2011 Nov.
Article in English | MEDLINE | ID: mdl-22088031

ABSTRACT

This study examined the perceptual specialization for native-language speech sounds, by comparing native Hindi and English speakers in their perception of a graded set of English /w/-/v/ stimuli that varied in similarity to natural speech. The results demonstrated that language experience does not affect general auditory processes for these types of sounds; there were strong cross-language differences for speech stimuli, and none for stimuli that were nonspeech. However, the cross-language differences extended into a gray area of speech-like stimuli that were difficult to classify, suggesting that the specialization occurred in phonetic processing prior to categorization.


Subjects
Multilingualism, Phonetics, Speech Acoustics, Speech Intelligibility, Speech Perception, Acoustic Stimulation, Adolescent, Adult, Speech Audiometry, Cues (Psychology), Humans, Young Adult
12.
J Cogn Neurosci; 22(6): 1319-32, 2010 Jun.
Article in English | MEDLINE | ID: mdl-19445609

ABSTRACT

Foreign-language learning is a prime example of a task that entails perceptual learning. The correct comprehension of foreign-language speech requires the correct recognition of speech sounds. The most difficult speech-sound contrasts for foreign-language learners often are the ones that have multiple phonetic cues, especially if the cues are weighted differently in the foreign and native languages. The present study aimed to determine whether non-native-like cue weighting could be changed by using phonetic training. Before the training, we compared the use of spectral and duration cues of English /i/ and /I/ vowels (e.g., beat vs. bit) between native Finnish and English speakers. In Finnish, duration is used phonologically to separate short and long phonemes, and therefore Finns were expected to weight duration cues more than native English speakers. The cross-linguistic differences and training effects were investigated with behavioral and electrophysiological methods, in particular by measuring the MMN brain response that has been used to probe long-term memory representations for speech sounds. The behavioral results suggested that before the training, the Finns indeed relied more on duration in vowel recognition than the native English speakers did. After the training, however, the Finns were able to use the spectral cues of the vowels more reliably than before. Accordingly, the MMN brain responses revealed that the training had enhanced the Finns' ability to preattentively process the spectral cues of the English vowels. This suggests that as a result of training, plastic changes had occurred in the weighting of phonetic cues at early processing stages in the cortex.
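Cue weighting of this kind is often estimated by regressing listeners' categorical responses on the stimulus cues. The sketch below (simulated listener data, scikit-learn) fits a logistic regression on standardized spectral and duration cues and reads relative weights off the coefficients; this is a generic illustration under assumed data, not necessarily the authors' analysis.

```python
# Estimating relative spectral vs. duration cue weights from simulated responses.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_trials = 400
spectral = rng.standard_normal(n_trials)     # standardized spectral cue (e.g., F2-F1)
duration = rng.standard_normal(n_trials)     # standardized vowel duration

# Simulated listener who relies mainly on duration (a "Finnish-like" pattern).
p_i = 1.0 / (1.0 + np.exp(-(0.5 * spectral + 2.0 * duration)))
responses = rng.random(n_trials) < p_i       # True = "beat" (/i/) response

X = np.column_stack([spectral, duration])
model = LogisticRegression().fit(X, responses)
w_spec, w_dur = model.coef_[0]
print(f"spectral weight = {w_spec:.2f}, duration weight = {w_dur:.2f}")
print(f"relative duration reliance = {abs(w_dur) / (abs(w_spec) + abs(w_dur)):.2f}")
```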


Subjects
Cerebral Cortex/physiology, Language, Learning/physiology, Speech Perception/physiology, Speech/physiology, Acoustic Stimulation, Adolescent, Adult, Analysis of Variance, Brain Mapping, Cues (Psychology), Electroencephalography, Female, Humans, Male, Multilingualism, Computer-Assisted Signal Processing
13.
J Acoust Soc Am; 128(3): 1357-65, 2010 Sep.
Article in English | MEDLINE | ID: mdl-20815470

ABSTRACT

Previous work has shown that accents affect speech recognition accuracy in noise, with intelligibility being modulated by the similarity between the talkers' and listeners' accents, particularly in the case where they have different L1s. The present study examined the contribution of prosody to recognizing native (L1) and non-native (L2) speech in noise, and how this is affected by the listener's L2 experience. A group of monolingual English listeners and two groups of French listeners with varying L2 English experience were presented with English sentences produced by L1 and L2 (French) speakers. The stimuli were digitally processed to exchange the pitch and segment durations between recordings of the same sentences produced by different speakers (e.g., imposing a French-accented prosody onto recordings made from English speakers). The results revealed that English listeners were more accurate at recognizing L1 English with English prosody, the French inexperienced listeners were more accurate at recognizing French-accented speech with French prosody, and the French experienced listeners varied in the cues that they used depending on the noise level, showing more flexibility of processing. The use of prosodic cues in noise thus appears to be modulated by language experience and varies according to listening context.


Subjects
Multilingualism, Noise, Perceptual Masking, Recognition (Psychology), Speech Acoustics, Speech Intelligibility, Speech Perception, Acoustic Stimulation, Adolescent, Adult, Speech Audiometry, Cues (Psychology), Female, Humans, Male, Pitch Perception, Time Factors, Young Adult
14.
J Am Pharm Assoc (2003); 50(3): 379-83, 2010.
Article in English | MEDLINE | ID: mdl-20452912

ABSTRACT

OBJECTIVE: To evaluate expanded pharmacy services designed to improve medication therapy management for hospice care in rural Minnesota. METHODS: Deidentified data were obtained from records kept by the study pharmacy as part of its normal operations. In-depth interviews were conducted with key personnel from the study pharmacy and from each hospice care organization to identify overall themes based on their experiences. Descriptive analysis was conducted to summarize the findings. Information gleaned from the interviews was documented and themes identified. These themes were used to provide insight for those who may wish to adopt this program for their patient populations. RESULTS: At initial enrollment into hospice care, 85% of the patients received at least one recommendation related to their medication therapy. During patients' enrollment in hospice care, the most common types of problems addressed through pharmacist consults were symptom control (65%), followed by dosage form (15%), medication management (12%), and adverse effect control (8%). CONCLUSION: Implementation and evaluation of this program showed that the structures and processes used were sound and could be transferred to other patient populations. Outcomes from the program were favorable from practitioner, organization, and patient care perspectives.


Subjects
Cooperative Behavior, Hospice Care/organization & administration, Medication Therapy Management/organization & administration, Rural Health Services/organization & administration, Drug Utilization, Humans, Minnesota, Patient Care Team/organization & administration
15.
Neuroimage; 46(1): 226-40, 2009 May 15.
Article in English | MEDLINE | ID: mdl-19457395

ABSTRACT

The present study used magnetoencephalography (MEG) to examine perceptual learning of American English /r/ and /l/ categories by Japanese adults who had limited English exposure. A training software program was developed based on the principles of infant phonetic learning, featuring systematic acoustic exaggeration, multi-talker variability, visible articulation, and adaptive listening. The program was designed to help Japanese listeners utilize an acoustic dimension relevant for phonemic categorization of /r-l/ in English. Although training did not produce a native-like phonetic boundary along the /r-l/ synthetic continuum in the second language learners, success was seen in highly significant identification improvement over twelve training sessions and transfer of learning to novel stimuli. Consistent with behavioral results, pre-post MEG measures showed not only enhanced neural sensitivity to the /r-l/ distinction in the left-hemisphere mismatch field (MMF) response but also bilateral decreases in equivalent current dipole (ECD) cluster and duration measures for stimulus coding in the inferior parietal region. The learning-induced increases in neural sensitivity and efficiency were also found in distributed source analysis using Minimum Current Estimates (MCE). Furthermore, the pre-post changes exhibited significant brain-behavior correlations between speech discrimination scores and MMF amplitudes as well as between the behavioral scores and ECD measures of neural efficiency. Together, the data provide corroborating evidence that substantial neural plasticity for second-language learning in adulthood can be induced with adaptive and enriched linguistic exposure. Like the MMF, the ECD cluster and duration measures are sensitive neural markers of phonetic learning.


Subjects
Brain/physiology, Learning/physiology, Magnetoencephalography, Neuronal Plasticity/physiology, Phonetics, Adult, Female, Humans, Male, Computer-Assisted Signal Processing
16.
Cortex; 45(4): 517-26, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19231480

ABSTRACT

There have been many functional imaging studies that have investigated the neural correlates of speech perception by contrasting neural responses to speech and "speech-like" but unintelligible control stimuli. A potential drawback of this approach is that intelligibility is necessarily conflated with a change in the acoustic parameters of the stimuli. The approach we have adopted is to take advantage of the mismatch response elicited by an oddball paradigm to probe neural responses in temporal lobe structures to a parametrically varied set of deviants in order to identify brain regions involved in vowel processing. Thirteen normal subjects were scanned using a functional magnetic resonance imaging (fMRI) paradigm while they listened to continuous trains of auditory stimuli. Three classes of stimuli were used: 'vowel deviants' and two classes of control stimuli: one acoustically similar ('single formants') and the other distant (tones). The acoustic differences between the standard and deviants in both the vowel and single-formant classes were designed to match each other closely. The results revealed an effect of vowel deviance in the left anterior superior temporal gyrus (aSTG). This was most significant when comparing all vowel deviants to standards, irrespective of their psychoacoustic or physical deviance. We also identified a correlation between perceptual discrimination and deviant-related activity in the dominant superior temporal sulcus (STS), although this effect was not stimulus specific. The responses to vowel deviants were in brain regions implicated in the processing of intelligible or meaningful speech, part of the so-called auditory "what" processing stream. Neural components of this pathway would be expected to respond to sudden, perhaps unexpected changes in speech signal that result in a change to narrative meaning.


Subjects
Brain Mapping, Discrimination (Psychology)/physiology, Phonetics, Speech Perception/physiology, Temporal Lobe/physiology, Acoustic Stimulation, Adult, Female, Humans, Magnetic Resonance Imaging, Male, Mental Processes/physiology, Neural Pathways/physiology, Reference Values, Verbal Behavior/physiology
17.
J Acoust Soc Am; 126(2): 866-77, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19640051

ABSTRACT

This study investigated whether individuals with small and large native-language (L1) vowel inventories learn second-language (L2) vowel systems differently, in order to better understand how L1 categories interfere with new vowel learning. Listener groups whose L1 was Spanish (5 vowels) or German (18 vowels) were given five sessions of high-variability auditory training for English vowels, after having been matched on their pre-test English vowel identification accuracy. Listeners were tested before and after training in terms of their identification accuracy for English vowels, the assimilation of these vowels into their L1 vowel categories, and their best exemplars for English (i.e., perceptual vowel space map). The results demonstrated that Germans improved more than Spanish speakers, despite the Germans' more crowded L1 vowel space. A subsequent experiment demonstrated that Spanish listeners were able to improve as much as the German group after an additional ten sessions of training, and that both groups were able to retain this learning. The findings suggest that a larger vowel category inventory may facilitate new learning, and support a hypothesis that auditory training improves identification by making the application of existing categories to L2 phonemes more automatic and efficient.


Subjects
Language, Learning, Multilingualism, Phonetics, Adult, Analysis of Variance, Cluster Analysis, Humans, Language Tests, Recognition (Psychology), Retention (Psychology), Speech, Speech Perception, Time Factors, Young Adult
18.
J Acoust Soc Am; 125(1): 469-79, 2009 Jan.
Article in English | MEDLINE | ID: mdl-19173432

ABSTRACT

Native speakers of Japanese often have difficulty identifying English /r/ and /l/, and it has been thought that second-language (L2) learning difficulties like this are caused by how L2 phonemes are assimilated into one's native phonological system. This study took an individual difference approach to examining this relationship by testing the category assimilation of Japanese speakers with a wide range of English /r/-/l/ identification abilities. All Japanese subjects were assessed in terms of (1) their accuracy in identifying English /r/ and /l/, (2) their assimilation of /r/ and /l/ into their Japanese flap category, (3) their production of /r/ and /l/, and (4) their best-exemplar locations for /r/, /l/, and Japanese flap in a five-dimensional set of synthetic stimuli (F1, F2, F3, closure duration, and transition duration). The results demonstrated that Japanese speakers assimilate /l/ into their flap category more strongly than they assimilate /r/. However, there was little evidence that category assimilation was predictive of English /r/-/l/ perception and production. Japanese speakers had three distinct best exemplars for /r/, /l/, and flap, and only their representation of F3 in /r/ and /l/ was predictive of identification ability.


Subjects
Asian People, Phonetics, Speech, Adult, Female, Humans, Male, Middle Aged, Multilingualism, Speech Perception, Speech Production Measurement
19.
J Speech Lang Hear Res; 62(7): 2213-2226, 2019 Jul 15.
Article in English | MEDLINE | ID: mdl-31251681

ABSTRACT

Purpose: The intelligibility of an accent strongly depends on the specific talker-listener pairing. To explore the causes of this phenomenon, we investigated the relationship between acoustic-phonetic similarity and accent intelligibility across native (1st language) and nonnative (2nd language) talker-listener pairings. We also used online measures to observe processing differences in quiet. Method: English (n = 16) and Spanish (n = 16) listeners heard Standard Southern British English, Glaswegian English, and Spanish-accented English in a speech recognition task (in quiet and noise) and an electroencephalogram task (quiet only) designed to assess phonological and lexical processing. Stimuli were drawn from the nonnative speech recognition sentences (Stringer & Iverson, 2019). The acoustic-phonetic similarity between listeners' accents and the 3 accents was calculated using the ACCDIST metric (Huckvale, 2004, 2007). Results: Talker-listener pairing had a clear influence on accent intelligibility. This was linked to the phonetic similarity of the talkers and the listeners, but similarity could not account for all findings. The influence of talker-listener pairing on lexical processing was less clear; the N400 effect was mostly robust to accent mismatches, with some relationship to intelligibility. Conclusion: These findings suggest that the influence of talker-listener pairing on intelligibility may be partly attributable to accent similarity in addition to accent familiarity. Online measures also show that differences in talker-listener accents can disrupt processing in quiet even where accents are highly intelligible.
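The ACCDIST metric referred to above compares accents via within-speaker tables of distances between vowel categories, which factors out overall speaker differences before comparing speakers. The sketch below is a much-simplified, assumed version of that idea using made-up formant means: build each speaker's table of inter-vowel distances and correlate the two tables.

```python
# Simplified ACCDIST-style accent similarity on made-up vowel formant means.
from itertools import combinations
import numpy as np

def distance_table(vowel_means: dict[str, np.ndarray]) -> np.ndarray:
    """Distances between every pair of a speaker's own vowel categories."""
    labels = sorted(vowel_means)
    return np.array([np.linalg.norm(vowel_means[a] - vowel_means[b])
                     for a, b in combinations(labels, 2)])

# Made-up (F1, F2) means in Hz for two speakers.
speaker_1 = {"i": np.array([300.0, 2300.0]), "e": np.array([500.0, 1900.0]),
             "a": np.array([750.0, 1300.0]), "u": np.array([320.0, 900.0])}
speaker_2 = {"i": np.array([320.0, 2250.0]), "e": np.array([520.0, 1850.0]),
             "a": np.array([730.0, 1350.0]), "u": np.array([340.0, 950.0])}

similarity = np.corrcoef(distance_table(speaker_1), distance_table(speaker_2))[0, 1]
print(f"ACCDIST-style accent similarity = {similarity:.3f}")
```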


Subjects
Language, Noise, Speech Intelligibility/physiology, Adult, England, Female, Humans, Male, Phonetics, Semantics, Signal-to-Noise Ratio, Spain, Speech Acoustics, Young Adult
20.
Sci Rep; 9(1): 19592, 2019 Dec 20.
Article in English | MEDLINE | ID: mdl-31862999

ABSTRACT

This study measured infants' neural responses to spectral changes between all pairs of a set of English vowels. In contrast to previous methods that only allow for the assessment of a few phonetic contrasts, we present a new method that allows us to assess changes in spectral sensitivity across the entire vowel space and create two-dimensional perceptual maps of the infants' vowel development. Infants aged four to eleven months were played long series of concatenated vowels, and the neural response to each vowel change was assessed using the Acoustic Change Complex (ACC) from EEG recordings. The results demonstrated that the youngest infants' responses more closely reflected the acoustic differences between the vowel pairs, with greater weight given to first-formant variation. Older infants had less acoustically driven responses that seemed to result from selective increases in sensitivity for phonetically similar vowels. The results suggest that phonetic development may involve a perceptual warping for confusable vowels rather than uniform learning, as well as an overall increasing sensitivity to higher-frequency acoustic information.
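Two-dimensional perceptual maps like those described above can be derived from a matrix of pairwise responses with multidimensional scaling, treating larger change responses as larger perceptual distances. The sketch below (toy numbers, scikit-learn's MDS) illustrates only that mapping step and is not the study's analysis pipeline.

```python
# Multidimensional scaling of a hypothetical pairwise-response matrix into a 2-D map.
import numpy as np
from sklearn.manifold import MDS

vowels = ["i", "e", "a", "o", "u"]
# Hypothetical symmetric matrix of neural change responses for each vowel pair,
# treated as perceptual distances (larger response = more distinct vowels).
responses = np.array([
    [0.0, 1.2, 2.5, 2.8, 3.0],
    [1.2, 0.0, 1.8, 2.2, 2.6],
    [2.5, 1.8, 0.0, 1.5, 2.0],
    [2.8, 2.2, 1.5, 0.0, 1.1],
    [3.0, 2.6, 2.0, 1.1, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(responses)
for v, (x, y) in zip(vowels, coords):
    print(f"{v}: ({x:+.2f}, {y:+.2f})")
```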


Subjects
Phonetics, Speech Acoustics, Speech Perception/physiology, Electroencephalography, Female, Humans, Infant, Language, Learning, Male, Sound Spectrography, Speech Discrimination Tests, Verbal Learning