Results 1 - 20 of 42
1.
J Cogn Neurosci ; 32(6): 1092-1103, 2020 06.
Article in English | MEDLINE | ID: mdl-31933438

ABSTRACT

Successful perception of speech in everyday listening conditions requires effective listening strategies to overcome common acoustic distortions, such as background noise. Convergent evidence from neuroimaging and clinical studies identifies activation within the temporal lobes as key to successful speech perception. However, current neurobiological models disagree on whether the left temporal lobe is sufficient for successful speech perception or whether bilateral processing is required. We addressed this issue by using TMS to selectively disrupt processing in either the left or the right superior temporal gyrus (STG) of healthy participants. Participants repeated keywords from sentences presented in background noise in a speech reception threshold task while receiving online repetitive TMS separately over the left STG, right STG, or vertex, or while receiving no TMS. Results show an equal drop in performance when TMS was applied to either the left or the right STG during the task. A separate group of participants performed a visual discrimination threshold task to control for the confounding side effects of TMS. Results show no effect of TMS on the control task, supporting the notion that the results of Experiment 1 can be attributed to modulation of cortical functioning in STG rather than to side effects associated with online TMS. These results indicate that successful speech perception in everyday listening conditions requires both left and right STG, and thus have ramifications for our understanding of the neural organization of spoken language processing.


Subjects
Functional Laterality/physiology, Speech Perception/physiology, Temporal Lobe/physiology, Transcranial Magnetic Stimulation, Adolescent, Adult, Female, Humans, Male, Noise, Sensory Thresholds/physiology, Speech/physiology, Visual Perception/physiology, Young Adult
2.
J Acoust Soc Am ; 147(5): 3348, 2020 05.
Article in English | MEDLINE | ID: mdl-32486777

ABSTRACT

Listening to degraded speech is associated with decreased intelligibility and increased effort. However, listeners are generally able to adapt to certain types of degradation. While intelligibility of degraded speech is modulated by talker acoustics, it is unclear whether talker acoustics also affect effort and adaptation. Moreover, it has been demonstrated that talker differences are preserved across spectral degradations, but it is not known whether this effect extends to temporal degradations, or which acoustic-phonetic characteristics are responsible. In a listening experiment combined with pupillometry, participants were presented with speech in quiet, as well as with masked, time-compressed, and noise-vocoded speech, produced by 16 Southern British English speakers. Results showed that intelligibility, but not adaptation, was modulated by talker acoustics. Talkers who were more intelligible under noise-vocoding were also more intelligible under masking and time-compression. This effect was linked to acoustic-phonetic profiles with greater vowel space dispersion (VSD) and more energy in mid-range frequencies, as well as slower speaking rate. While pupil dilation indicated increasing effort with decreasing intelligibility, this study also linked reduced effort in quiet to talkers with greater VSD. The results emphasize the relevance of talker acoustics for intelligibility and effort in degraded listening conditions.
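Noise-vocoding, used as a degradation in this and several later entries, can be sketched as follows: the signal is split into frequency bands, each band's amplitude envelope is extracted, and the envelopes modulate band-limited noise. All parameters below (band count, band edges, filter order, the synthetic input) are illustrative assumptions, not the stimulus parameters of these studies.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode(signal, fs, n_bands=4, f_lo=100.0, f_hi=4000.0, seed=0):
    """Minimal noise-vocoder sketch: log-spaced bands, Hilbert envelopes,
    envelope-modulated band-limited noise carriers summed back together."""
    rng = np.random.default_rng(seed)
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_bands + 1)
    out = np.zeros_like(signal)
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = filtfilt(b, a, signal)
        env = np.abs(hilbert(band))                 # amplitude envelope
        carrier = filtfilt(b, a, rng.standard_normal(len(signal)))
        out += env * carrier                        # band-limited noise carrier
    return out

# Demo on a synthetic 1-second tone complex standing in for speech.
fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
vocoded = noise_vocode(speech, fs)
```

The vocoded output preserves the per-band temporal envelopes while discarding the fine spectral structure, which is what makes such speech intelligible yet degraded.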


Subjects
Speech Intelligibility, Speech Perception, Acoustics, Humans, Noise, Perceptual Masking, Speech Acoustics
3.
J Acoust Soc Am ; 147(4): 2728, 2020 04.
Article in English | MEDLINE | ID: mdl-32359293

ABSTRACT

Few studies thus far have investigated whether perception of distorted speech is consistent across different types of distortion. This study investigated whether participants show a consistent perceptual profile across three speech distortions: time-compressed speech, noise-vocoded speech, and speech in noise. It also examined whether, and how, individual differences in performance on a battery of audiological and cognitive tasks link to perception. Eighty-eight participants completed a speeded sentence-verification task, with higher accuracy and shorter response times indicating better performance. The audiological and cognitive measures included pure-tone audiometry, speech recognition threshold, working memory, vocabulary knowledge, attention switching, and pattern analysis. Although previous studies suggest that temporal and spectral/environmental distortions engage different lexical or phonological mechanisms, this study found significant positive correlations in accuracy and response-time performance across all distortions. A principal component analysis and multiple linear regressions suggest that a component based on vocabulary knowledge and working memory predicted performance in the speech-in-quiet, time-compressed, and speech-in-noise conditions. These results suggest that listeners employ a similar cognitive strategy to perceive different temporal and spectral/environmental speech distortions, and that this mechanism is supported by vocabulary knowledge and working memory.
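The consistency analysis described in this entry (pairwise correlations across distortion conditions, followed by a principal component analysis) can be sketched on simulated scores; the participant data below are fabricated for illustration and share a latent "ability" factor by construction, they are not the study's data.

```python
import numpy as np

# Fabricated per-participant accuracy in three distortion conditions,
# driven by a shared latent ability plus condition-specific noise.
rng = np.random.default_rng(1)
ability = rng.normal(size=88)
conds = ("time_compressed", "noise_vocoded", "speech_in_noise")
acc = {c: 0.7 + 0.05 * ability + 0.02 * rng.normal(size=88) for c in conds}

# Pairwise correlations across distortions (the "consistent profile" test).
X = np.column_stack([acc[c] for c in conds])
corr = np.corrcoef(X, rowvar=False)

# Principal component analysis via SVD of the standardized scores.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
_, s, _ = np.linalg.svd(Z, full_matrices=False)
var_explained = s**2 / np.sum(s**2)   # variance share per component
```

With a shared ability factor, the off-diagonal correlations are strongly positive and the first component dominates, mirroring the pattern the abstract reports.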


Subjects
Speech Perception, Speech, Cognition, Humans, Noise/adverse effects, Speech Discrimination Tests
4.
Cereb Cortex ; 27(5): 3064-3079, 2017 05 01.
Article in English | MEDLINE | ID: mdl-28334401

ABSTRACT

Imitating speech necessitates the transformation from sensory targets to vocal tract motor output, yet little is known about the representational basis of this process in the human brain. Here, we address this question by using real-time MR imaging (rtMRI) of the vocal tract and functional MRI (fMRI) of the brain in a speech imitation paradigm. Participants trained on imitating a native vowel and a similar nonnative vowel that required lip rounding. Later, participants imitated these vowels and an untrained vowel pair during separate fMRI and rtMRI runs. Univariate fMRI analyses revealed that regions including left inferior frontal gyrus were more active during sensorimotor transformation (ST) and production of nonnative vowels, compared with native vowels; further, ST for nonnative vowels activated somatomotor cortex bilaterally, compared with ST of native vowels. Using representational similarity analysis (RSA) models constructed from participants' vocal tract images and from stimulus formant distances, searchlight analyses of the fMRI data showed that either type of model was reflected in somatomotor, temporal, cerebellar, and hippocampal neural activation patterns during ST. We thus provide the first evidence of widespread and robust cortical and subcortical neural representation of vocal tract and/or formant parameters during prearticulatory ST.
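The RSA logic used here, comparing a model dissimilarity matrix (e.g. built from formant distances) against a neural dissimilarity matrix from activation patterns, can be sketched as follows; the formant values and the "voxel" patterns below are made up for illustration, not the study's data.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical F1/F2 formant values (Hz) for four vowels.
formants = np.array([[300, 2300], [400, 2000], [350, 800], [500, 1000]])
model_rdm = pdist(formants, metric="euclidean")      # model dissimilarities

# Hypothetical activation patterns: four vowels x 50 voxels.
patterns = rng.standard_normal((4, 50))
neural_rdm = pdist(patterns, metric="correlation")   # neural dissimilarities

# RSA compares the two condensed RDMs, typically via rank correlation.
rho, p = spearmanr(model_rdm, neural_rdm)
```

In a searchlight analysis this comparison is repeated for the pattern inside a small sphere centred on each voxel, yielding a whole-brain map of model fit.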


Subjects
Brain Mapping, Larynx/diagnostic imaging, Lip/diagnostic imaging, Sensorimotor Cortex/physiology, Speech/physiology, Tongue/diagnostic imaging, Adult, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Oxygen/blood, Palate, Soft/diagnostic imaging, Sensorimotor Cortex/diagnostic imaging, Speech Acoustics, Young Adult
5.
Neuroimage ; 159: 18-31, 2017 10 01.
Article in English | MEDLINE | ID: mdl-28669904

ABSTRACT

Sensorimotor transformation (ST) may be a critical process in mapping perceived speech input onto non-native (L2) phonemes, in support of subsequent speech production. Yet, little is known concerning the role of ST with respect to L2 speech, particularly where learned L2 phones (e.g., vowels) must be produced in more complex lexical contexts (e.g., multi-syllabic words). Here, we charted the behavioral and neural outcomes of producing trained L2 vowels at word level, using a speech imitation paradigm and functional MRI. We asked whether participants would be able to faithfully imitate trained L2 vowels when they occurred in non-words of varying complexity (one or three syllables). Moreover, we related individual differences in imitation success during training to BOLD activation during ST (i.e., pre-imitation listening), and during later imitation. We predicted that superior temporal and peri-Sylvian speech regions would show increased activation as a function of item complexity and non-nativeness of vowels, during ST. We further anticipated that pre-scan acoustic learning performance would predict BOLD activation for non-native (vs. native) speech during ST and imitation. We found individual differences in imitation success for training on the non-native vowel tokens in isolation; these were preserved in a subsequent task, during imitation of mono- and trisyllabic words containing those vowels. fMRI data revealed a widespread network involved in ST, modulated by both vowel nativeness and utterance complexity: superior temporal activation increased monotonically with complexity, showing greater activation for non-native than native vowels when presented in isolation and in trisyllables, but not in monosyllables. Individual differences analyses showed that learning versus lack of improvement on the non-native vowel during pre-scan training predicted increased ST activation for non-native compared with native items, at insular cortex, pre-SMA/SMA, and cerebellum. 
Our results hold implications for the importance of ST as a process underlying successful imitation of non-native speech.


Subjects
Brain/physiology, Learning/physiology, Multilingualism, Speech/physiology, Adolescent, Adult, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Young Adult
6.
Neuroimage ; 128: 218-226, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26732405

ABSTRACT

It has become increasingly evident that human motor circuits are active during speech perception. However, the conditions under which the motor system modulates speech perception are not clear. Two prominent accounts make distinct predictions for how listening to speech engages speech motor representations. The first suggests that the motor system is most strongly activated when observing familiar actions (Pickering and Garrod, 2013). Conversely, Wilson and Knoblich (2005) assert that motor excitability is greatest when observing less familiar, ambiguous actions. We investigated these predictions using transcranial magnetic stimulation (TMS). Stimulation of the lip and hand representations in the left primary motor cortex elicited motor evoked potentials (MEPs) indexing the excitability of the underlying motor representation. MEPs for the lip, but not for the hand, were larger during perception of distorted speech produced using a tongue depressor, relative to naturally produced speech. Somatotopic facilitation was also observed: MEPs were significantly larger during perception of lip-articulated distorted speech sounds than of tongue-articulated distorted sounds. Critically, there was a positive correlation between MEP size and the perception of distorted speech sounds. These findings are consistent with the predictions of Wilson and Knoblich (2005) and provide direct evidence of increased motor excitability when speech perception is difficult.


Subjects
Evoked Potentials, Motor/physiology, Motor Cortex/physiology, Speech Perception/physiology, Acoustic Stimulation, Adult, Electromyography, Female, Humans, Lip/innervation, Male, Transcranial Magnetic Stimulation, Young Adult
7.
J Acoust Soc Am ; 137(4): 2015-24, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25920852

ABSTRACT

The present study investigated the effects of inhibition, vocabulary knowledge, and working memory on perceptual adaptation to accented speech. One hundred young, normal-hearing adults listened to sentences spoken in a constructed, unfamiliar accent presented in speech-shaped background noise. Speech Reception Thresholds (SRTs) corresponding to 50% speech recognition accuracy provided a measure of adaptation to the accented speech. Stroop, vocabulary knowledge, and working memory tests were administered to measure cognitive ability. Participants adapted to the unfamiliar accent, as revealed by a decrease in SRTs over time. Better inhibition (lower Stroop scores) predicted greater and faster adaptation to the unfamiliar accent. Vocabulary knowledge predicted better recognition of the unfamiliar accent, while working memory had a smaller, indirect effect on speech recognition mediated by vocabulary score. The results support a top-down model for successful adaptation to, and recognition of, accented speech; they add to recent theories that assign executive function a prominent role in effective speech comprehension under adverse listening conditions.
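An SRT corresponding to 50% recognition, as used in this entry and in entry 1, is typically estimated with an adaptive track. A minimal one-up/one-down sketch (start level, step size, trial count, and the simulated listener are illustrative assumptions, not the published procedure):

```python
import numpy as np

def srt_staircase(p_correct, start_snr=4.0, step=2.0, n_trials=40, seed=3):
    """One-up/one-down adaptive track: lower the SNR after a correct
    response, raise it after an error, converging on the SNR that
    yields 50% correct."""
    rng = np.random.default_rng(seed)
    snr, track = start_snr, []
    for _ in range(n_trials):
        track.append(snr)
        correct = rng.random() < p_correct(snr)
        snr += -step if correct else step
    return float(np.mean(track[-20:]))  # SRT estimate from the final trials

def psychometric(snr):
    # Hypothetical listener whose recognition is 50% correct at -2 dB SNR.
    return 1.0 / (1.0 + np.exp(-(snr + 2.0)))

srt = srt_staircase(psychometric)
```

Adaptation can then be quantified as the decrease in the estimated SRT across successive blocks of trials.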


Subjects
Memory, Short-Term/physiology, Speech Perception/physiology, Speech/physiology, Vocabulary, Adaptation, Psychological/physiology, Adolescent, Adult, Analysis of Variance, Cognition/physiology, Female, Humans, Male, Phonetics, Recognition, Psychology/physiology, Young Adult
8.
Article in English | MEDLINE | ID: mdl-39085716

ABSTRACT

Observing actions evokes an automatic imitative response that activates mechanisms required to execute those actions. Automatic imitation is measured using the Stimulus Response Compatibility (SRC) task, which presents participants with compatible and incompatible prompt-distractor pairs. Automatic imitation, or the compatibility effect, is the difference in response times (RTs) between incompatible and compatible trials. Past results suggest that an action's animacy affects automatic imitation: human-produced actions evoke larger effects than computer-generated actions. However, animacy effects appear to occur mostly when the non-human stimuli are less complex or less clear. Theoretical accounts make conflicting predictions regarding both stimulus manipulations. We conducted two SRC experiments that presented participants with an animacy manipulation (human and computer-generated stimuli, Experiment 1) and a clarity manipulation (stimuli with varying visual clarity created using Gaussian blurring, Experiments 1 and 2) to tease apart the effects of these manipulations. Participants in Experiment 1 responded more slowly on incompatible than on compatible trials, showing a compatibility effect. Experiment 1 found a null effect of animacy, but stimuli with lower visual clarity evoked smaller compatibility effects. Experiment 2 modulated clarity in five steps and found progressively smaller compatibility effects for stimuli with lower clarity. Clarity, but not animacy, therefore affected automatic imitation; theoretical implications and future directions are considered.

9.
Psychon Bull Rev ; 30(3): 1093-1102, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36443535

ABSTRACT

Observing someone perform an action automatically activates neural substrates associated with executing that action. This covert response, or automatic imitation, is measured behaviourally using the stimulus-response compatibility (SRC) task. In an SRC task, participants are presented with compatible and incompatible response-distractor pairings (e.g., an instruction to say "ba" paired with an audio recording of "da" as an example of an incompatible trial). Automatic imitation is measured as the difference in response times (RT) or accuracy between incompatible and compatible trials. Larger automatic imitation effects have been interpreted as a larger covert imitation response. Past results suggest that an action's biological status affects automatic imitation: Human-produced manual actions show enhanced automatic imitation effects compared with computer-generated actions. Per the integrated theory for language comprehension and production, action observation triggers a simulation process to recognize and interpret observed speech actions involving covert imitation. Human-generated actions are predicted to result in increased automatic imitation because the simulation process is predicted to engage more for actions produced by a speaker who is more similar to the listener. We conducted an online SRC task that presented participants with human and computer-generated speech stimuli to test this prediction. Participants responded faster to compatible than incompatible trials, showing an overall automatic imitation effect. Yet the human-generated and computer-generated vocal stimuli evoked similar automatic imitation effects. These results suggest that computer-generated speech stimuli evoke the same covert imitative response as human stimuli, thus rejecting predictions from the integrated theory of language comprehension and production.
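The automatic imitation (compatibility) effect described in this and the neighbouring SRC entries is simply the RT difference between incompatible and compatible trials. A sketch on simulated response times (the RT distributions and trial counts are fabricated for illustration, not the study's data):

```python
import numpy as np

# Fabricated RTs (ms) from a hypothetical SRC task, 200 trials per condition,
# with incompatible trials drawn ~30 ms slower on average.
rng = np.random.default_rng(2)
rt_compatible = rng.normal(450, 40, size=200)
rt_incompatible = rng.normal(480, 40, size=200)

# Compatibility effect: mean incompatible RT minus mean compatible RT.
compatibility_effect = rt_incompatible.mean() - rt_compatible.mean()
```

A larger positive difference is read as a stronger covert imitation response; group comparisons (e.g. human vs computer-generated stimuli) then compare these per-participant differences.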


Subjects
Imitative Behavior, Speech, Humans, Imitative Behavior/physiology, Reaction Time, Speech/physiology, Computers
10.
Psychon Bull Rev ; 2023 Oct 17.
Article in English | MEDLINE | ID: mdl-37848661

ABSTRACT

Simulation accounts of speech perception posit that speech is covertly imitated to support perception in a top-down manner. Behaviourally, covert imitation is measured through the stimulus-response compatibility (SRC) task. In each trial of a speech SRC task, participants produce a target speech sound whilst perceiving a speech distractor that either matches the target (compatible condition) or does not (incompatible condition). The degree to which the distractor is covertly imitated is captured by the automatic imitation effect, computed as the difference in response times (RTs) between compatible and incompatible trials. Simulation accounts disagree on whether covert imitation is enhanced when speech perception is challenging or instead when the speech signal is most familiar to the speaker. To test these accounts, we conducted three experiments in which participants completed SRC tasks with native and non-native sounds. Experiment 1 uncovered larger automatic imitation effects in an SRC task with non-native sounds than with native sounds. Experiment 2 replicated the finding online, demonstrating its robustness and the applicability of speech SRC tasks online. Experiment 3 intermixed native and non-native sounds within a single SRC task to disentangle effects of perceiving non-native sounds from confounding effects of producing non-native speech actions. This last experiment confirmed that automatic imitation is enhanced for non-native speech distractors, supporting a compensatory function of covert imitation in speech perception. The experiment also uncovered a separate effect of producing non-native speech actions on enhancing automatic imitation effects.

11.
Trends Hear ; 27: 23312165231192297, 2023.
Article in English | MEDLINE | ID: mdl-37547940

ABSTRACT

Speech perception performance for degraded speech can improve with practice or exposure. Such perceptual learning is thought to be reliant on attention and theoretical accounts like the predictive coding framework suggest a key role for attention in supporting learning. However, it is unclear whether speech perceptual learning requires undivided attention. We evaluated the role of divided attention in speech perceptual learning in two online experiments (N = 336). Experiment 1 tested the reliance of perceptual learning on undivided attention. Participants completed a speech recognition task where they repeated forty noise-vocoded sentences in a between-group design. Participants performed the speech task alone or concurrently with a domain-general visual task (dual task) at one of three difficulty levels. We observed perceptual learning under divided attention for all four groups, moderated by dual-task difficulty. Listeners in easy and intermediate visual conditions improved as much as the single-task group. Those who completed the most challenging visual task showed faster learning and achieved similar ending performance compared to the single-task group. Experiment 2 tested whether learning relies on domain-specific or domain-general processes. Participants completed a single speech task or performed this task together with a dual task aiming to recruit domain-specific (lexical or phonological), or domain-general (visual) processes. All secondary task conditions produced patterns and amount of learning comparable to the single speech task. Our results demonstrate that the impact of divided attention on perceptual learning is not strictly dependent on domain-general or domain-specific processes and speech perceptual learning persists under divided attention.


Subjects
Speech Perception, Speech, Humans, Learning, Noise/adverse effects, Language
12.
Neuroimage ; 63(3): 1601-13, 2012 Nov 15.
Article in English | MEDLINE | ID: mdl-22836181

ABSTRACT

The localisation of spoken language comprehension is debated extensively: is processing located anterior or posterior in the left temporal lobe, and is it left- or bilaterally organised? An Activation Likelihood Estimation (ALE) analysis was conducted on functional MRI and PET studies of speech comprehension to identify the neural network involved in comprehension processing. Furthermore, the analysis aimed to establish the effect of four design choices (scanning paradigm, non-speech baseline, the presence of a task, and the type of stimulus material) on this comprehension network. The analysis included 57 experiments contrasting intelligible with less intelligible or unintelligible stimuli. A large comprehension network was found across bilateral Superior Temporal Sulcus (STS), Middle Temporal Gyrus (MTG), and Superior Temporal Gyrus (STG), in left Inferior Frontal Gyrus (IFG), left Precentral Gyrus, and in Supplementary Motor Area (SMA) and pre-SMA. The core network for post-lexical processing was restricted to the temporal lobes bilaterally, with the highest ALE values located anterior to Heschl's Gyrus. Activations in the ALE comprehension network outside the temporal lobes (left IFG, SMA/pre-SMA, and Precentral Gyrus) were driven by the use of sentences instead of words, the scanning paradigm, or the type of non-speech baseline.


Subjects
Brain/physiology, Comprehension/physiology, Speech Perception/physiology, Brain Mapping, Humans, Likelihood Functions, Magnetic Resonance Imaging, Positron-Emission Tomography
13.
Hum Brain Mapp ; 33(2): 360-72, 2012 Feb.
Article in English | MEDLINE | ID: mdl-21391272

ABSTRACT

A repetition-suppression functional magnetic resonance imaging paradigm was used to explore the neuroanatomical substrates of processing two types of acoustic variation (speaker and accent) during spoken sentence comprehension. Recordings were made of two speakers and two accents: Standard Dutch and a novel accent of Dutch. Each speaker produced sentences in both accents. Participants listened to two sentences presented in quick succession while their haemodynamic responses were recorded in an MR scanner. The first sentence was spoken in Standard Dutch; the second was spoken by the same or a different speaker and produced in Standard Dutch or in the artificial accent. This design made it possible to identify neural responses to a switch in speaker and in accent independently. A switch in accent was associated with activations in predominantly left-lateralized areas: posterior temporal regions, including superior temporal gyrus, planum temporale (PT), and supramarginal gyrus, as well as frontal regions, including the left pars opercularis of the inferior frontal gyrus (IFG). A switch in speaker recruited a predominantly right-lateralized network, including middle frontal gyrus and precuneus. It is concluded that posterior temporal areas, including PT, and frontal areas, including IFG, are involved in processing accent variation during spoken sentence comprehension.


Subjects
Language, Speech Perception/physiology, Temporal Lobe/physiology, Acoustic Stimulation/methods, Auditory Perception/physiology, Brain Mapping, Frontal Lobe/anatomy & histology, Frontal Lobe/physiology, Humans, Magnetic Resonance Imaging/methods, Temporal Lobe/anatomy & histology
14.
Neuropsychologia ; 166: 108135, 2022 02 10.
Article in English | MEDLINE | ID: mdl-34958833

ABSTRACT

Motor areas for speech production activate during speech perception. Such activation may assist speech perception in challenging listening conditions. It is not known how ageing affects the recruitment of articulatory motor cortex during active speech perception. This study aimed to determine the effect of ageing on recruitment of speech motor cortex during speech perception. Single-pulse Transcranial Magnetic Stimulation (TMS) was applied to the lip area of left primary motor cortex (M1) to elicit lip Motor Evoked Potentials (MEPs). The M1 hand area was tested as a control site. TMS was applied whilst participants perceived syllables presented with noise (-10, 0, +10 dB SNR) and without noise (clear). Participants detected and counted syllables throughout MEP recording. Twenty younger adults (aged 18-25) and twenty older adults (aged 65-78) participated. Results indicated a significant interaction between age and noise condition in the syllable task: older adults misidentified significantly more syllables in the 0 dB SNR condition, and missed more syllables in the -10 dB SNR condition, relative to the clear condition. There were no differences between conditions for younger adults. There was a significant main effect of noise level on lip MEPs, which were unexpectedly inhibited in the 0 dB SNR condition relative to the clear condition. There was no interaction between age group and noise condition, and no main effect of noise or age group on control hand MEPs. These data suggest that speech-induced facilitation in articulatory motor cortex is abolished when performing a challenging secondary task, irrespective of age.


Subjects
Motor Cortex, Speech Perception, Adolescent, Adult, Aged, Aging, Evoked Potentials, Motor/physiology, Humans, Motor Cortex/physiology, Speech/physiology, Speech Perception/physiology, Transcranial Magnetic Stimulation, Young Adult
15.
J Speech Lang Hear Res ; 64(7): 2513-2528, 2021 07 16.
Article in English | MEDLINE | ID: mdl-34161748

ABSTRACT

Purpose This study first aimed to establish whether viewing specific parts of the speaker's face (eyes or mouth), compared to viewing the whole face, affected adaptation to distorted noise-vocoded sentences. Second, this study aimed to replicate results on processing of distorted speech from lab-based experiments in an online setup. Method We monitored recognition accuracy online while participants were listening to noise-vocoded sentences. We first established whether participants were able to perceive and adapt to audiovisual four-band noise-vocoded sentences when the entire moving face was visible (AV Full). Four further groups were then tested: a group in which participants viewed the moving lower part of the speaker's face (AV Mouth), a group in which participants only saw the moving upper part of the face (AV Eyes), a group in which participants could see neither the moving lower nor the moving upper face (AV Blocked), and a group in which participants saw an image of a still face (AV Still). Results Participants repeated around 40% of the key words correctly and adapted during the experiment, but only when the moving mouth was visible. In contrast, performance was at floor level, and no adaptation took place, in conditions in which the moving mouth was occluded. Conclusions The results show the importance of being able to observe relevant visual speech information from the speaker's mouth region, but not the eyes/upper face region, when listening and adapting to distorted sentences online. Second, the results also demonstrated that it is feasible to run speech perception and adaptation studies online, but that not all findings reported for lab studies replicate. Supplemental Material https://doi.org/10.23641/asha.14810523.


Subjects
Speech Perception, Speech, Auditory Perception, Cues, Humans, Noise, Visual Perception
16.
J Speech Lang Hear Res ; 64(9): 3432-3445, 2021 09 14.
Article in English | MEDLINE | ID: mdl-34463528

ABSTRACT

Purpose Visual cues from a speaker's face may benefit perceptual adaptation to degraded speech, but current evidence is limited. We aimed to replicate results from previous studies to establish the extent to which visual speech cues can lead to greater adaptation over time, extending existing results to a real-time adaptation paradigm (i.e., without a separate training period). A second aim was to investigate whether eye gaze patterns toward the speaker's mouth were related to better perception, hypothesizing that listeners who looked more at the speaker's mouth would show greater adaptation. Method A group of listeners (n = 30) was presented with 90 noise-vocoded sentences in audiovisual format, whereas a control group (n = 29) was presented with the audio signal only. Recognition accuracy was measured throughout and eye tracking was used to measure fixations toward the speaker's eyes and mouth in the audiovisual group. Results Previous studies were partially replicated: The audiovisual group had better recognition throughout and adapted slightly more rapidly, but both groups showed an equal amount of improvement overall. Longer fixations on the speaker's mouth in the audiovisual group were related to better overall accuracy. An exploratory analysis further demonstrated that the duration of fixations to the speaker's mouth decreased over time. Conclusions The results suggest that visual cues may not benefit adaptation to degraded speech as much as previously thought. Longer fixations on a speaker's mouth may play a role in successfully decoding visual speech cues; however, this will need to be confirmed in future research to fully understand how patterns of eye gaze are related to audiovisual speech recognition. All materials, data, and code are available at https://osf.io/2wqkf/.


Subjects
Speech Perception, Speech, Fixation, Ocular, Humans, Mouth, Visual Perception
17.
Neuroimage ; 49(1): 1124-32, 2010 Jan 01.
Article in English | MEDLINE | ID: mdl-19632341

ABSTRACT

Listeners show remarkable flexibility in processing variation in the speech signal. One striking example is the ease with which they adapt to novel speech distortions, such as listening to someone with a foreign accent. Behavioural studies suggest that significant improvements in comprehension occur rapidly, often within 10-20 sentences. In the present experiment, we investigate the neural changes underlying online adaptation to distorted speech, using time-compressed speech. Listeners performed a sentence verification task on normal-speed and time-compressed sentences while their neural responses were recorded using fMRI. The results showed that rapid learning of the time-compressed speech occurred during presentation of the first block of 16 sentences and was associated with increased activation in left and right auditory association cortices and in left ventral premotor cortex. These findings suggest that the ability to adapt to a distorted speech signal may, in part, rely on mapping novel acoustic patterns onto existing articulatory motor plans, consistent with the idea that speech perception involves integrating multi-modal information, including auditory and motoric cues.
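Time compression can be roughly illustrated with a naive overlap-add scheme that reads input frames at a multiple of the output hop; real stimuli typically use pitch-preserving algorithms such as PSOLA, so the sketch below (frame size, window, compression rate all illustrative) is only a toy version of the manipulation.

```python
import numpy as np

def time_compress(signal, rate=2.0, frame=512):
    """Naive overlap-add time compression: read frames spaced `rate`
    times the output hop, window them with a Hann window, and
    overlap-add at the output hop. A toy sketch, not PSOLA."""
    hop_out = frame // 2
    hop_in = int(hop_out * rate)
    win = np.hanning(frame)
    n_frames = max(1, (len(signal) - frame) // hop_in + 1)
    out = np.zeros(hop_out * n_frames + frame)
    for i in range(n_frames):
        seg = signal[i * hop_in: i * hop_in + frame]
        out[i * hop_out: i * hop_out + len(seg)] += win[:len(seg)] * seg
    return out

# Demo: a 1-second tone compressed to roughly half its duration.
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
fast = time_compress(tone, rate=2.0)
```

At rate=2.0 the output is roughly half as long as the input while retaining the local waveform shape within each frame.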


Subjects
Adaptation, Psychological/physiology, Neuronal Plasticity/physiology, Speech Perception/physiology, Acoustic Stimulation, Adolescent, Adult, Auditory Cortex/physiology, Cerebral Cortex/physiology, Female, Humans, Image Processing, Computer-Assisted, Language, Magnetic Resonance Imaging, Male, Middle Aged, Motor Cortex/physiology, Oxygen/blood, Reaction Time/physiology, Young Adult
18.
Psychol Sci ; 21(12): 1903-9, 2010 Dec.
Article in English | MEDLINE | ID: mdl-21135202

ABSTRACT

Humans imitate each other during social interaction. This imitative behavior streamlines social interaction and aids in learning to replicate actions. However, the effect of imitation on action comprehension is unclear. This study investigated whether vocal imitation of an unfamiliar accent improved spoken-language comprehension. Following a pretraining accent comprehension test, participants were assigned to one of six groups. The baseline group received no training, but participants in the other five groups listened to accented sentences, listened to and repeated accented sentences in their own accent, listened to and transcribed accented sentences, listened to and imitated accented sentences, or listened to and imitated accented sentences without being able to hear their own vocalizations. Posttraining measures showed that accent comprehension was most improved for participants who imitated the speaker's accent. These results show that imitation may aid in streamlining interaction by improving spoken-language comprehension under adverse listening conditions.


Subjects
Comprehension, Imitative Behavior, Language, Adolescent, Adult, Female, Humans, Language Tests, Male, Speech, Speech Perception, Young Adult
19.
J Exp Psychol Hum Percept Perform ; 35(2): 520-9, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19331505

ABSTRACT

This study aimed to determine the relative processing cost associated with comprehension of an unfamiliar native accent under adverse listening conditions. Two sentence verification experiments were conducted in which listeners heard sentences at various signal-to-noise ratios. In Experiment 1, these sentences were spoken in a familiar or an unfamiliar native accent or in two familiar native accents. In Experiment 2, they were spoken in a familiar or unfamiliar native accent or in a nonnative accent. The results indicated that the differences between the native accents influenced the speed of language processing under adverse listening conditions and that this processing speed was modulated by the relative familiarity of the listener with the native accent. Furthermore, the results showed that the processing cost associated with the nonnative accent was larger than for the unfamiliar native accent.
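Presenting sentences "at various signal-to-noise ratios" amounts to scaling a noise track relative to each sentence before mixing. A minimal sketch of that step is shown below; it is illustrative only, and the masker type and SNR values used in the study are not reproduced here.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`
    (in dB), then return the mixture. Noise is truncated to the speech
    length before measuring power."""
    noise = noise[: len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10.0)))
    return speech + scale * noise
```

Lower (more negative) SNR values produce harder listening conditions, which is the axis along which the processing cost of accent unfamiliarity was measured.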


Subjects
Comprehension, Field Dependence-Independence, Phonetics, Recognition (Psychology), Adult, Analysis of Variance, Female, Humans, Male, Noise, Perceptual Masking, Reference Values, Speech Discrimination Tests, Speech Intelligibility, Young Adult
20.
J Acoust Soc Am ; 126(5): 2649-59, 2009 Nov.
Article in English | MEDLINE | ID: mdl-19894842

ABSTRACT

Speakers vary their speech rate considerably during a conversation, and listeners are able to adapt quickly to these variations. Adaptation to fast speech rates is usually measured using artificially time-compressed speech. This study examined adaptation to two types of fast speech: artificially time-compressed speech and natural fast speech. Listeners performed a speeded sentence verification task on three series of sentences: normal-speed sentences, time-compressed sentences, and natural fast sentences. Listeners were divided into two groups to evaluate the possibility of transfer of learning between the time-compressed and natural fast conditions: the first group verified the natural fast sentences before the time-compressed sentences, while the second verified the time-compressed sentences before the natural fast sentences. Two main results emerged. First, there was transfer of learning when the time-compressed sentences preceded the natural fast sentences, but not when the natural fast sentences preceded the time-compressed sentences; these results are discussed in the framework of theories of perceptual learning. Second, listeners adapted to the natural fast sentences, but performance for this type of fast speech did not improve to the level reached with the time-compressed sentences.


Subjects
Learning/physiology, Psychoacoustics, Speech Intelligibility/physiology, Speech Perception/physiology, Speech, Physiological Adaptation/physiology, Adolescent, Adult, Female, Humans, Male, Speech Production Measurement, Time Factors, Young Adult