Results 1 - 20 of 23
1.
J Neurosci ; 43(27): 4984-4996, 2023 07 05.
Article in English | MEDLINE | ID: mdl-37197979

ABSTRACT

It has been postulated that the brain is organized by "metamodal," sensory-independent cortical modules capable of performing tasks (e.g., word recognition) in both "standard" and novel sensory modalities. Still, this theory has primarily been tested in sensory-deprived individuals, with mixed evidence in neurotypical subjects, thereby limiting its support as a general principle of brain organization. Critically, current theories of metamodal processing do not specify requirements for successful metamodal processing at the level of neural representations. Specification at this level may be particularly important in neurotypical individuals, where novel sensory modalities must interface with existing representations for the standard sense. Here we hypothesized that effective metamodal engagement of a cortical area requires congruence between stimulus representations in the standard and novel sensory modalities in that region. To test this, we first used fMRI to identify bilateral auditory speech representations. We then trained 20 human participants (12 female) to recognize vibrotactile versions of auditory words using one of two auditory-to-vibrotactile algorithms. The vocoded algorithm attempted to match the encoding scheme of auditory speech while the token-based algorithm did not. Crucially, using fMRI, we found that only in the vocoded group did trained-vibrotactile stimuli recruit speech representations in the superior temporal gyrus and lead to increased coupling between them and somatosensory areas. Our results advance our understanding of brain organization by providing new insight into unlocking the metamodal potential of the brain, thereby benefitting the design of novel sensory substitution devices that aim to tap into existing processing streams in the brain.

SIGNIFICANCE STATEMENT: It has been proposed that the brain is organized by "metamodal," sensory-independent modules specialized for performing certain tasks. This idea has inspired therapeutic applications, such as sensory substitution devices, for example, enabling blind individuals "to see" by transforming visual input into soundscapes. Yet, other studies have failed to demonstrate metamodal engagement. Here, we tested the hypothesis that metamodal engagement in neurotypical individuals requires matching the encoding schemes between stimuli from the novel and standard sensory modalities. We trained two groups of subjects to recognize words generated by one of two auditory-to-vibrotactile transformations. Critically, only vibrotactile stimuli that were matched to the neural encoding of auditory speech engaged auditory speech areas after training. This suggests that matching encoding schemes is critical to unlocking the brain's metamodal potential.
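
For illustration, below is a minimal Python sketch of a generic channel-vocoder-style front end (band-pass filtering plus envelope extraction), the kind of encoding-matched transformation the abstract describes; the band edges, filter order, and test signal are assumptions for demonstration and are not the study's actual auditory-to-vibrotactile algorithm.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def envelope_channels(audio, fs, band_edges):
    """Band-pass the signal into channels and extract each channel's amplitude envelope."""
    envelopes = []
    for lo, hi in band_edges:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, audio)
        envelopes.append(np.abs(hilbert(band)))  # Hilbert envelope of the band
    return np.stack(envelopes)

# Hypothetical example: a 1 s noise burst split into four broad speech-range bands.
fs = 16000
audio = np.random.randn(fs)
bands = [(100, 400), (400, 1000), (1000, 2500), (2500, 6000)]
env = envelope_channels(audio, fs, bands)   # shape: (4, 16000)
```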


Subjects
Auditory Cortex , Speech Perception , Humans , Female , Speech , Auditory Perception , Brain , Temporal Lobe , Magnetic Resonance Imaging/methods , Acoustic Stimulation/methods
2.
Ear Hear ; 42(3): 673-690, 2021.
Article in English | MEDLINE | ID: mdl-33928926

ABSTRACT

OBJECTIVES: The ability to recognize words in connected speech under noisy listening conditions is critical to everyday communication. Many processing levels contribute to the individual listener's ability to recognize words correctly against background speech, and there is clinical need for measures of individual differences at different levels. Typical listening tests of speech recognition in noise require a list of items to obtain a single threshold score. Measures of diverse abilities could be obtained by mining the various open-set recognition errors made during multi-item tests. This study sought to demonstrate that an error mining approach using open-set responses from a clinical sentence-in-babble-noise test can be used to characterize abilities beyond signal-to-noise ratio (SNR) threshold. A stimulus-response phoneme-to-phoneme sequence alignment software system was used to achieve automatic, accurate quantitative error scores. The method was applied to a database of responses from normal-hearing (NH) adults. Relationships between two types of response errors and words correct scores were evaluated through use of mixed models regression. DESIGN: Two hundred thirty-three NH adults completed three lists of the Quick Speech in Noise test. Their individual open-set speech recognition responses were automatically phonemically transcribed and submitted to a phoneme-to-phoneme stimulus-response sequence alignment system. The computed alignments were mined for a measure of acoustic phonetic perception, a measure of response text that could not be attributed to the stimulus, and a count of words correct. The mined data were statistically analyzed to determine whether the response errors were significant factors beyond stimulus SNR in accounting for the number of words correct per response from each participant. This study addressed two hypotheses: (1) Individuals whose perceptual errors are less severe recognize more words correctly under difficult listening conditions due to babble masking and (2) Listeners who are better able to exclude incorrect speech information, such as from background babble or filling in, recognize more stimulus words correctly. RESULTS: Statistical analyses showed that acoustic phonetic accuracy and exclusion of babble background were significant factors, beyond the stimulus sentence SNR, in accounting for the number of words a participant recognized. There was also evidence that poorer acoustic phonetic accuracy could occur along with higher words correct scores. This paradoxical result came from a subset of listeners who had also performed subjective accuracy judgments. Their results suggested that they recognized more words while also misallocating acoustic cues from the background into the stimulus, without realizing their errors. Because the Quick Speech in Noise test stimuli are locked to their own babble sample, misallocations of whole words from babble into the responses could be investigated in detail. The high rate of common misallocation errors for some sentences supported the view that the functional stimulus was the combination of the target sentence and its babble. CONCLUSIONS: Individual differences among NH listeners arise both in terms of words accurately identified and errors committed during open-set recognition of sentences in babble maskers. Error mining to characterize individual listeners can be done automatically at the levels of acoustic phonetic perception and the misallocation of background babble words into open-set responses. Error mining can increase test information and the efficiency and accuracy of characterizing individual listeners.
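
For illustration, the sketch below shows one way a stimulus-response phoneme-to-phoneme alignment could be computed, using a generic Needleman-Wunsch global alignment; the scoring values and example phoneme strings are assumptions, not the scoring used by the study's alignment software.

```python
def align_phonemes(stimulus, response, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment of stimulus vs. response phoneme sequences.
    Gaps ('-') mark deletions/insertions; mismatched pairs mark substitutions."""
    n, m = len(stimulus), len(response)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if stimulus[i - 1] == response[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + sub,
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    aligned_s, aligned_r, i, j = [], [], n, m
    while i > 0 or j > 0:
        sub = match if i > 0 and j > 0 and stimulus[i - 1] == response[j - 1] else mismatch
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + sub:
            aligned_s.append(stimulus[i - 1]); aligned_r.append(response[j - 1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            aligned_s.append(stimulus[i - 1]); aligned_r.append('-'); i -= 1
        else:
            aligned_s.append('-'); aligned_r.append(response[j - 1]); j -= 1
    return aligned_s[::-1], aligned_r[::-1]

# Hypothetical stimulus/response phoneme strings ("cat" heard as "bad"):
s, r = align_phonemes(["k", "ae", "t"], ["b", "ae", "d"])
errors = sum(a != b for a, b in zip(s, r))   # substitutions plus gap positions
```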


Subjects
Speech Perception , Speech , Acoustics , Adult , Hearing , Humans , Individuality , Phonetics
3.
Brain Sci ; 13(7), 2023 Jun 29.
Article in English | MEDLINE | ID: mdl-37508940

ABSTRACT

Traditionally, speech perception training paradigms have not adequately taken into account the possibility that there may be modality-specific requirements for perceptual learning with auditory-only (AO) versus visual-only (VO) speech stimuli. The study reported here investigated the hypothesis that there are modality-specific differences in how prior information is used by normal-hearing participants during vocoded versus VO speech training. Two different experiments, one with vocoded AO speech (Experiment 1) and one with VO, lipread, speech (Experiment 2), investigated the effects of giving different types of prior information to trainees on each trial during training. The training was for four ~20 min sessions, during which participants learned to label novel visual images using novel spoken words. Participants were assigned to different types of prior information during training: Word Group trainees saw a printed version of each training word (e.g., "tethon"), and Consonant Group trainees saw only its consonants (e.g., "t_th_n"). Additional groups received no prior information (i.e., Experiment 1, AO Group; Experiment 2, VO Group) or a spoken version of the stimulus in a different modality from the training stimuli (Experiment 1, Lipread Group; Experiment 2, Vocoder Group). That is, in each experiment, there was a group that received prior information in the modality of the training stimuli from the other experiment. In both experiments, the Word Groups had difficulty retaining the novel words they attempted to learn during training. However, when the training stimuli were vocoded, the Word Group improved their phoneme identification. When the training stimuli were visual speech, the Consonant Group improved their phoneme identification and their open-set sentence lipreading. The results are considered in light of theoretical accounts of perceptual learning in relationship to perceptual modality.

4.
Am J Audiol ; 31(1): 57-77, 2022 Mar 03.
Article in English | MEDLINE | ID: mdl-34965362

ABSTRACT

PURPOSE: This study investigated the effects of external feedback on perceptual learning of visual speech during lipreading training with sentence stimuli. The goal was to improve visual-only (VO) speech recognition and increase accuracy of audiovisual (AV) speech recognition in noise. The rationale was that spoken word recognition depends on the accuracy of sublexical (phonemic/phonetic) speech perception; effective feedback during training must support sublexical perceptual learning. METHOD: Normal-hearing (NH) adults were assigned to one of three types of feedback: Sentence feedback was the entire sentence, printed after the response to the stimulus. Word feedback was the correct response words plus perceptually near but incorrect response words. Consonant feedback was the correct response words plus the consonants in incorrect but perceptually near response words. Six training sessions were given. Pre- and posttraining testing included an untrained control group. Test stimuli were disyllable nonsense words for forced-choice consonant identification, and isolated words and sentences for open-set identification. Words and sentences were VO, AV, and audio-only (AO) with the audio in speech-shaped noise. RESULTS: Lipreading accuracy increased during training. Pre- and posttraining tests of consonant identification showed no improvement beyond test-retest increases obtained by untrained controls. Isolated word recognition with a talker not seen during training showed that the control group improved more than the sentence group. Tests of untrained sentences showed that the consonant group significantly improved in all of the stimulus conditions (VO, AO, and AV). Its mean words correct scores increased by 9.2 percentage points for VO, 3.4 percentage points for AO, and 9.8 percentage points for AV stimuli. CONCLUSIONS: Consonant feedback during training with sentence stimuli significantly increased perceptual learning. The training generalized to untrained VO, AO, and AV sentence stimuli. Lipreading training has potential to significantly improve adults' face-to-face communication in noisy settings in which the talker can be seen.


Subjects
Lipreading , Speech Perception , Adult , Feedback , Humans , Noise , Speech , Speech Perception/physiology
5.
Am J Audiol ; 31(2): 453-469, 2022 Jun 02.
Article in English | MEDLINE | ID: mdl-35316072

ABSTRACT

PURPOSE: The goal of this review article is to reinvigorate interest in lipreading and lipreading training for adults with acquired hearing loss. Most adults benefit from being able to see the talker when speech is degraded; however, the effect size is related to their lipreading ability, which is typically poor in adults who have experienced normal hearing through most of their lives. Lipreading training has been viewed as a possible avenue for rehabilitation of adults with an acquired hearing loss, but most training approaches have not been particularly successful. Here, we describe lipreading and theoretically motivated approaches to its training, as well as examples of successful training paradigms. We discuss some extensions to auditory-only (AO) and audiovisual (AV) speech recognition. METHOD: Visual speech perception and word recognition are described. Traditional and contemporary views of training and perceptual learning are outlined. We focus on the roles of external and internal feedback and the training task in perceptual learning, and we describe results of lipreading training experiments. RESULTS: Lipreading is commonly characterized as limited to viseme perception. However, evidence demonstrates subvisemic perception of visual phonetic information. Lipreading words also relies on lexical constraints, not unlike auditory spoken word recognition. Lipreading has been shown to be difficult to improve through training, but under specific feedback and task conditions, training can be successful, and learning can generalize to untrained materials, including AV sentence stimuli in noise. The results on lipreading have implications for AO and AV training and for use of acoustically processed speech in face-to-face communication. CONCLUSION: Given its importance for speech recognition with a hearing loss, we suggest that the research and clinical communities integrate lipreading in their efforts to improve speech recognition in adults with acquired hearing loss.


Subjects
Deafness , Hearing Loss , Speech Perception , Adult , Humans , Lipreading , Speech
6.
Neuroimage ; 52(4): 1477-86, 2010 Oct 01.
Article in English | MEDLINE | ID: mdl-20561996

ABSTRACT

Neuromagnetic evoked fields were recorded to compare the adaptation of the primary somatosensory cortex (SI) response to tactile stimuli delivered to the glabrous skin at the fingertips of the first three digits (condition 1) and between midline upper and lower lips (condition 2). The stimulation paradigm allowed us to characterize the response adaptation in the presence of functional integration of tactile stimuli from adjacent skin areas in each condition. At each stimulation site, cutaneous stimuli (50 ms duration) were delivered in three runs, using trains of 6 pulses with regular stimulus onset asynchrony (SOA). The pulses were separated by SOAs of 500 ms, 250 ms, or 125 ms in each run, respectively, while the inter-train interval was fixed (5 s) across runs. The evoked activity in SI (contralateral to the stimulated hand, and bilateral for lip stimulation) was characterized from the best-fit dipoles of the response component peaking around 70 ms for the hand stimulation, and 8 ms earlier (on average) for the lip stimulation. The SOA-dependent long-term adaptation effects were assessed from the change in the amplitude of the responses to the first stimulus in each train. The short-term adaptation was characterized by the lifetime of an exponentially saturating model function fitted to the set of suppression ratios of the second relative to the first SI response in each train. Our results indicate: 1) the presence of a rate-dependent long-term adaptation effect induced only by the tactile stimulation of the digits; and 2) shorter recovery lifetimes for the digits compared with the lips.
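
As a minimal illustration of the short-term adaptation analysis, the sketch below fits an exponentially saturating function of SOA to suppression ratios and reads off its lifetime; the functional form and the ratio values are assumptions for demonstration, not the study's data or exact parameterization.

```python
import numpy as np
from scipy.optimize import curve_fit

def saturating(soa, amplitude, tau):
    """Exponentially saturating recovery of the 2nd/1st response ratio with SOA (lifetime tau)."""
    return amplitude * (1.0 - np.exp(-soa / tau))

soa = np.array([0.125, 0.250, 0.500])      # the three SOAs, in seconds
ratio = np.array([0.45, 0.62, 0.78])       # hypothetical suppression ratios (2nd/1st response)

(amplitude, tau), _ = curve_fit(saturating, soa, ratio, p0=[1.0, 0.2])
print(f"recovery lifetime: {tau * 1000:.0f} ms")
```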


Subjects
Evoked Potentials, Somatosensory/physiology , Fingers/physiology , Lip/physiology , Magnetoencephalography , Skin Physiological Phenomena , Somatosensory Cortex/physiology , Touch/physiology , Adaptation, Physiological , Adult , Humans , Lip/innervation , Male , Physical Stimulation/methods , Skin/innervation
7.
J Am Acad Audiol ; 21(3): 163-8, 2010 Mar.
Article in English | MEDLINE | ID: mdl-20211120

ABSTRACT

BACKGROUND: The visual speech signal can provide sufficient information to support successful communication. However, individual differences in the ability to appreciate that information are large, and relatively little is known about their sources. PURPOSE: Here a body of research is reviewed regarding the development of a theoretical framework in which to study speechreading and individual differences in that ability. Based on the hypothesis that visual speech is processed via the same perceptual-cognitive machinery as auditory speech, the framework was developed by adapting one originally proposed for auditory spoken word recognition. CONCLUSION: The evidence to date is consistent with the conclusion that visual spoken word recognition is achieved via a process similar to auditory word recognition, provided differences in perceptual similarity are taken into account. Words perceptually similar to many other words and that occur infrequently in the input stream are at a distinct disadvantage within this process. The results to date are also consistent with the conclusion that deaf individuals, regardless of speechreading ability, recognize spoken words via a process similar to that of individuals with hearing.


Subjects
Aptitude , Deafness/psychology , Deafness/rehabilitation , Lipreading , Deafness/etiology , Humans , Pattern Recognition, Physiological/physiology , Visual Perception/physiology
8.
Brain Topogr ; 21(3-4): 207-15, 2009 May.
Article in English | MEDLINE | ID: mdl-19404730

ABSTRACT

The functional organization of cortical speech processing is thought to be hierarchical, increasing in complexity and proceeding from primary sensory areas centrifugally. The current study used the mismatch negativity (MMN) obtained with electrophysiology (EEG) to investigate the early latency period of visual speech processing under both visual-only (VO) and audiovisual (AV) conditions. Current density reconstruction (CDR) methods were used to model the cortical MMN generator locations. MMNs were obtained with VO and AV speech stimuli at early latencies (approximately 82-87 ms peak in time waveforms relative to the acoustic onset) and in regions of the right lateral temporal and parietal cortices. Latencies were consistent with bottom-up processing of the visible stimuli. We suggest that a visual pathway extracts phonetic cues from visible speech, and that previously reported effects of AV speech in classical early auditory areas, given later reported latencies, could be attributable to modulatory feedback from visual phonetic processing.


Subjects
Cerebral Cortex/physiology , Reaction Time/physiology , Speech Perception/physiology , Visual Perception/physiology , Acoustic Stimulation , Adult , Auditory Cortex/anatomy & histology , Auditory Cortex/physiology , Auditory Pathways/anatomy & histology , Auditory Pathways/physiology , Brain Mapping , Electroencephalography , Female , Humans , Male , Neuropsychological Tests , Parietal Lobe/anatomy & histology , Parietal Lobe/physiology , Photic Stimulation , Temporal Lobe/anatomy & histology , Temporal Lobe/physiology , Time Factors , Visual Pathways/anatomy & histology , Visual Pathways/physiology , Young Adult
9.
Scand J Psychol ; 50(5): 419-25, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19778389

ABSTRACT

Spoken word recognition is thought to be achieved via competition in the mental lexicon between perceptually similar word forms. A review of the development and initial behavioral validations of computational models of visual spoken word recognition is presented, followed by a report of new empirical evidence. Specifically, a replication and extension of Mattys, Bernstein & Auer's (2002) study was conducted with 20 deaf participants who varied widely in speechreading ability. Participants visually identified isolated spoken words. Accuracy of visual spoken word recognition was influenced by the number of visually similar words in the lexicon and by the frequency of occurrence of the stimulus words. The results are consistent with the common view held within auditory word recognition that this task is accomplished via a process of activation and competition in which frequently occurring units are favored. Finally, future directions for visual spoken word recognition are discussed.


Subjects
Deafness/physiopathology , Lipreading , Pattern Recognition, Visual/physiology , Speech/physiology , Adolescent , Adult , Analysis of Variance , Humans , Photic Stimulation , Speech Intelligibility/physiology , Speech Perception/physiology , Vocabulary
10.
J Speech Lang Hear Res ; 51(3): 750-8, 2008 Jun.
Article in English | MEDLINE | ID: mdl-18506048

ABSTRACT

PURPOSE: Sensitivity of subjective estimates of age of acquisition (AOA) and acquisition channel (AC; printed, spoken, signed) to differences in word exposure within and between populations that differ dramatically in perceptual experience was examined. METHODS: Fifty participants with early-onset deafness and 50 participants with normal hearing rated 175 words in terms of subjective AOA and AC. Additional data were collected using a standardized test of reading and vocabulary. RESULTS: Participants with early-onset deafness rated words as learned later (M = 10 years) than did participants with normal hearing (M = 8.5 years), F(1, 99) = 28.59, p < .01. Group-averaged item ratings of AOA were highly correlated across the groups (r = .971) and with normative order of acquisition (deaf: r = .950, hearing: r = .946). The groups differed in their ratings of AC (hearing: printed = 30%, spoken = 70%, signed = 0%; deaf: printed = 45%, spoken = 38%, signed = 17%). CONCLUSIONS: Subjective AOA and AC measures are sensitive to between- and within-group differences in word experience. The results demonstrate that these subjective measures can be applied as proxies for direct measures of lexical development in studies of lexical knowledge in adults with prelingual onset deafness.


Subjects
Child Language , Concept Formation , Semantics , Verbal Learning , Vocabulary , Adolescent , Adult , Age Factors , Child , Female , Humans , Language Tests , Male , Middle Aged , Surveys and Questionnaires
11.
Neuroreport ; 18(7): 645-8, 2007 May 07.
Article in English | MEDLINE | ID: mdl-17426591

ABSTRACT

Neuroplastic changes in auditory cortex as a result of lifelong perceptual experience were investigated. Adults with early-onset deafness and long-term hearing aid experience were hypothesized to have undergone auditory cortex plasticity due to somatosensory stimulation. Vibrations were presented on the hand of deaf and normal-hearing participants during functional MRI. Vibration stimuli were derived from speech or were a fixed frequency. Higher, more widespread activity was observed within auditory cortical regions of the deaf participants for both stimulus types. Life-long somatosensory stimulation due to hearing aid use could explain the greater activity observed with deaf participants.


Subjects
Auditory Cortex/physiology , Brain Mapping , Deafness , Hearing Aids , Neuronal Plasticity/physiology , Vibration , Adult , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male
12.
J Speech Lang Hear Res ; 50(5): 1157-65, 2007 Oct.
Article in English | MEDLINE | ID: mdl-17905902

ABSTRACT

PURPOSE: L. E. Bernstein, M. E. Demorest, and P. E. Tucker (2000) demonstrated enhanced speechreading accuracy in participants with early-onset hearing loss compared with hearing participants. Here, the authors test the generalization of Bernstein et al.'s (2000) result by testing 2 new large samples of participants. The authors also investigated correlates of speechreading ability within the early-onset hearing loss group and gender differences in speechreading ability within both participant groups. METHOD: One hundred twelve individuals with early-onset hearing loss and 220 individuals with normal hearing identified 30 prerecorded sentences presented 1 at a time from visible speech information alone. RESULTS: The speechreading accuracy of the participants with early-onset hearing loss (M=43.55% words correct; SD=17.48) significantly exceeded that of the participants with normal hearing (M=18.57% words correct; SD=13.18), t(330)=14.576, p<.01. Within the early-onset hearing loss participants, speechreading ability was correlated with several subjective measures of spoken communication. Effects of gender were not reliably observed. CONCLUSION: The present results are consistent with the results of Bernstein et al. (2000). The need to rely on visual speech throughout life, and particularly for the acquisition of spoken language by individuals with early-onset hearing loss, can lead to enhanced speechreading ability.


Subjects
Hearing Loss/physiopathology , Lipreading , Adult , Female , Humans , Individuality , Male , Sex Factors
13.
Otol Neurotol ; 26(4): 649-54, 2005 Jul.
Article in English | MEDLINE | ID: mdl-16015162

ABSTRACT

OBJECTIVE: To determine whether congenitally deafened adults achieve improved speech perception when auditory and visual speech information is available after cochlear implantation. STUDY DESIGN: Repeated-measures single subject analysis of speech perception in visual-alone, auditory-alone, and audiovisual conditions. SETTING: Neurotologic private practice and research institute. SUBJECTS: Eight subjects with profound congenital bilateral hearing loss who underwent cochlear implantation as adults (aged 18-55 years) between 1995 and 2002 and had at least 1 year of experience with the implant. MAIN OUTCOME MEASURES: Auditory, visual, and audiovisual speech perception. RESULTS: The median speech perception scores were as follows: visual-alone, 25.9% (range, 12.7-58.1%); auditory-alone, 5.2% (range, 0-49.4%); and audiovisual, 50.7% (range, 16.5-90.8%). Seven of eight subjects did as well or better in the audiovisual condition than in either the auditory-alone or the visual-alone condition. Three subjects had audiovisual scores greater than would be expected from a simple additive effect of the information from the auditory-alone and visual-alone conditions, suggesting a superadditive combination of auditory and visual information. Three subjects had a simple additive effect of speech perception in the audiovisual condition. CONCLUSION: Some congenitally deafened subjects who undergo implantation as adults have significant gains in speech perception when auditory information from a cochlear implant and visual information from lipreading are available. This study shows that some congenitally deafened adults are able to integrate auditory information provided by the cochlear implant (despite the lack of auditory speech experience before implantation) with visual speech information.
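
Probability summation is one common way to operationalize the "simple additive" benchmark mentioned above; the abstract does not state which rule was used, so the sketch below is only illustrative, treating the reported median scores as proportions correct.

```python
def expected_av(p_auditory, p_visual):
    """Expected audiovisual accuracy if the two channels combine independently
    (probability summation): P = Pa + Pv - Pa * Pv."""
    return p_auditory + p_visual - p_auditory * p_visual

# Median scores from the abstract, expressed as proportions correct.
p_a, p_v, p_av_observed = 0.052, 0.259, 0.507
p_av_expected = expected_av(p_a, p_v)          # about 0.30
superadditive = p_av_observed > p_av_expected  # True for these median values
```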


Subjects
Cochlear Implants , Deafness/congenital , Deafness/surgery , Speech Perception , Adult , Deafness/physiopathology , Hearing , Humans , Lipreading , Middle Aged , Treatment Outcome
14.
Neuroreport ; 13(3): 311-5, 2002 Mar 04.
Article in English | MEDLINE | ID: mdl-11930129

ABSTRACT

Speech perception is conventionally thought to be an auditory function, but humans often use their eyes to perceive speech. We investigated whether visual speech perception depends on processing by the primary auditory cortex in hearing adults. In a functional magnetic resonance imaging experiment, a pulse-tone was presented contrasted with gradient noise. During the same session, a silent video of a talker saying isolated words was presented contrasted with a still face. Visual speech activated the superior temporal gyrus anterior, posterior, and lateral to the primary auditory cortex, but not the region of the primary auditory cortex. These results suggest that visual speech perception is not critically dependent on the region of primary auditory cortex.


Subjects
Auditory Cortex/physiology , Lipreading , Speech Perception/physiology , Visual Perception/physiology , Acoustic Stimulation , Adult , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Nerve Net/physiology
15.
Psychon Bull Rev ; 9(2): 341-7, 2002 Jun.
Article in English | MEDLINE | ID: mdl-12120798

ABSTRACT

The neighborhood activation model (NAM; P. A. Luce & Pisoni, 1998) of spoken word recognition was applied to the problem of predicting accuracy of visual spoken word identification. One hundred fifty-three spoken consonant-vowel-consonant words were identified by a group of 12 college-educated adults with normal hearing and a group of 12 college-educated deaf adults. In both groups, item identification accuracy was correlated with the computed NAM output values. Analysis of subsets of the stimulus set demonstrated that when stimulus intelligibility was controlled, words with fewer neighbors were easier to identify than words with many neighbors. However, when neighborhood density was controlled, variation in segmental intelligibility was minimally related to identification accuracy. The present study provides evidence of a common spoken word recognition system for both auditory and visual speech that retains sensitivity to the phonetic properties of the input.
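
For readers unfamiliar with the NAM computation, the sketch below shows one standard formulation, the frequency-weighted neighborhood probability; the numeric values are made-up illustrations, not the study's stimulus parameters.

```python
def fwnp(stim_prob, stim_freq, neighbors):
    """Frequency-weighted neighborhood probability (after Luce & Pisoni, 1998):
    the stimulus word's frequency-weighted intelligibility divided by that of
    the word plus all of its perceptual neighbors.

    stim_prob -- product of the segment identification probabilities of the stimulus word
    stim_freq -- frequency weight of the stimulus word (e.g., log frequency)
    neighbors -- list of (segment_prob, frequency) pairs for perceptually similar words
    """
    target = stim_prob * stim_freq
    return target / (target + sum(p * f for p, f in neighbors))

# A sparse, low-competition neighborhood predicts higher identification accuracy.
sparse = fwnp(0.4, 10.0, [(0.2, 5.0)])                              # ~0.80
dense  = fwnp(0.4, 10.0, [(0.2, 5.0), (0.3, 20.0), (0.25, 8.0)])    # ~0.31
```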


Subjects
Deafness/psychology , Lipreading , Phonetics , Semantics , Adolescent , Adult , Attention , Female , Humans , Male , Middle Aged , Speech Acoustics , Speech Intelligibility , Speech Perception
16.
Front Hum Neurosci ; 8: 829, 2014.
Article in English | MEDLINE | ID: mdl-25400566

ABSTRACT

In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee's primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee's lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT).

17.
Front Psychol ; 5: 934, 2014.
Article in English | MEDLINE | ID: mdl-25206344

ABSTRACT

Training with audiovisual (AV) speech has been shown to promote auditory perceptual learning of vocoded acoustic speech by adults with normal hearing. In Experiment 1, we investigated whether AV speech promotes auditory-only (AO) perceptual learning in prelingually deafened adults with late-acquired cochlear implants. Participants were assigned to learn associations between spoken disyllabic C(=consonant)V(=vowel)CVC non-sense words and non-sense pictures (fribbles), under AV and then AO (AV-AO; or counter-balanced AO then AV, AO-AV, during Periods 1 then 2) training conditions. After training on each list of paired-associates (PA), testing was carried out AO. Across all training, AO PA test scores improved (7.2 percentage points) as did identification of consonants in new untrained CVCVC stimuli (3.5 percentage points). However, there was evidence that AV training impeded immediate AO perceptual learning: During Period-1, training scores across AV and AO conditions were not different, but AO test scores were dramatically lower in the AV-trained participants. During Period-2 AO training, the AV-AO participants obtained significantly higher AO test scores, demonstrating their ability to learn the auditory speech. Across both orders of training, whenever training was AV, AO test scores were significantly lower than training scores. Experiment 2 repeated the procedures with vocoded speech and 43 normal-hearing adults. Following AV training, their AO test scores were as high as or higher than following AO training. Also, their CVCVC identification scores patterned differently than those of the cochlear implant users. In Experiment 1, initial consonants were most accurate, and in Experiment 2, medial consonants were most accurate. We suggest that our results are consistent with a multisensory reverse hierarchy theory, which predicts that, whenever possible, perceivers carry out perceptual tasks immediately based on the experience and biases they bring to the task. We point out that while AV training could be an impediment to immediate unisensory perceptual learning in cochlear implant patients, it was also associated with higher scores during training.

18.
Front Hum Neurosci ; 7: 371, 2013.
Article in English | MEDLINE | ID: mdl-23882205

ABSTRACT

The visual mismatch negativity (vMMN), deriving from the brain's response to stimulus deviance, is thought to be generated by the cortex that represents the stimulus. The vMMN response to visual speech stimuli was used in a study of the lateralization of visual speech processing. Previous research suggested that the right posterior temporal cortex has specialization for processing simple non-speech face gestures, and the left posterior temporal cortex has specialization for processing visual speech gestures. Here, visual speech consonant-vowel (CV) stimuli with controlled perceptual dissimilarities were presented in an electroencephalography (EEG) vMMN paradigm. The vMMNs were obtained using the comparison of event-related potentials (ERPs) for separate CVs in their roles as deviant vs. their roles as standard. Four separate vMMN contrasts were tested, two with the perceptually far deviants (i.e., "zha" or "fa") and two with the near deviants (i.e., "zha" or "ta"). Only far deviants evoked the vMMN response over the left posterior temporal cortex. All four deviants evoked vMMNs over the right posterior temporal cortex. The results are interpreted as evidence that the left posterior temporal cortex represents speech contrasts that are perceived as different consonants, and the right posterior temporal cortex represents face gestures that may not be perceived as different CVs.

19.
Front Neurosci ; 7: 34, 2013.
Article in English | MEDLINE | ID: mdl-23515520

ABSTRACT

Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how AV training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures. In Experiment 1, paired-associates (PA) AV training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called "reverse hierarchy theory" of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early AV speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning.

20.
Brain Res ; 1348: 63-70, 2010 Aug 12.
Article in English | MEDLINE | ID: mdl-20550944

ABSTRACT

A new pneumatic tactile stimulator, called the TAC-Cell, was developed in our laboratory to non-invasively deliver patterned cutaneous stimulation to the face and hand in order to study the neuromagnetic response adaptation patterns within the primary somatosensory cortex (S1) in young adult humans. Individual TAC-Cells were positioned on the glabrous surface of the right hand, and midline of the upper and lower lip vermilion. A 151-channel magnetoencephalography (MEG) scanner was used to record the cortical response to a novel tactile stimulus which consisted of a repeating 6-pulse train delivered at three different frequencies through the active membrane surface of the TAC-Cell. The evoked activity in S1 (contralateral for hand stimulation, and bilateral for lip stimulation) was characterized from the best-fit dipoles of the earliest prominent response component. The S1 responses manifested significant modulation and adaptation as a function of the frequency of the punctate pneumatic stimulus trains and stimulus site (glabrous lip versus glabrous hand).


Subjects
Adaptation, Physiological/physiology , Hand/innervation , Lip/innervation , Physical Stimulation/instrumentation , Somatosensory Cortex/physiology , Touch/physiology , Adult , Analysis of Variance , Evoked Potentials, Somatosensory/physiology , Female , Functional Laterality/physiology , Humans , Magnetic Resonance Imaging/methods , Magnetoencephalography , Physical Stimulation/methods , Reaction Time/physiology , Time Factors , Young Adult