Results 1 - 20 of 30
1.
J Cogn Neurosci ; 32(11): 2145-2158, 2020 11 01.
Article in English | MEDLINE | ID: mdl-32662723

ABSTRACT

When speech perception is difficult, one way listeners adjust is by reconfiguring phoneme category boundaries, drawing on contextual information. Both lexical knowledge and lipreading cues are used in this way, but it remains unknown whether these two differing forms of perceptual learning are similar at a neural level. This study compared phoneme boundary adjustments driven by lexical or audiovisual cues, using ultra-high-field 7-T fMRI. During imaging, participants heard exposure stimuli and test stimuli. Exposure stimuli for lexical retuning were audio recordings of words, and those for audiovisual recalibration were audio-video recordings of lip movements during utterances of pseudowords. Test stimuli were ambiguous phonetic strings presented without context, and listeners reported what phoneme they heard. Reports reflected phoneme biases in preceding exposure blocks (e.g., more reported /p/ after /p/-biased exposure). Analysis of corresponding brain responses indicated that both forms of cue use were associated with a network of activity across the temporal cortex, plus parietal, insula, and motor areas. Audiovisual recalibration also elicited significant occipital cortex activity despite the lack of visual stimuli. Activity levels in several ROIs also covaried with strength of audiovisual recalibration, with greater activity accompanying larger recalibration shifts. Similar activation patterns appeared for lexical retuning, but here, no significant ROIs were identified. Audiovisual and lexical forms of perceptual learning thus induce largely similar brain response patterns. However, audiovisual recalibration involves additional visual cortex contributions, suggesting that previously acquired visual information (on lip movements) is retrieved and deployed to disambiguate auditory perception.
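As a concrete illustration of how such a recalibration shift can be quantified behaviorally, the sketch below compares the proportion of /p/ reports for ambiguous test items after /p/-biased versus /t/-biased exposure. The response data and the specific measure are assumptions for illustration; the abstract does not give the study's exact formula.

```python
# Hypothetical illustration of quantifying phoneme-boundary recalibration
# from test-block responses; values and measure are invented.
import numpy as np

# 1 = reported /p/, 0 = reported /t/, for ambiguous test items
responses_after_p_bias = np.array([1, 1, 0, 1, 1, 1, 0, 1])
responses_after_t_bias = np.array([0, 1, 0, 0, 1, 0, 0, 0])

# Recalibration strength: shift in the proportion of /p/ reports
# between the two exposure conditions.
shift = responses_after_p_bias.mean() - responses_after_t_bias.mean()
print(f"recalibration shift (proportion /p/): {shift:.2f}")
```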


Subjects
Phonetics, Speech Perception, Auditory Perception/physiology, Humans, Learning, Lipreading, Speech Perception/physiology
2.
Neuroimage ; 179: 326-336, 2018 10 01.
Article in English | MEDLINE | ID: mdl-29936308

ABSTRACT

Speaking is a complex motor skill that requires near-instantaneous integration of sensory and motor-related information. Current theory hypothesizes a complex interplay between motor and auditory processes during speech production, involving the online comparison of the speech output with an internally generated forward model. To examine the neural correlates of this intricate interplay between sensory and motor processes, the current study uses altered auditory feedback (AAF) in combination with magnetoencephalography (MEG). Participants vocalized the vowel /e/ and heard auditory feedback that was temporarily pitch-shifted by only 25 cents, while neural activity was recorded with MEG. As a control condition, participants also heard recordings of the same auditory feedback that they had heard in the first half of the experiment, now without vocalizing. The participants were not aware of any perturbation of the auditory feedback. We found that auditory cortical areas responded more strongly to the pitch shifts during vocalization. In addition, auditory feedback perturbation resulted in spectral power increases in the θ and lower β bands, predominantly in sensorimotor areas. These results are in line with current models of speech production, suggesting that auditory cortical areas are involved in an active comparison between a forward model's prediction and the actual sensory input. Subsequently, these areas interact with motor areas to generate a motor response. Furthermore, the results suggest that θ and β power increases support auditory-motor interaction, motor error detection, and/or sensory prediction processing.
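For orientation, a 25-cent perturbation is a very small change in fundamental frequency; the relation between cents and frequency ratio follows directly from the definition of the cent (1200 cents per octave). A minimal sketch, with an assumed baseline F0 purely for illustration:

```python
# Relation between a pitch shift in cents and the corresponding
# frequency ratio (not the authors' code).
cents = 25                    # perturbation size used in the study
ratio = 2 ** (cents / 1200)   # 100 cents = 1 semitone, 1200 cents = 1 octave
f0 = 120.0                    # assumed baseline F0 in Hz, for illustration only
print(f"ratio: {ratio:.4f}, shifted F0: {f0 * ratio:.2f} Hz")
# ratio is about 1.0145, i.e. a change of under 1.5 percent,
# small enough that participants did not notice it
```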


Subjects
Cerebral Cortex/physiology, Sensory Feedback/physiology, Speech/physiology, Acoustic Stimulation, Adult, Female, Humans, Magnetoencephalography, Male, Pitch Perception/physiology, Young Adult
3.
J Acoust Soc Am ; 142(4): 2007, 2017 10.
Article in English | MEDLINE | ID: mdl-29092613

ABSTRACT

An important part of understanding speech motor control consists of capturing the interaction between speech production and speech perception. This study tests a prediction of theoretical frameworks that have tried to account for these interactions: If speech production targets are specified in auditory terms, individuals with better auditory acuity should have more precise speech targets, evidenced by decreased within-phoneme variability and increased between-phoneme distance. A study was carried out consisting of perception and production tasks in counterbalanced order. Auditory acuity was assessed using an adaptive speech discrimination task, while production variability was determined using a pseudo-word reading task. Analyses of the production data were carried out to quantify average within-phoneme variability, as well as average between-phoneme contrasts. Results show that individuals not only vary in their production and perceptual abilities, but that better discriminators have more distinctive vowel production targets, that is, targets with less within-phoneme variability and greater between-phoneme distances, confirming the initial hypothesis. This association between speech production and perception did not depend on local phoneme density in vowel space. This study suggests that better auditory acuity leads to more precise speech production targets, which may be a consequence of auditory feedback affecting speech production over time.
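A minimal sketch (not the authors' analysis code) of the two production measures named above, computed on invented F1/F2 values, may help make the definitions concrete:

```python
# Within-phoneme variability and between-phoneme distance in formant space,
# computed on hypothetical F1/F2 values (Hz); functions and data are illustrative.
import numpy as np

def within_phoneme_variability(tokens):
    """Mean Euclidean distance of tokens (n x 2 array of F1, F2) to their centroid."""
    centroid = tokens.mean(axis=0)
    return np.linalg.norm(tokens - centroid, axis=1).mean()

def between_phoneme_distance(tokens_a, tokens_b):
    """Euclidean distance between the centroids of two phoneme categories."""
    return np.linalg.norm(tokens_a.mean(axis=0) - tokens_b.mean(axis=0))

eh = np.array([[550, 1800], [570, 1750], [540, 1820]], dtype=float)  # invented /E/ tokens
ih = np.array([[400, 2000], [420, 1950], [390, 2050]], dtype=float)  # invented /I/ tokens
print(within_phoneme_variability(eh), between_phoneme_distance(eh, ih))
```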


Subjects
Phonetics, Speech Perception, Speech/physiology, Female, Humans, Male, Motor Activity, Regression Analysis, Young Adult
4.
J Neurosci ; 33(26): 10688-97, 2013 Jun 26.
Article in English | MEDLINE | ID: mdl-23804092

ABSTRACT

Listeners show a remarkable ability to quickly adjust to degraded speech input. Here, we aimed to identify the neural mechanisms of such short-term perceptual adaptation. In a sparse-sampling, cardiac-gated functional magnetic resonance imaging (fMRI) acquisition, human listeners heard and repeated back 4-band-vocoded sentences (in which the temporal envelope of the acoustic signal is preserved, while spectral information is highly degraded). Clear-speech trials were included as baseline. An additional fMRI experiment on amplitude modulation rate discrimination quantified the convergence of neural mechanisms that subserve coping with challenging listening conditions for speech and non-speech. First, the degraded speech task revealed an "executive" network (comprising the anterior insula and anterior cingulate cortex), parts of which were also activated in the non-speech discrimination task. Second, trial-by-trial fluctuations in successful comprehension of degraded speech drove hemodynamic signal change in classic "language" areas (bilateral temporal cortices). Third, as listeners perceptually adapted to degraded speech, downregulation in a cortico-striato-thalamo-cortical circuit was observable. The present data highlight differential upregulation and downregulation in auditory-language and executive networks, respectively, with important subcortical contributions when successfully adapting to a challenging listening situation.
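For readers unfamiliar with noise-band vocoding, the sketch below shows the general idea behind 4-band vocoded speech: the temporal envelope of each frequency band is kept and used to modulate band-limited noise, discarding fine spectral detail. Band edges and filter settings are illustrative assumptions, not the study's parameters:

```python
# Minimal noise-band vocoder sketch (illustrative, not the study's stimuli code).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode(signal, fs, edges=(100, 562, 1780, 4430, 7000)):
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal)
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
        band = filtfilt(b, a, signal)                     # analysis band
        env = np.abs(hilbert(band))                       # temporal envelope
        carrier = filtfilt(b, a, rng.standard_normal(len(signal)))
        out += env * carrier                              # envelope-modulated noise
    return out

fs = 16000
t = np.arange(fs) / fs
demo = np.sin(2 * np.pi * 220 * t)                        # placeholder "speech" signal
vocoded = noise_vocode(demo, fs)
```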


Subjects
Psychological Adaptation/physiology, Auditory Perception/physiology, Brain/physiology, Acoustic Stimulation, Adult, Comprehension/physiology, Psychological Discrimination/physiology, Female, Functional Laterality/physiology, Humans, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Male, Neostriatum/physiology, Nerve Net/physiology, Noise, Oxygen/blood, Psychomotor Performance/physiology, Speech Perception/physiology, Speech Production Measurement, Thalamus/physiology, Young Adult
5.
Nat Rev Neurosci ; 10(4): 295-302, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19277052

ABSTRACT

The motor theory of speech perception assumes that activation of the motor system is essential in the perception of speech. However, deficits in speech perception and comprehension do not arise from damage that is restricted to the motor cortex, few functional imaging studies reveal activity in the motor cortex during speech perception, and the motor cortex is strongly activated by many different sound categories. Here, we evaluate alternative roles for the motor cortex in spoken communication and suggest a specific role in sensorimotor processing in conversation. We argue that motor cortex activation is essential in joint speech, particularly for the timing of turn taking.


Subjects
Brain Mapping, Communication, Motor Cortex/physiology, Speech Perception/physiology, Nerve Net/physiology, Speech/physiology
6.
J Cogn Neurosci ; 25(11): 1875-86, 2013 Nov.
Article in English | MEDLINE | ID: mdl-23691984

ABSTRACT

Historically, the study of human identity perception has focused on faces, but the voice is also central to our expressions and experiences of identity [Belin, P., Fecteau, S., & Bedard, C. Thinking the voice: Neural correlates of voice perception. Trends in Cognitive Sciences, 8, 129-135, 2004]. Our voices are highly flexible and dynamic; talkers speak differently, depending on their health, emotional state, and the social setting, as well as extrinsic factors such as background noise. However, to date, there have been no studies of the neural correlates of identity modulation in speech production. In the current fMRI experiment, we measured the neural activity supporting controlled voice change in adult participants performing spoken impressions. We reveal that deliberate modulation of vocal identity recruits the left anterior insula and inferior frontal gyrus, supporting the planning of novel articulations. Bilateral sites in posterior superior temporal/inferior parietal cortex and a region in right middle/anterior STS showed greater responses during the emulation of specific vocal identities than for impressions of generic accents. Using functional connectivity analyses, we describe roles for these three sites in their interactions with the brain regions supporting speech planning and production. Our findings mark a significant step toward understanding the neural control of vocal identity, with wider implications for the cognitive control of voluntary motor acts.


Subjects
Imitative Behavior/physiology, Prefrontal Cortex/physiology, Speech/physiology, Temporal Lobe/physiology, Voice/physiology, Acoustics, Adult, Analysis of Variance, Brain Mapping, Statistical Data Interpretation, Female, Functional Laterality/physiology, Humans, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Male, Nerve Net/physiology, Psychophysiology
7.
Proc Natl Acad Sci U S A ; 107(6): 2408-12, 2010 Feb 09.
Article in English | MEDLINE | ID: mdl-20133790

ABSTRACT

Emotional signals are crucial for sharing important information with conspecifics, for example, to warn humans of danger. Humans use a range of different cues to communicate to others how they feel, including facial, vocal, and gestural signals. We examined the recognition of nonverbal emotional vocalizations, such as screams and laughs, across two dramatically different cultural groups. Western participants were compared to individuals from remote, culturally isolated Namibian villages. Vocalizations communicating the so-called "basic emotions" (anger, disgust, fear, joy, sadness, and surprise) were bidirectionally recognized. In contrast, a set of additional emotions was only recognized within, but not across, cultural boundaries. Our findings indicate that a number of primarily negative emotions have vocalizations that can be recognized across cultures, while most positive emotions are communicated with culture-specific signals.


Subjects
Emotions, Facial Expression, Recognition (Psychology)/physiology, Voice/physiology, Adult, Cross-Cultural Comparison, Cues (Psychology), Female, Humans, Language, Male, Psychomotor Performance/physiology, Reaction Time/physiology, Visual Perception/physiology
8.
J Neurosci ; 30(21): 7179-86, 2010 May 26.
Article in English | MEDLINE | ID: mdl-20505085

ABSTRACT

This study investigated the neural plasticity associated with perceptual learning of a cochlear implant (CI) simulation. Normal-hearing listeners were trained with vocoded and spectrally shifted speech simulating a CI while cortical responses were measured with functional magnetic resonance imaging (fMRI). A condition in which the vocoded speech was spectrally inverted provided a control for learnability and adaptation. Behavioral measures showed considerable individual variability both in the ability to learn to understand the degraded speech, and in phonological working memory capacity. Neurally, left-lateralized regions in superior temporal sulcus and inferior frontal gyrus (IFG) were sensitive to the learnability of the simulations, but only the activity in prefrontal cortex correlated with interindividual variation in intelligibility scores and phonological working memory. A region in left angular gyrus (AG) showed an activation pattern that reflected learning over the course of the experiment, and covariation of activity in AG and IFG was modulated by the learnability of the stimuli. These results suggest that variation in listeners' ability to adjust to vocoded and spectrally shifted speech is partly reflected in differences in the recruitment of higher-level language processes in prefrontal cortex, and that this variability may further depend on functional links between the left inferior frontal gyrus and angular gyrus. Differences in the engagement of left inferior prefrontal cortex, and its covariation with posterior parietal areas, may thus underlie some of the variation in speech perception skills that have been observed in clinical populations of CI users.
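Building on the same vocoder idea, the spectral shifting and spectral inversion described above can be sketched as remappings between analysis bands and noise-carrier bands: each envelope drives a carrier one band higher (shifted), or the band order is reversed (inverted control). All band edges and helper names below are illustrative assumptions, not the study's parameters:

```python
# Sketch of spectrally shifted vs. spectrally inverted vocoding (illustrative only).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band(signal, fs, lo, hi):
    b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, signal)

def remap_vocode(signal, fs, edges, mapping):
    """Drive noise-carrier band mapping[i] with the envelope of analysis band i."""
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(signal))
    out = np.zeros_like(signal)
    for i, j in enumerate(mapping):
        env = np.abs(hilbert(band(signal, fs, edges[i], edges[i + 1])))
        out += env * band(noise, fs, edges[j], edges[j + 1])
    return out

fs, edges = 16000, (100, 300, 700, 1500, 3100, 6300)
t = np.arange(fs) / fs
demo = np.sin(2 * np.pi * 220 * t)                              # placeholder signal
shifted = remap_vocode(demo, fs, edges, mapping=[1, 2, 3, 4])   # upward spectral shift
inverted = remap_vocode(demo, fs, edges, mapping=[4, 3, 2, 1, 0])  # inverted control
```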


Subjects
Cochlear Implants, Frontal Lobe/physiology, Individuality, Learning/physiology, Speech Perception/physiology, Acoustic Stimulation/methods, Physiological Adaptation/physiology, Adult, Female, Frontal Lobe/blood supply, Functional Laterality, Humans, Computer-Assisted Image Processing/methods, Magnetic Resonance Imaging/methods, Male, Noise, Oxygen/blood, Predictive Value of Tests, Semantics, Sound Spectrography, Spectrum Analysis, Young Adult
9.
J Cogn Neurosci ; 23(4): 961-77, 2011 Apr.
Article in English | MEDLINE | ID: mdl-20350182

ABSTRACT

This study investigated links between working memory and speech processing systems. We used delayed pseudoword repetition in fMRI to investigate the neural correlates of sublexical structure in phonological working memory (pWM). We orthogonally varied the number of syllables and consonant clusters in auditory pseudowords and measured the neural responses to these manipulations under conditions of covert rehearsal (Experiment 1). A left-dominant network of temporal and motor cortex showed increased activity for longer items, with motor cortex only showing greater activity concomitant with adding consonant clusters. An individual-differences analysis revealed a significant positive relationship between activity in the angular gyrus and the hippocampus, and accuracy on pseudoword repetition. As models of pWM stipulate that its neural correlates should be activated during both perception and production/rehearsal [Buchsbaum, B. R., & D'Esposito, M. The search for the phonological store: From loop to convolution. Journal of Cognitive Neuroscience, 20, 762-778, 2008; Jacquemot, C., & Scott, S. K. What is the relationship between phonological short-term memory and speech processing? Trends in Cognitive Sciences, 10, 480-486, 2006; Baddeley, A. D., & Hitch, G. Working memory. In G. H. Bower (Ed.), The psychology of learning and motivation: Advances in research and theory (Vol. 8, pp. 47-89). New York: Academic Press, 1974], we further assessed the effects of the two factors in a separate passive listening experiment (Experiment 2). In this experiment, the effect of the number of syllables was concentrated in posterior-medial regions of the supratemporal plane bilaterally, although there was no evidence of a significant response to added clusters. Taken together, the results identify the planum temporale as a key region in pWM; within this region, representations are likely to take the form of auditory or audiomotor "templates" or "chunks" at the level of the syllable [Papoutsi, M., de Zwart, J. A., Jansma, J. M., Pickering, M. J., Bednar, J. A., & Horwitz, B. From phonemes to articulatory codes: an fMRI study of the role of Broca's area in speech production. Cerebral Cortex, 19, 2156-2165, 2009; Warren, J. E., Wise, R. J. S., & Warren, J. D. Sounds do-able: auditory-motor transformations and the posterior temporal plane. Trends in Neurosciences, 28, 636-643, 2005; Griffiths, T. D., & Warren, J. D. The planum temporale as a computational hub. Trends in Neurosciences, 25, 348-353, 2002], whereas more lateral structures on the STG may deal with phonetic analysis of the auditory input [Hickok, G. The functional neuroanatomy of language. Physics of Life Reviews, 6, 121-143, 2009].


Subjects
Brain Mapping, Brain/physiology, Short-Term Memory/physiology, Phonetics, Acoustic Stimulation/methods, Adult, Analysis of Variance, Brain/anatomy & histology, Brain/blood supply, Female, Functional Laterality, Humans, Computer-Assisted Image Processing/methods, Linguistics, Magnetic Resonance Imaging/methods, Male, Oxygen/blood, Reaction Time/physiology, Young Adult
11.
Trends Cogn Sci ; 13(1): 14-9, 2009 Jan.
Article in English | MEDLINE | ID: mdl-19070534

ABSTRACT

Speech perception requires the decoding of complex acoustic patterns. According to most cognitive models of spoken word recognition, this complexity is dealt with before lexical access via a process of abstraction from the acoustic signal to pre-lexical categories. It is currently unclear how these categories are implemented in the auditory cortex. Recent advances in animal neurophysiology and human functional imaging have made it possible to investigate the processing of speech in terms of probabilistic cortical maps rather than simple cognitive subtraction, which will enable us to relate neurometric data more directly to behavioural studies. We suggest that integration of insights from cognitive science, neurophysiology and functional imaging is necessary for furthering our understanding of pre-lexical abstraction in the cortex.


Subjects
Auditory Cortex/physiology, Recognition (Psychology), Speech Perception, Speech/physiology, Humans, Linguistics
13.
Psychon Bull Rev ; 27(4): 707-715, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32319002

ABSTRACT

When listeners experience difficulty in understanding a speaker, lexical and audiovisual (or lipreading) information can be a helpful source of guidance. Both types of information embedded in speech can also guide perceptual adjustment, known as recalibration or perceptual retuning. Through retuning or recalibration, listeners can use these contextual cues to temporarily or permanently reconfigure internal representations of phoneme categories and thus adjust to and understand novel interlocutors more easily. These two types of perceptual learning, previously investigated largely separately, are highly similar in allowing listeners to use speech-external information to make phoneme boundary adjustments. This study explored whether the two sources may work in conjunction to induce adaptation, thus emulating real life, in which listeners are indeed likely to encounter both types of cue together. Listeners who received combined audiovisual and lexical cues showed perceptual learning effects similar to listeners who received only audiovisual cues, while listeners who received only lexical cues showed weaker effects compared with the two other groups. The combination of cues did not lead to additive retuning or recalibration effects, suggesting that lexical and audiovisual cues operate differently with regard to how listeners use them for reshaping perceptual categories. Reaction times did not significantly differ across the three conditions, so none of the forms of adjustment were either aided or hindered by processing time differences. Mechanisms underlying these forms of perceptual learning may diverge in numerous ways despite similarities in experimental applications.


Subjects
Psychological Adaptation, Cues (Psychology), Lipreading, Phonetics, Speech Perception, Visual Perception, Vocabulary, Adult, Comprehension, Female, Humans, Learning, Male, Reaction Time, Young Adult
14.
Atten Percept Psychophys ; 82(4): 2018-2026, 2020 May.
Article in English | MEDLINE | ID: mdl-31970708

ABSTRACT

To adapt to situations in which speech perception is difficult, listeners can adjust boundaries between phoneme categories using perceptual learning. Such adjustments can draw on lexical information in surrounding speech, or on visual cues via speech-reading. In the present study, listeners were able to flexibly adjust the boundary between two plosive (stop) consonants, /p/ and /t/, using both lexical and speech-reading information within the same experimental design for both cue types. Videos of a speaker pronouncing pseudo-words and audio recordings of Dutch words were presented in alternating blocks of either stimulus type. Listeners were able to switch between cues to adjust phoneme boundaries, and the resulting effects were comparable to results from listeners receiving only a single source of information. Overall, audiovisual cues (i.e., the videos) produced the stronger effects, commensurate with their applicability for adapting to noisy environments. Lexical cues were able to induce effects with fewer exposure stimuli and a changing phoneme bias, in a design unlike most prior studies of lexical retuning. While lexical retuning effects were weaker than audiovisual recalibration effects, this discrepancy could reflect that lexical retuning is more suited to adapting to speakers than to environments. Nonetheless, the presence of lexical retuning effects suggests that retuning can be induced more quickly than previously shown. In general, this technique has further illuminated the robustness of adaptability in speech perception, and it offers the potential to enable further comparisons across differing forms of perceptual learning.


Subjects
Auditory Perception, Phonetics, Speech Perception, Humans, Language, Lipreading, Speech
15.
J Exp Psychol Learn Mem Cogn ; 46(1): 189-199, 2020 Jan.
Article in English | MEDLINE | ID: mdl-30883166

ABSTRACT

Learning new words entails, inter alia, encoding of novel sound patterns and transferring those patterns from short-term to long-term memory. We report a series of 5 experiments that investigated whether the memory systems engaged in word learning are specialized for speech and whether utilization of these systems results in a benefit for word learning. Sine-wave synthesis (SWS) was applied to spoken nonwords, and listeners were or were not informed (through instruction and familiarization) that the SWS stimuli were derived from actual utterances. This allowed us to manipulate whether listeners would process sound sequences as speech or as nonspeech. In a sound-picture association learning task, listeners who processed the SWS stimuli as speech consistently learned faster and remembered more associations than listeners who processed the same stimuli as nonspeech. The advantage of listening in "speech mode" was stable over the course of 7 days. These results provide causal evidence that access to a specialized, phonological short-term memory system is important for word learning. More generally, this study supports the notion that subsystems of auditory short-term memory are specialized for processing different types of acoustic information. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
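For readers unfamiliar with sine-wave synthesis, the sketch below shows the basic construction: the speech signal is replaced by a few sinusoids that follow the formant tracks. The tracks here are invented for illustration; real SWS derives them from formant analysis of the recordings:

```python
# Minimal sine-wave synthesis sketch (illustrative, not the study's stimulus code).
import numpy as np

def sws(formant_tracks, amp_tracks, fs):
    """Sum of sinusoids with time-varying frequency, built by phase accumulation."""
    out = np.zeros(formant_tracks.shape[1])
    for freqs, amps in zip(formant_tracks, amp_tracks):
        phase = 2 * np.pi * np.cumsum(freqs) / fs
        out += amps * np.sin(phase)
    return out

fs, dur = 16000, 0.5
n = int(fs * dur)
f1 = np.linspace(300, 700, n)        # invented rising "F1" track
f2 = np.linspace(2200, 1100, n)      # invented falling "F2" track
f3 = np.full(n, 2500.0)              # invented flat "F3" track
tracks = np.vstack([f1, f2, f3])
amps = np.vstack([np.full(n, 0.5), np.full(n, 0.3), np.full(n, 0.1)])
replica = sws(tracks, amps, fs)
```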


Subjects
Auditory Perception/physiology, Short-Term Memory/physiology, Psycholinguistics, Speech/physiology, Verbal Learning/physiology, Adolescent, Adult, Female, Humans, Male, Visual Pattern Recognition/physiology, Speech Perception/physiology, Young Adult
16.
Emotion ; 20(8): 1435-1445, 2020 Dec.
Article in English | MEDLINE | ID: mdl-31478724

ABSTRACT

Are emotional expressions shaped by specialized innate mechanisms that guide learning, or do they develop exclusively from learning without innate preparedness? Here we test whether nonverbal affective vocalizations produced by bilaterally congenitally deaf adults contain emotional information that is recognizable to naive listeners. Because these deaf individuals have had no opportunity for auditory learning, the presence of such an association would imply that mappings between emotions and vocalizations are buffered against the absence of input that is typically important for their development and thus at least partly innate. We recorded nonverbal vocalizations expressing 9 emotions from 8 deaf individuals (435 tokens) and 8 matched hearing individuals (536 tokens). These vocalizations were submitted to an acoustic analysis and used in a recognition study in which naive listeners (n = 812) made forced-choice judgments. Our results show that naive listeners can reliably infer many emotional states from nonverbal vocalizations produced by deaf individuals. In particular, deaf vocalizations of fear, disgust, sadness, amusement, sensual pleasure, surprise, and relief were recognized at better-than-chance levels, whereas anger and achievement/triumph vocalizations were not. Differences were found on most acoustic features of the vocalizations produced by deaf as compared with hearing individuals. Our results suggest that there is an innate component to the associations between human emotions and vocalizations. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
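A minimal sketch of a better-than-chance test for this kind of forced-choice recognition data (with 9 response options, chance is 1/9). The counts are hypothetical, not the study's data:

```python
# One-sided binomial test against chance performance in a 9-alternative
# forced-choice task; counts are invented for illustration.
from scipy.stats import binomtest

n_trials = 90          # hypothetical number of judgments for one emotion
n_correct = 22         # hypothetical number of correct choices
result = binomtest(n_correct, n_trials, p=1/9, alternative="greater")
print(f"p = {result.pvalue:.4f}")   # small p: recognition exceeds chance
```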


Subjects
Auditory Perception/physiology, Emotions/physiology, Adult, Aged, Female, Humans, Male, Middle Aged
17.
J Neurosci ; 28(32): 8116-23, 2008 Aug 06.
Article in English | MEDLINE | ID: mdl-18685036

ABSTRACT

Speech comprehension has been shown to be a strikingly bilateral process, but the differential contributions of the subfields of left and right auditory cortices have remained elusive. The hypothesis that left auditory areas engage predominantly in decoding fast temporal perturbations of a signal whereas the right areas are relatively more driven by changes of the frequency spectrum has not been directly tested in speech or music. This brain-imaging study independently manipulated the speech signal itself along the spectral and the temporal domain using noise-band vocoding. In a parametric design with five temporal and five spectral degradation levels in word comprehension, a functional distinction of the left and right auditory association cortices emerged: increases in the temporal detail of the signal were most effective in driving brain activation of the left anterolateral superior temporal sulcus (STS), whereas the right homolog areas exhibited stronger sensitivity to the variations in spectral detail. In accordance with behavioral measures of speech comprehension acquired in parallel, change of spectral detail exhibited a stronger coupling with the STS BOLD signal. The relative pattern of lateralization (quantified using lateralization quotients) proved reliable in a jack-knifed iterative reanalysis of the group functional magnetic resonance imaging model. This study supplies direct evidence to the often implied functional distinction of the two cerebral hemispheres in speech processing. Applying direct manipulations to the speech signal rather than to low-level surrogates, the results lend plausibility to the notion of complementary roles for the left and right superior temporal sulci in comprehending the speech signal.
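The lateralization quotient referred to above is commonly defined as (L - R) / (L + R) over some measure of left- and right-hemisphere activity; whether the study used exactly this form is not stated in the abstract. A minimal sketch with invented voxel counts:

```python
# Common lateralization quotient definition; the counts are invented.
def lateralization_quotient(left, right):
    return (left - right) / (left + right)

# e.g. suprathreshold voxels in left vs. right anterolateral STS for one contrast
print(lateralization_quotient(left=420, right=610))   # negative value: right-lateralized
```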


Subjects
Auditory Cortex/physiology, Comprehension/physiology, Functional Laterality/physiology, Speech Intelligibility, Speech Perception/physiology, Time Perception/physiology, Adult, Cerebrovascular Circulation, Cerebral Dominance/physiology, Female, Humans, Magnetic Resonance Imaging, Male, Oxygen/blood, Phonetics, Pilot Projects, Temporal Lobe/physiology
18.
Neuropsychologia ; 47(1): 123-31, 2009 Jan.
Article in English | MEDLINE | ID: mdl-18765243

ABSTRACT

Phonagnosia, the inability to recognize familiar voices, has been studied in brain-damaged patients but no cases due to developmental problems have been reported. Here we describe the case of KH, a 60-year-old active professional woman who reports that she has always experienced severe voice recognition difficulties. Her hearing abilities are normal, and an MRI scan showed no evidence of brain damage in regions associated with voice or auditory perception. To better understand her condition and to assess models of voice and high-level auditory processing, we tested KH on behavioural tasks measuring voice recognition, recognition of vocal emotions, face recognition, speech perception, and processing of environmental sounds and music. KH was impaired on tasks requiring the recognition of famous voices and the learning and recognition of new voices. In contrast, she performed well on nearly all other tasks. Her case is the first report of developmental phonagnosia, and the results suggest that the recognition of a speaker's vocal identity depends on separable mechanisms from those used to recognize other information from the voice or non-vocal auditory stimuli.


Subjects
Agnosia/physiopathology, Auditory Perception, Recognition (Psychology)/physiology, Voice, Acoustic Stimulation, Agnosia/pathology, Psychological Discrimination, Disease Progression, Facial Expression, Female, Humans, Magnetic Resonance Imaging/methods, Middle Aged, Neuropsychological Tests, Visual Pattern Recognition/physiology, Photic Stimulation
19.
Q J Exp Psychol (Hove) ; 72(10): 2371-2379, 2019 Oct.
Article in English | MEDLINE | ID: mdl-30836818

ABSTRACT

Previous research on the effect of perturbed auditory feedback in speech production has focused on two types of responses. In the short term, speakers generate compensatory motor commands in response to unexpected perturbations. In the longer term, speakers adapt feedforward motor programmes in response to feedback perturbations, to avoid future errors. The current study investigated the relation between these two types of responses to altered auditory feedback. Specifically, it was hypothesised that consistency in previous feedback perturbations would influence whether speakers adapt their feedforward motor programmes. In an altered auditory feedback paradigm, formant perturbations were applied either across all trials (the consistent condition) or only to some trials, whereas the others remained unperturbed (the inconsistent condition). The results showed that speakers' responses were affected by feedback consistency, with stronger speech changes in the consistent condition compared with the inconsistent condition. Current models of speech-motor control can explain this consistency effect. However, the data also suggest that compensation and adaptation are distinct processes, a distinction that is not captured by all current models.
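One common way to separate the two response types is sketched below with invented F1 values: compensation is the opposing change produced while feedback is perturbed, and adaptation is the after-effect that persists once feedback returns to normal. Whether the study used exactly these definitions is not stated in the abstract:

```python
# Hypothetical quantification of compensation vs. adaptation; values are invented.
import numpy as np

baseline_f1 = np.array([700, 705, 698, 702], dtype=float)    # pre-perturbation trials
perturbed_f1 = np.array([690, 680, 676, 672], dtype=float)   # trials with F1 feedback shifted up
washout_f1 = np.array([688, 690, 693, 696], dtype=float)     # trials after feedback is restored

compensation = baseline_f1.mean() - perturbed_f1.mean()   # within-perturbation change
adaptation = baseline_f1.mean() - washout_f1.mean()       # persisting after-effect
print(f"compensation: {compensation:.1f} Hz, adaptation: {adaptation:.1f} Hz")
```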


Subjects
Sensory Feedback/physiology, Motor Activity/physiology, Speech Perception/physiology, Speech/physiology, Adult, Female, Humans, Male, Young Adult
20.
Sci Adv ; 5(9): eaax0262, 2019 09.
Article in English | MEDLINE | ID: mdl-31555732

ABSTRACT

Learning to read is associated with the appearance of an orthographically sensitive brain region known as the visual word form area. It has been claimed that development of this area proceeds by impinging upon territory otherwise available for the processing of culturally relevant stimuli such as faces and houses. In a large-scale functional magnetic resonance imaging study of a group of individuals of varying degrees of literacy (from completely illiterate to highly literate), we examined cortical responses to orthographic and nonorthographic visual stimuli. We found that literacy enhances responses to other visual input in early visual areas and enhances representational similarity between text and faces, without reducing the extent of response to nonorthographic input. Thus, acquisition of literacy in childhood recycles existing object representation mechanisms but without destructive competition.
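One common way to operationalize "representational similarity between text and faces" is the correlation between voxel-wise response patterns for the two stimulus categories; whether the study used exactly this measure is not specified in the abstract. A minimal sketch with placeholder patterns:

```python
# Pattern-similarity sketch with random placeholder data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
text_pattern = rng.standard_normal(500)                        # 500 voxels, text condition
face_pattern = 0.4 * text_pattern + rng.standard_normal(500)   # partially shared pattern

similarity = np.corrcoef(text_pattern, face_pattern)[0, 1]
print(f"pattern similarity r = {similarity:.2f}")
```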


Subjects
Learning/physiology, Magnetic Resonance Imaging, Photic Stimulation, Visual Cortex, Adult, Female, Humans, Male, Visual Cortex/diagnostic imaging, Visual Cortex/physiology