Results 1 - 20 of 24
1.
Cereb Cortex ; 32(11): 2447-2468, 2022 05 31.
Article in English | MEDLINE | ID: mdl-34585723

ABSTRACT

It is assumed that there is a static set of "language regions" in the brain. Yet, language comprehension engages regions well beyond these, and patients regularly produce familiar "formulaic" expressions when language regions are severely damaged. These findings suggest that the neurobiology of language is not fixed but varies with experiences, like the extent of word sequence learning. We hypothesized that perceiving overlearned sentences is supported by speech production and not putative language regions. Participants underwent 2 sessions of behavioral testing and functional magnetic resonance imaging (fMRI). During the intervening 15 days, they repeated 2 sentences 30 times each, twice a day. In both fMRI sessions, they "passively" listened to those sentences, novel sentences, and produced sentences. Behaviorally, evidence for overlearning included a 2.1-s decrease in reaction times to predict the final word in overlearned sentences. This corresponded to the recruitment of sensorimotor regions involved in sentence production, inactivation of temporal and inferior frontal regions involved in novel sentence listening, and a 45% change in global network organization. Thus, there was a profound whole-brain reorganization following sentence overlearning, out of "language" and into sensorimotor regions. The latter are generally preserved in aphasia and Alzheimer's disease, perhaps explaining residual abilities with formulaic expressions in both.


Subjects
Language, Speech Perception, Brain Mapping, Comprehension/physiology, Humans, Magnetic Resonance Imaging/methods, Overlearning, Speech/physiology, Speech Perception/physiology
2.
Proc Natl Acad Sci U S A ; 117(51): 32791-32798, 2020 12 22.
Article in English | MEDLINE | ID: mdl-33293422

ABSTRACT

It is well established that speech perception is improved when we are able to see the speaker talking along with hearing their voice, especially when the speech is noisy. While we have a good understanding of where speech integration occurs in the brain, it is unclear how visual and auditory cues are combined to improve speech perception. One suggestion is that integration can occur as both visual and auditory cues arise from a common generator: the vocal tract. Here, we investigate whether facial and vocal tract movements are linked during speech production by comparing videos of the face and fast magnetic resonance (MR) image sequences of the vocal tract. The joint variation in the face and vocal tract was extracted using an application of principal components analysis (PCA), and we demonstrate that MR image sequences can be reconstructed with high fidelity using only the facial video and PCA. Reconstruction fidelity was significantly higher when images from the two sequences corresponded in time, and including implicit temporal information by combining contiguous frames also led to a significant increase in fidelity. A "Bubbles" technique was used to identify which areas of the face were important for recovering information about the vocal tract, and vice versa, on a frame-by-frame basis. Our data reveal that there is sufficient information in the face to recover vocal tract shape during speech. In addition, the facial and vocal tract regions that are important for reconstruction are those that are used to generate the acoustic speech signal.
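The core idea of the abstract above — extracting joint variation with PCA and reconstructing one modality from the other — can be sketched with synthetic arrays. This is a toy stand-in under stated assumptions, not the authors' pipeline: all dimensions, variable names, and the simulated data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for paired recordings: rows are time frames, columns are
# pixel features of the face video and of the MR vocal-tract images.
n_frames, n_face, n_vt = 200, 50, 40
latent = rng.normal(size=(n_frames, 5))  # shared articulation signal
face = latent @ rng.normal(size=(5, n_face)) + 0.1 * rng.normal(size=(n_frames, n_face))
vt = latent @ rng.normal(size=(5, n_vt)) + 0.1 * rng.normal(size=(n_frames, n_vt))

# Joint PCA: decompose the concatenated (face | vocal tract) frames.
joint = np.hstack([face, vt])
mean = joint.mean(axis=0)
U, s, Vt = np.linalg.svd(joint - mean, full_matrices=False)
k = 5
components = Vt[:k]  # joint principal axes over both modalities

# Reconstruct the vocal-tract block from the face block alone: solve for
# component scores using only the face columns, then project those scores
# back through the vocal-tract columns of the same components.
face_axes, vt_axes = components[:, :n_face], components[:, n_face:]
scores, *_ = np.linalg.lstsq(face_axes.T, (face - mean[:n_face]).T, rcond=None)
vt_hat = scores.T @ vt_axes + mean[n_face:]

fidelity = np.corrcoef(vt_hat.ravel(), vt.ravel())[0, 1]
print(f"reconstruction fidelity r = {fidelity:.2f}")
```

Because the two blocks here share a common low-dimensional generator (mimicking the vocal tract driving both face and MR images), scores estimated from the face alone suffice to recover the vocal-tract frames with high fidelity.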


Subjects
Face, Speech Perception, Vocal Cords, Adult, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Nontherapeutic Human Experimentation, Principal Component Analysis, Speech Acoustics, Visual Perception
3.
J Cogn Neurosci ; 33(8): 1517-1534, 2021 07 01.
Article in English | MEDLINE | ID: mdl-34496370

ABSTRACT

The role of the cerebellum in speech perception remains a mystery. Given its uniform architecture, we tested the hypothesis that it implements a domain-general predictive mechanism whose role in speech is determined by connectivity. We collated all neuroimaging studies reporting cerebellar activity in the Neurosynth database (n = 8206). From this set, we found all studies involving passive speech and sound perception (n = 72, 64% speech, 12.5% sounds, 12.5% music, and 11% tones) and speech production and articulation (n = 175). Standard and coactivation neuroimaging meta-analyses were used to compare cerebellar and associated cortical activations between passive perception and production. We found distinct regions of perception- and production-related activity in the cerebellum and regions of perception-production overlap. Each of these regions had distinct patterns of cortico-cerebellar connectivity. To test for domain-generality versus specificity, we identified all psychological and task-related terms in the Neurosynth database that predicted activity in cerebellar regions associated with passive perception and production. Regions in the cerebellum activated by speech perception were associated with domain-general terms related to prediction. One hallmark of predictive processing is metabolic savings (i.e., decreases in neural activity when events are predicted). To test the hypothesis that the cerebellum plays a predictive role in speech perception, we examined cortical activation between studies reporting cerebellar activation and those without cerebellar activation during speech perception. When the cerebellum was active during speech perception, there was far less cortical activation than when it was inactive. The results suggest that the cerebellum implements a domain-general mechanism related to prediction during speech perception.


Subjects
Music, Speech Perception, Cerebellum/diagnostic imaging, Humans, Speech
4.
Proc Biol Sci ; 288(1955): 20210500, 2021 07 28.
Article in English | MEDLINE | ID: mdl-34284631

ABSTRACT

The ecology of human language is face-to-face interaction, comprising cues such as prosody, co-speech gestures and mouth movements. Yet, the multimodal context is usually stripped away in experiments as dominant paradigms focus on linguistic processing only. In two studies we presented video-clips of an actress producing naturalistic passages to participants while recording their electroencephalogram. We quantified multimodal cues (prosody, gestures, mouth movements) and measured their effect on a well-established electroencephalographic marker of processing load in comprehension (N400). We found that brain responses to words were affected by informativeness of co-occurring multimodal cues, indicating that comprehension relies on linguistic and non-linguistic cues. Moreover, they were affected by interactions between the multimodal cues, indicating that the impact of each cue dynamically changes based on the informativeness of other cues. Thus, results show that multimodal cues are integral to comprehension, hence, our theories must move beyond the limited focus on speech and linguistic processing.


Subjects
Comprehension, Speech Perception, Electroencephalography, Evoked Potentials, Female, Gestures, Humans, Language, Male, Speech
5.
Sci Data ; 11(1): 1063, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39353978

ABSTRACT

Videos of magic tricks offer many opportunities to study the human mind. They violate viewers' expectations, causing prediction errors, misdirecting attention, and eliciting epistemic emotions. Herein we describe and share the Magic, Memory, and Curiosity (MMC) Dataset where 50 participants watched 36 magic tricks filmed and edited specifically for functional magnetic resonance imaging (fMRI) experiments. The MMC Dataset includes a contextual incentive manipulation, curiosity ratings for the magic tricks, and incidental memory performance tested a week later. We additionally measured individual differences in working memory and constructs relevant to motivated learning. fMRI data were acquired before, during, and after learning. We show that both behavioural and fMRI data are of high quality, as indicated by basic validation analysis, i.e., variance decomposition as well as intersubject correlation and seed-based functional connectivity, respectively. The richness and complexity of the MMC Dataset will allow researchers to explore dynamic cognitive and motivational processes from various angles during task and rest.
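The intersubject-correlation (ISC) validation mentioned above has a standard leave-one-out form: each subject's timeseries for a region is correlated with the average of all other subjects' timeseries for the same stimulus. A minimal sketch with simulated data (the array shapes and noise levels are invented, not taken from the dataset):

```python
import numpy as np

def leave_one_out_isc(data):
    """Leave-one-out intersubject correlation.

    data: array of shape (n_subjects, n_timepoints) holding each subject's
    response timeseries for one region while viewing the same stimulus.
    Returns one ISC value per subject.
    """
    n = data.shape[0]
    iscs = []
    for i in range(n):
        # Correlate subject i with the mean timeseries of everyone else.
        others = np.delete(data, i, axis=0).mean(axis=0)
        iscs.append(np.corrcoef(data[i], others)[0, 1])
    return np.array(iscs)

# Toy data: a shared stimulus-driven signal plus idiosyncratic noise.
rng = np.random.default_rng(2)
shared = rng.normal(size=300)
subjects = shared + rng.normal(scale=0.5, size=(10, 300))

isc_shared = leave_one_out_isc(subjects).mean()       # high: driven by stimulus
isc_null = leave_one_out_isc(rng.normal(size=(10, 300))).mean()  # near zero
print(isc_shared, isc_null)
```

High ISC in stimulus-driven regions, against a near-zero null, is the kind of group-level consistency check used to argue the data are of high quality.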


Subjects
Magnetic Resonance Imaging, Humans, Brain/diagnostic imaging, Brain/physiology, Motivation, Exploratory Behavior, Male, Adult, Memory, Female, Young Adult, Memory, Short-Term
6.
Sci Rep ; 14(1): 2353, 2024 01 29.
Article in English | MEDLINE | ID: mdl-38287084

ABSTRACT

Visual hallucinations can be phenomenologically divided into those of a simple or complex nature. Both simple and complex hallucinations can occur in pathological and non-pathological states, and can also be induced experimentally by visual stimulation or deprivation-for example using a high-frequency, eyes-open flicker (Ganzflicker) and perceptual deprivation (Ganzfeld). Here we leverage the differences in visual stimulation that these two techniques involve to investigate the role of bottom-up and top-down processes in shifting the complexity of visual hallucinations, and to assess whether these techniques involve a shared underlying hallucinatory mechanism despite their differences. For each technique, we measured the frequency and complexity of the hallucinations produced, utilising button presses, retrospective drawing, interviews, and questionnaires. For both experimental techniques, simple hallucinations were more common than complex hallucinations. Crucially, we found that Ganzflicker was more effective than Ganzfeld at eliciting simple hallucinations, while complex hallucinations remained equivalent across the two conditions. As a result, the likelihood that an experienced hallucination was complex was higher during Ganzfeld. Despite these differences, we found a correlation between the frequency and total time spent hallucinating in Ganzflicker and Ganzfeld conditions, suggesting some shared mechanisms between the two methodologies. We attribute the tendency to experience frequent simple hallucinations in both conditions to a shared low-level core hallucinatory mechanism, such as excitability of visual cortex, potentially amplified in Ganzflicker compared to Ganzfeld due to heightened bottom-up input. The tendency to experience complex hallucinations, in contrast, may be related to top-down processes less affected by visual stimulation.


Subjects
Hallucinations, Visual Cortex, Humans, Retrospective Studies, Hallucinations/etiology
7.
Dev Psychobiol ; 54(3): 332-42, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22415920

ABSTRACT

In this review, we consider the literature on sensitive periods for language acquisition from the perspective of the stroke recovery literature treated in this Special Issue. Conceptually, the two areas of study are linked in a number of ways. For example, the fact that learning itself can set the stage for future failures to learn (in second language learning) or to remediate (as described in constraint therapy) is an important insight in both areas, as is the increasing awareness that limits on learning can be overcome by creating the appropriate environmental context. Similar practical issues, such as distinguishing native-like language acquisition or recovery of function from compensatory mechanisms, arise in both areas as well.


Subjects
Critical Period, Psychological, Language, Neuronal Plasticity/physiology, Recovery of Function/physiology, Stroke/physiopathology, Humans
8.
Neurosci Biobehav Rev ; 140: 104772, 2022 09.
Article in English | MEDLINE | ID: mdl-35835286

ABSTRACT

Most research on the neurobiology of language ignores consciousness and vice versa. Here, language, with an emphasis on inner speech, is hypothesised to generate and sustain self-awareness, i.e., higher-order consciousness. Converging evidence supporting this hypothesis is reviewed. To account for these findings, a 'HOLISTIC' model of neurobiology of language, inner speech, and consciousness is proposed. It involves a 'core' set of inner speech production regions that initiate the experience of feeling and hearing words. These take on affective qualities, deriving from activation of associated sensory, motor, and emotional representations, involving a largely unconscious dynamic 'periphery', distributed throughout the whole brain. Responding to those words forms the basis for sustained network activity, involving 'default mode' activation and prefrontal and thalamic/brainstem selection of contextually relevant responses. Evidence for the model is reviewed, supporting neuroimaging meta-analyses are conducted, and comparisons with other theories of consciousness are made. The HOLISTIC model constitutes a more parsimonious and complete account of the 'neural correlates of consciousness' that has implications for a mechanistic account of mental health and wellbeing.


Subjects
Consciousness, Language, Brain, Humans, Mouth, Neurobiology
9.
Neuropsychologia ; 169: 108194, 2022 05 03.
Article in English | MEDLINE | ID: mdl-35245529

ABSTRACT

Rodent and human studies have implicated an amygdala-prefrontal circuit during threat processing. One possibility is that while amygdala activity underlies core features of anxiety (e.g. detection of salient information), prefrontal cortices (i.e. dorsomedial prefrontal/anterior cingulate cortex) entrain its responsiveness. To date, this has been established in tightly controlled paradigms (predominantly using static face perception tasks) but has not been extended to more naturalistic settings. Consequently, using 'movie fMRI'-in which participants watch ecologically-rich movie stimuli rather than constrained cognitive tasks-we sought to test whether individual differences in anxiety correlate with the degree of face-dependent amygdala-prefrontal coupling in two independent samples. Analyses suggested increased face-dependent superior parietal activation and decreased speech-dependent auditory cortex activation as a function of anxiety. However, we failed to find evidence for anxiety-dependent connectivity in either our stimulus-dependent or stimulus-independent analyses. Our findings suggest that work using experimentally constrained tasks may not replicate in more ecologically valid settings and, moreover, highlight the importance of testing the generalizability of neuroimaging findings outside of the original context.


Subjects
Amygdala, Motion Pictures, Amygdala/diagnostic imaging, Anxiety/diagnostic imaging, Anxiety Disorders, Humans, Magnetic Resonance Imaging/methods, Neural Pathways/diagnostic imaging, Prefrontal Cortex
10.
IEEE Trans Med Imaging ; 41(6): 1431-1442, 2022 06.
Article in English | MEDLINE | ID: mdl-34968175

ABSTRACT

We consider the challenges in extracting stimulus-related neural dynamics from other intrinsic processes and noise in naturalistic functional magnetic resonance imaging (fMRI). Most studies rely on inter-subject correlations (ISC) of low-level regional activity and neglect varying responses in individuals. We propose a novel, data-driven approach based on low-rank plus sparse (L + S) decomposition to isolate stimulus-driven dynamic changes in brain functional connectivity (FC) from the background noise, by exploiting shared network structure among subjects receiving the same naturalistic stimuli. The time-resolved multi-subject FC matrices are modeled as a sum of a low-rank component of correlated FC patterns across subjects, and a sparse component of subject-specific, idiosyncratic background activities. To recover the shared low-rank subspace, we introduce a fused version of principal component pursuit (PCP) by adding a fusion-type penalty on the differences between the columns of the low-rank matrix. The method improves the detection of stimulus-induced group-level homogeneity in the FC profile while capturing inter-subject variability. We develop an efficient algorithm via a linearized alternating direction method of multipliers to solve the fused-PCP. Simulations show accurate recovery by the fused-PCP even when a large fraction of FC edges are severely corrupted. When applied to natural fMRI data, our method reveals FC changes that were time-locked to auditory processing during movie watching, with dynamic engagement of sensorimotor systems for speech-in-noise. It also provides a better mapping to auditory content in the movie than ISC.
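For orientation, the building block the abstract extends is classic principal component pursuit: split a matrix into a low-rank part (by singular-value thresholding) and a sparse part (by soft-thresholding). The sketch below implements plain PCP via the inexact augmented Lagrangian method; it is not the paper's fused-PCP, which additionally penalizes differences between columns of the low-rank matrix, and the parameter defaults follow common convention rather than the paper.

```python
import numpy as np

def pcp(M, lam=None, mu=None, n_iter=500, tol=1e-7):
    """Plain principal component pursuit: decompose M ~= L (low-rank)
    + S (sparse) via an inexact augmented Lagrangian iteration."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else (m * n) / (4.0 * np.abs(M).sum())
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)  # dual variable enforcing M = L + S
    norm_M = np.linalg.norm(M)
    for _ in range(n_iter):
        # Low-rank update: singular-value thresholding.
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # Sparse update: entrywise soft-thresholding.
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y += mu * (M - L - S)
        if np.linalg.norm(M - L - S) / norm_M < tol:
            break
    return L, S

# Toy check: a rank-2 matrix corrupted by sparse spikes is separated again.
rng = np.random.default_rng(1)
L0 = rng.normal(size=(60, 2)) @ rng.normal(size=(2, 80))
S0 = (rng.random((60, 80)) < 0.05) * rng.normal(scale=10, size=(60, 80))
L_hat, S_hat = pcp(L0 + S0)
rel_err = np.linalg.norm(L_hat - L0) / np.linalg.norm(L0)
print(f"relative error of recovered low-rank part: {rel_err:.3f}")
```

In the paper's setting, the columns of M would be vectorized time-resolved FC matrices across subjects; the fusion penalty (omitted here) encourages the recovered low-rank columns to vary smoothly, capturing the shared stimulus-locked structure.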


Subjects
Brain, Magnetic Resonance Imaging, Algorithms, Brain/diagnostic imaging, Brain/physiology, Brain Mapping/methods, Humans, Magnetic Resonance Imaging/methods, Motion Pictures
11.
Neuron ; 56(6): 1116-26, 2007 Dec 20.
Article in English | MEDLINE | ID: mdl-18093531

ABSTRACT

Is there a neural representation of speech that transcends its sensory properties? Using fMRI, we investigated whether there are brain areas where neural activity during observation of sublexical audiovisual input corresponds to a listener's speech percept (what is "heard") independent of the sensory properties of the input. A target audiovisual stimulus was preceded by stimuli that (1) shared the target's auditory features (auditory overlap), (2) shared the target's visual features (visual overlap), or (3) shared neither the target's auditory nor visual features but were perceived as the target (perceptual overlap). In two left-hemisphere regions (pars opercularis, planum polare), the target invoked less activity when it was preceded by the perceptually overlapping stimulus than when preceded by stimuli that shared one of its sensory components. This pattern of neural facilitation indicates that these regions code sublexical speech at an abstract level corresponding to that of the speech percept.


Subjects
Brain Mapping, Brain/physiology, Mental Processes, Pattern Recognition, Visual/physiology, Speech Perception/physiology, Acoustic Stimulation/methods, Brain/blood supply, Electroencephalography, Functional Laterality/physiology, Humans, Image Processing, Computer-Assisted/methods, Individuality, Magnetic Resonance Imaging/methods, Oxygen/blood, Photic Stimulation/methods, Reaction Time
12.
J Neurosci ; 30(3): 1110-7, 2010 Jan 20.
Article in English | MEDLINE | ID: mdl-20089919

ABSTRACT

Functional magnetic resonance imaging (fMRI) studies of speech sound categorization often compare conditions in which a stimulus is presented repeatedly to conditions in which multiple stimuli are presented. This approach has established that a set of superior temporal and inferior parietal regions respond more strongly to conditions containing stimulus change. Here, we examine whether this contrast is driven by habituation to a repeating condition or by selective responding to change. Experiment 1 directly tests this by comparing the observed response to long trains of stimuli against a constructed hemodynamic response modeling the hypothesis that no habituation occurs. The results are consistent with the view that the enhanced response to conditions involving phonemic variability reflects change detection. In a second experiment, the specificity of these responses to linguistically relevant stimulus variability was studied by including a condition in which the talker, rather than phonemic category, was variable from stimulus to stimulus. In this context, strong change detection responses were observed to changes in talker, but not to changes in phoneme category. The results prompt a reconsideration of two assumptions common to fMRI studies of speech sound categorization: they suggest that temporoparietal responses in passive paradigms such as those used here are better characterized as reflecting change detection than habituation, and that their apparent selectivity to speech sound categories may reflect a more general preference for variability in highly salient or behaviorally relevant stimulus dimensions.


Subjects
Brain Mapping, Inhibition, Psychological, Magnetic Resonance Imaging, Parietal Lobe/blood supply, Speech Perception/physiology, Temporal Lobe/blood supply, Acoustic Stimulation/methods, Female, Humans, Image Processing, Computer-Assisted/methods, Oxygen/blood, Parietal Lobe/physiology, Phonetics, Psycholinguistics/methods, Statistics as Topic, Temporal Lobe/physiology, Time Factors
13.
Sci Data ; 7(1): 347, 2020 10 13.
Article in English | MEDLINE | ID: mdl-33051448

ABSTRACT

Neuroimaging has advanced our understanding of human psychology using reductionist stimuli that often do not resemble information the brain naturally encounters. It has improved our understanding of the network organization of the brain mostly through analyses of 'resting-state' data for which the functions of networks cannot be verifiably labelled. We make a 'Naturalistic Neuroimaging Database' (NNDb v1.0) publicly available to allow for a more complete understanding of the brain under more ecological conditions during which networks can be labelled. Eighty-six participants underwent behavioural testing and watched one of 10 full-length movies while functional magnetic resonance imaging was acquired. Resulting timeseries data are shown to be of high quality, with good signal-to-noise ratio, few outliers and low movement. Data-driven functional analyses provide further evidence of data quality. They also demonstrate accurate timeseries/movie alignment and how movie annotations might be used to label networks. The NNDb can be used to answer questions previously unaddressed with standard neuroimaging approaches, progressing our knowledge of how the brain works in the real world.


Subjects
Brain Mapping, Brain/physiology, Magnetic Resonance Imaging, Databases, Factual, Humans
14.
Sci Rep ; 10(1): 11298, 2020 07 09.
Article in English | MEDLINE | ID: mdl-32647183

ABSTRACT

Stories play a fundamental role in human culture. They provide a mechanism for sharing cultural identity, imparting knowledge, revealing beliefs, reinforcing social bonds and providing entertainment that is central to all human societies. Here we investigated the extent to which the delivery medium of a story (audio or visual) affected self-reported and physiologically measured engagement with the narrative. Although participants self-reported greater involvement for watching video relative to listening to auditory scenes, stronger physiological responses were recorded for auditory stories. Sensors placed at their wrists showed higher and more variable heart rates, greater electrodermal activity, and even higher body temperatures. We interpret these findings as evidence that the stories were more cognitively and emotionally engaging at a physiological level when presented in an auditory format. This may be because listening to a story, rather than watching a video, is a more active process of co-creation, and that this imaginative process in the listener's mind is detectable on the skin at their wrist.


Subjects
Auditory Perception, Narration, Visual Perception, Adolescent, Adult, Body Temperature, Emotions, Heart Rate, Humans, Middle Aged, Self Report, Young Adult
15.
Hum Brain Mapp ; 30(11): 3509-26, 2009 Nov.
Article in English | MEDLINE | ID: mdl-19384890

ABSTRACT

Everyday communication is accompanied by visual information from several sources, including co-speech gestures, which provide semantic information listeners use to help disambiguate the speaker's message. Using fMRI, we examined how gestures influence neural activity in brain regions associated with processing semantic information. The BOLD response was recorded while participants listened to stories under three audiovisual conditions and one auditory-only (speech alone) condition. In the first audiovisual condition, the storyteller produced gestures that naturally accompany speech. In the second, the storyteller made semantically unrelated hand movements. In the third, the storyteller kept her hands still. In addition to inferior parietal and posterior superior and middle temporal regions, bilateral posterior superior temporal sulcus and left anterior inferior frontal gyrus responded more strongly to speech when it was further accompanied by gesture, regardless of the semantic relation to speech. However, the right inferior frontal gyrus was sensitive to the semantic import of the hand movements, demonstrating more activity when hand movements were semantically unrelated to the accompanying speech. These findings show that perceiving hand movements during speech modulates the distributed pattern of neural activation involved in both biological motion perception and discourse comprehension, suggesting listeners attempt to find meaning, not only in the words speakers produce, but also in the hand movements that accompany speech.


Subjects
Brain Mapping, Brain/blood supply, Brain/physiology, Gestures, Semantics, Speech/physiology, Acoustic Stimulation/methods, Adolescent, Adult, Female, Humans, Image Processing, Computer-Assisted/methods, Linear Models, Magnetic Resonance Imaging/methods, Male, Motion Perception/physiology, Oxygen/blood, Photic Stimulation/methods, Time Factors, Young Adult
16.
Brain Lang ; 101(3): 260-77, 2007 Jun.
Article in English | MEDLINE | ID: mdl-17533001

ABSTRACT

Speech-associated gestures are hand and arm movements that not only convey semantic information to listeners but are themselves actions. Broca's area has been assumed to play an important role both in semantic retrieval or selection (as part of a language comprehension system) and in action recognition (as part of a "mirror" or "observation-execution matching" system). We asked whether the role that Broca's area plays in processing speech-associated gestures is consistent with the semantic retrieval/selection account (predicting relatively weak interactions between Broca's area and other cortical areas because the meaningful information that speech-associated gestures convey reduces semantic ambiguity and thus reduces the need for semantic retrieval/selection) or the action recognition account (predicting strong interactions between Broca's area and other cortical areas because speech-associated gestures are goal-directed actions that are "mirrored"). We compared the functional connectivity of Broca's area with other cortical areas when participants listened to stories while watching meaningful speech-associated gestures, speech-irrelevant self-grooming hand movements, or no hand movements. A network analysis of neuroimaging data showed that interactions involving Broca's area and other cortical areas were weakest when spoken language was accompanied by meaningful speech-associated gestures, and strongest when spoken language was accompanied by self-grooming hand movements or by no hand movements at all. Results are discussed with respect to the role that the human mirror system plays in processing speech-associated movements.


Subjects
Frontal Lobe/physiology, Gestures, Semantics, Speech Perception/physiology, Speech/physiology, Child, Face, Female, Hand, Humans, Magnetic Resonance Imaging, Male, Models, Neurological, Motor Cortex/physiology, Multivariate Analysis, Pattern Recognition, Physiological, Psycholinguistics, Uncertainty
17.
Brain Lang ; 164: 77-105, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27821280

ABSTRACT

Does "the motor system" play "a role" in speech perception? If so, where, how, and when? We conducted a systematic review that addresses these questions using both qualitative and quantitative methods. The qualitative review of behavioural, computational modelling, non-human animal, brain damage/disorder, electrical stimulation/recording, and neuroimaging research suggests that distributed brain regions involved in producing speech play specific, dynamic, and contextually determined roles in speech perception. The quantitative review employed region and network based neuroimaging meta-analyses and a novel text mining method to describe relative contributions of nodes in distributed brain networks. Supporting the qualitative review, results show a specific functional correspondence between regions involved in non-linguistic movement of the articulators, covertly and overtly producing speech, and the perception of both nonword and word sounds. This distributed set of cortical and subcortical speech production regions are ubiquitously active and form multiple networks whose topologies dynamically change with listening context. Results are inconsistent with motor and acoustic only models of speech perception and classical and contemporary dual-stream models of the organization of language and the brain. Instead, results are more consistent with complex network models in which multiple speech production related networks and subnetworks dynamically self-organize to constrain interpretation of indeterminant acoustic patterns as listening context requires.


Subjects
Hearing/physiology, Psychomotor Performance/physiology, Speech Perception/physiology, Speech/physiology, Tongue/physiology, Animals, Brain/physiology, Humans
18.
Philos Trans R Soc Lond B Biol Sci ; 369(1651): 20130297, 2014 Sep 19.
Article in English | MEDLINE | ID: mdl-25092665

ABSTRACT

What do we hear when someone speaks and what does auditory cortex (AC) do with that sound? Given how meaningful speech is, it might be hypothesized that AC is most active when other people talk so that their productions get decoded. Here, neuroimaging meta-analyses show the opposite: AC is least active and sometimes deactivated when participants listened to meaningful speech compared to less meaningful sounds. Results are explained by an active hypothesis-and-test mechanism where speech production (SP) regions are neurally re-used to predict auditory objects associated with available context. By this model, more AC activity for less meaningful sounds occurs because predictions are less successful from context, requiring further hypotheses be tested. This also explains the large overlap of AC co-activity for less meaningful sounds with meta-analyses of SP. An experiment showed a similar pattern of results for non-verbal context. Specifically, words produced less activity in AC and SP regions when preceded by co-speech gestures that visually described those words compared to those words without gestures. Results collectively suggest that what we 'hear' during real-world speech perception may come more from the brain than our ears and that the function of AC is to confirm or deny internal predictions about the identity of sounds.


Subjects
Auditory Cortex/physiology, Gestures, Language, Models, Psychological, Neuroimaging/methods, Speech Perception/physiology, Auditory Perception/physiology, Brain Mapping, Databases, Factual, Humans, Likelihood Functions, Models, Neurological
20.
Q J Exp Psychol (Hove) ; 64(7): 1442-56, 2011 Jul.
Article in English | MEDLINE | ID: mdl-21604232

ABSTRACT

During a conversation, we hear the sound of the talker as well as the intended message. Traditional models of speech perception posit that acoustic details of a talker's voice are not encoded with the message whereas more recent models propose that talker identity is automatically encoded. When shadowing speech, listeners often fail to detect a change in talker identity. The present study was designed to investigate whether talker changes would be detected when listeners are actively engaged in a normal conversation, and visual information about the speaker is absent. Participants were called on the phone, and during the conversation the experimenter was surreptitiously replaced by another talker. Participants rarely noticed the change. However, when explicitly monitoring for a change, detection increased. Voice memory tests suggested that participants remembered only coarse information about both voices, rather than fine details. This suggests that although listeners are capable of change detection, voice information is not continuously monitored at a fine-grain level of acoustic representation during natural conversation and is not automatically encoded. Conversational expectations may shape the way we direct attention to voice characteristics and perceive differences in voice.


Subjects
Deafness/physiopathology, Language, Recognition, Psychology/physiology, Speech Acoustics, Speech Perception/physiology, Telephone, Acoustic Stimulation/methods, Female, Humans, Male, Phonetics, Young Adult