Results 1 - 20 of 27
1.
Heliyon ; 10(2): e24750, 2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38312568

ABSTRACT

Objective: Lipreading, which plays a major role in communication for the hearing impaired, lacked a French standardised tool. Our aim was to create and validate an audio-visual (AV) version of the French Matrix Sentence Test (FrMST). Design: Video recordings were created by dubbing the existing audio files. Sample: Thirty-five young, normal-hearing participants were tested in auditory and visual modalities alone (Ao, Vo) and in AV conditions, in quiet, in noise, and in open- and closed-set response formats. Results: Lipreading ability (Vo) ranged from 1% to 77% word comprehension. The absolute AV benefit was 9.25 dB SPL in quiet and 4.6 dB SNR in noise. The response format did not influence the results in the AV noise condition, except during the training phase. Lipreading ability and AV benefit were significantly correlated. Conclusions: The French video material achieved AV benefits similar to those described in the literature for AV MSTs in other languages. For clinical purposes, we suggest targeting SRT80 to avoid ceiling effects, and performing two training lists in the AV condition in noise, followed by one AV list in noise, one Ao list in noise and one Vo list, in a randomised order, in open- or closed-set format.

2.
PLoS Comput Biol ; 19(11): e1011669, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38011225

ABSTRACT

Humans excel at predictively synchronizing their behavior with external rhythms, as in dance or music performance. The neural processes underlying rhythmic inference are debated: it remains unclear whether predictive perception relies on high-level generative models or can readily be implemented locally by hard-coded intrinsic oscillators synchronizing to rhythmic input, and different underlying computational mechanisms have been proposed. Here we explore human perception of tone sequences with some temporal regularity at varying rates, but with considerable variability. Next, using a dynamical systems perspective, we successfully model the participants' behavior using an adaptive frequency oscillator which adjusts its spontaneous frequency based on the rate of stimuli. This model better reflects human behavior than a canonical nonlinear oscillator and a predictive ramping model, both widely used for temporal estimation and prediction, and demonstrates that the classical distinction between absolute and relative computational mechanisms can be unified under this framework. In addition, we show that neural oscillators may constitute hard-coded physiological priors, in a Bayesian sense, that reduce temporal uncertainty and facilitate the predictive processing of noisy rhythms. Together, the results show that adaptive oscillators provide an elegant and biologically plausible means to subserve rhythmic inference, reconciling previously incompatible frameworks for temporal inferential processes.
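The adaptive-frequency mechanism described above can be illustrated with a toy simulation. This is not the authors' model: the event-based update rule, the gains, and all parameter values below are illustrative assumptions, a minimal sketch of how an oscillator can retune its spontaneous frequency to the rate of a pulse train.

```python
import numpy as np

def _wrap(angle):
    """Map an angle to the interval (-pi, pi]."""
    return (angle + np.pi) % (2 * np.pi) - np.pi

def adaptive_oscillator(stim_rate_hz, f0_hz=1.0, k_phase=0.5, k_freq=0.5,
                        dt=0.001, t_end=30.0):
    """Event-based sketch of an adaptive frequency oscillator.

    Between stimuli the phase advances at the current intrinsic frequency.
    At each stimulus onset the wrapped phase error (distance to the nearest
    cycle boundary) both nudges the phase (entrainment) and adjusts the
    intrinsic frequency (adaptation), so the oscillator gradually retunes
    itself to the stimulation rate. Returns the final frequency in Hz.
    """
    omega = 2 * np.pi * f0_hz      # intrinsic frequency, rad/s
    phi = 0.0                      # oscillator phase, rad
    period = 1.0 / stim_rate_hz
    next_stim = period
    t = 0.0
    while t < t_end:
        phi += omega * dt
        t += dt
        if t >= next_stim:
            err = _wrap(-phi)      # > 0 if the oscillator lags the input
            phi += k_phase * err   # phase correction (entrainment)
            omega += k_freq * err  # frequency adaptation
            next_stim += period
    return omega / (2 * np.pi)
```

Driving this 1 Hz oscillator with a 1.5 Hz pulse train pulls its spontaneous frequency up toward 1.5 Hz, and a slower pulse train pulls it down, which is the qualitative behavior the abstract attributes to adaptive oscillators.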


Subjects
Music, Time Perception, Humans, Bayes Theorem
3.
Trends Hear ; 27: 23312165231156412, 2023.
Article in English | MEDLINE | ID: mdl-36794429

ABSTRACT

Age-related hearing loss, or presbycusis, is an unavoidable sensory degradation, often associated with the progressive decline of cognitive and social functions, and with dementia. It is generally considered a natural consequence of inner-ear deterioration. However, presbycusis arguably conflates a wide array of peripheral and central impairments. Although hearing rehabilitation maintains the integrity and activity of auditory networks and can prevent or revert maladaptive plasticity, the extent of such neural plastic changes in the aging brain is poorly appreciated. By reanalyzing a large-scale dataset of more than 2,200 cochlear implant (CI) users and assessing the improvement in speech perception from 6 to 24 months of use, we show that, although rehabilitation improves speech understanding on average, age at implantation only minimally affects speech scores at 6 months but has a pejorative effect at 24 months post implantation. Furthermore, for each year increase in age, older subjects (>67 years old) were significantly more likely than younger patients to show degraded performance after 2 years of CI use. Secondary analysis reveals three possible plasticity trajectories after auditory rehabilitation that could account for these disparities: Awakening, a reversal of deafness-specific changes; Counteracting, a stabilization of additional cognitive impairments; or Decline, independent pejorative processes that hearing rehabilitation cannot prevent. The role of complementary behavioral interventions needs to be considered to potentiate the (re)activation of auditory brain networks.


Subjects
Cochlear Implantation, Cochlear Implants, Deafness, Presbycusis, Speech Perception, Humans, Infant, Aged, Presbycusis/diagnosis, Deafness/rehabilitation, Hearing, Aging, Brain
4.
PLoS Biol ; 20(7): e3001742, 2022 07.
Article in English | MEDLINE | ID: mdl-35905075

ABSTRACT

Categorising voices is crucial for auditory-based social interactions. A recent study by Rupp and colleagues in PLOS Biology capitalises on human intracranial recordings to describe the spatiotemporal pattern of neural activity leading to voice-selective responses in associative auditory cortex.


Subjects
Auditory Perception, Voice, Auditory Perception/physiology, Brain/physiology, Brain Mapping, Humans, Temporal Lobe, Voice/physiology
5.
Nat Commun ; 13(1): 338, 2022 01 17.
Article in English | MEDLINE | ID: mdl-35039498

ABSTRACT

Making accurate decisions based on unreliable sensory evidence requires cognitive inference. Dysfunction of n-methyl-d-aspartate (NMDA) receptors impairs the integration of noisy input in theoretical models of neural circuits, but whether and how this synaptic alteration impairs human inference and confidence during uncertain decisions remains unknown. Here we use placebo-controlled infusions of ketamine to characterize the causal effect of human NMDA receptor hypofunction on cognitive inference and its neural correlates. At the behavioral level, ketamine triggers inference errors and elevated decision uncertainty. At the neural level, ketamine is associated with imbalanced coding of evidence and premature response preparation in electroencephalographic (EEG) activity. Through computational modeling of inference and confidence, we propose that this specific pattern of behavioral and neural impairments reflects an early commitment to inaccurate decisions, which aims at resolving the abnormal uncertainty generated by NMDA receptor hypofunction.


Subjects
Decision Making, Receptors, N-Methyl-D-Aspartate/metabolism, Uncertainty, Adult, Bayes Theorem, Brain/drug effects, Brain/physiology, Cognition/drug effects, Cues, Electroencephalography, Female, Humans, Ketamine/administration & dosage, Ketamine/pharmacology, Male, Psychometrics, Task Performance and Analysis, Time Factors
6.
Nat Commun ; 13(1): 48, 2022 01 10.
Article in English | MEDLINE | ID: mdl-35013268

ABSTRACT

Reconstructing intended speech from neural activity using brain-computer interfaces holds great promise for people with severe speech production deficits. While decoding overt speech has progressed, decoding imagined speech has met with limited success, mainly because the associated neural signals are weak and variable compared to overt speech, and hence difficult for learning algorithms to decode. We obtained three electrocorticography datasets from 13 patients, with electrodes implanted for epilepsy evaluation, who performed overt and imagined speech production tasks. Based on recent theories of speech neural processing, we extracted consistent and specific neural features usable for future brain-computer interfaces, and assessed their performance in discriminating speech items in articulatory, phonetic, and vocalic representation spaces. While high-frequency activity provided the best signal for overt speech, both low- and higher-frequency power and local cross-frequency dynamics contributed to imagined speech decoding, in particular in phonetic and vocalic, i.e., perceptual, spaces. These findings show that low-frequency power and cross-frequency dynamics contain key information for imagined speech decoding.
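The kind of feature extraction the abstract describes (band-limited power in low- and high-frequency ranges, computed per electrode and per trial) can be sketched as follows. The band edges and array layout here are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np

def band_power(x, fs, band):
    """Mean spectral power of a 1-D signal x in the given frequency band (Hz)."""
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

def trial_features(trials, fs, bands=((1, 10), (70, 150))):
    """Stack per-channel band-power features for each trial.

    trials: array of shape (n_trials, n_channels, n_samples).
    Returns an array of shape (n_trials, n_channels * len(bands)),
    ready to feed to any standard classifier.
    """
    feats = [
        [band_power(ch, fs, b) for ch in trial for b in bands]
        for trial in trials
    ]
    return np.asarray(feats)
```

A feature matrix built this way could then be passed to an off-the-shelf classifier to discriminate speech items; the two default bands stand in for the "low-frequency power" and "high-frequency activity" contrasted in the abstract.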


Subjects
Brain-Computer Interfaces, Electrocorticography, Language, Speech, Adult, Brain/diagnostic imaging, Brain Mapping, Electrodes, Female, Humans, Imagination, Male, Middle Aged, Phonetics, Young Adult
7.
PLoS Biol ; 18(9): e3000833, 2020 09.
Article in English | MEDLINE | ID: mdl-32898188

ABSTRACT

The phonological deficit in dyslexia is associated with altered low-gamma oscillatory function in left auditory cortex, but a causal relationship between oscillatory function and phonemic processing has never been established. After confirming a deficit at 30 Hz with electroencephalography (EEG), we applied 20 minutes of transcranial alternating current stimulation (tACS) to transiently restore this activity in adults with dyslexia. The intervention significantly improved phonological processing and reading accuracy as measured immediately after tACS. The effect occurred selectively for a 30-Hz stimulation in the dyslexia group. Importantly, we observed that the focal intervention over the left auditory cortex also decreased 30-Hz activity in the right superior temporal cortex, resulting in reinstating a left dominance for the oscillatory response. These findings establish a causal role of neural oscillations in phonological processing and offer solid neurophysiological grounds for a potential correction of low-gamma anomalies and for alleviating the phonological deficit in dyslexia.


Subjects
Dyslexia/therapy, Reading, Speech Perception, Adolescent, Adult, Auditory Cortex/physiopathology, Auditory Cortex/radiation effects, Dyslexia/physiopathology, Electroencephalography, Evoked Potentials, Auditory/physiology, Evoked Potentials, Auditory/radiation effects, Female, Humans, Male, Middle Aged, Phonetics, Speech Perception/physiology, Speech Perception/radiation effects, Transcranial Direct Current Stimulation/methods, Verbal Behavior/physiology, Verbal Behavior/radiation effects, Young Adult
8.
J Acoust Soc Am ; 147(6): EL540, 2020 06.
Article in English | MEDLINE | ID: mdl-32611175

ABSTRACT

One way music is thought to convey emotion is by mimicking acoustic features of affective human vocalizations [Juslin and Laukka (2003). Psychol. Bull. 129(5), 770-814]. Regarding fear, it has been informally noted that music for scary scenes in films frequently exhibits a "scream-like" character. Here, this proposition is formally tested. This paper reports acoustic analyses for four categories of audio stimuli: screams, non-screaming vocalizations, scream-like music, and non-scream-like music. Valence and arousal ratings were also collected. Results support the hypothesis that a key feature of human screams (roughness) is imitated by scream-like music and could potentially signal danger through both music and the voice.


Subjects
Music, Voice, Acoustics, Animals, Arousal, Cattle, Emotions, Humans, Male
9.
Neurosci Biobehav Rev ; 107: 136-142, 2019 12.
Article in English | MEDLINE | ID: mdl-31518638

ABSTRACT

In the motor cortex, beta oscillations (∼12-30 Hz) are generally considered a principal rhythm contributing to movement planning and execution. Beta oscillations cohabit and dynamically interact with slow delta oscillations (0.5-4 Hz), but the role of delta oscillations and the subordinate relationship between these rhythms in the perception-action loop remains unclear. Here, we review evidence that motor delta oscillations shape the dynamics of motor behaviors and sensorimotor processes, in particular during auditory perception. We describe the functional coupling between delta and beta oscillations in the motor cortex during spontaneous and planned motor acts. In an active sensing framework, perception is strongly shaped by motor activity, in particular in the delta band, which imposes temporal constraints on the sampling of sensory information. By encoding temporal contextual information, delta oscillations modulate auditory processing and impact behavioral outcomes. Finally, we consider the contribution of motor delta oscillations in the perceptual analysis of speech signals, providing a contextual temporal frame to optimize the parsing and processing of slow linguistic information.


Subjects
Auditory Perception/physiology, Delta Rhythm/physiology, Motor Cortex/physiology, Speech Perception/physiology, Acoustic Stimulation, Humans, Speech
10.
Camb Q Healthc Ethics ; 28(4): 657-670, 2019 10.
Article in English | MEDLINE | ID: mdl-31475659

ABSTRACT

Neuroprosthetic speech devices are an emerging technology that can offer the possibility of communication to those who are unable to speak. Patients with 'locked-in syndrome,' aphasia, or other such pathologies can use covert speech (vividly imagining saying something without actual vocalization) to trigger neurally controlled systems capable of synthesizing the speech they would have spoken, but for their impairment. We provide an analysis of the mechanisms and outputs involved in speech mediated by neuroprosthetic devices. This analysis provides a framework for accounting for the ethical significance of accuracy, control, and pragmatic dimensions of prosthesis-mediated speech. We first examine what it means for the output of the device to be accurate, drawing a distinction between technical accuracy on the one hand and semantic accuracy on the other. These are conceptual notions of accuracy. Both technical and semantic accuracy of the device will be necessary (but not yet sufficient) for the user to have sufficient control over the device. Sufficient control is an ethical consideration: we place high value on being able to express ourselves when we want and how we want. Sufficient control of a neural speech prosthesis requires that speakers can reliably use their speech apparatus as they want to, and can expect their speech to authentically represent them. We draw a distinction between two relevant features which bear on the question of whether the user has sufficient control: the voluntariness of the speech and the authenticity of the speech. These can come apart: the user might involuntarily produce an authentic output (perhaps revealing private thoughts) or might voluntarily produce an inauthentic output (e.g., when the output is not semantically accurate). Finally, we consider the role of the interlocutor in interpreting the content and purpose of the communication. These three ethical dimensions raise philosophical questions about the nature of speech, the level of control required for communicative accuracy, and the nature of 'accuracy' with respect to both natural and prosthesis-mediated speech.


Subjects
Communication Aids for Disabled/ethics, Communication Aids for Disabled/standards, Neural Prostheses, Voice, Alaryngeal, Brain-Computer Interfaces/ethics, Brain-Computer Interfaces/standards, Electroencephalography, Humans, Neural Prostheses/ethics, Semantics
11.
Nat Commun ; 10(1): 3671, 2019 08 14.
Article in English | MEDLINE | ID: mdl-31413319

ABSTRACT

Being able to produce sounds that capture attention and elicit rapid reactions is the prime goal of communication. One strategy, exploited by alarm signals, consists in emitting fast but perceptible amplitude modulations in the roughness range (30-150 Hz). Here, we investigate the perceptual and neural mechanisms underlying aversion to such temporally salient sounds. By measuring subjective aversion to repetitive acoustic transients, we identify a nonlinear pattern of aversion restricted to the roughness range. Using human intracranial recordings, we show that rough sounds do not merely affect local auditory processes but instead synchronise large-scale, supramodal, salience-related networks in a steady-state, sustained manner. Rough sounds synchronise activity throughout superior temporal regions, subcortical and cortical limbic areas, and the frontal cortex, a network classically involved in aversion processing. This pattern correlates with subjective aversion in all these regions, consistent with the hypothesis that roughness enhances auditory aversion through spreading of neural synchronisation.


Subjects
Attention, Auditory Cortex/physiology, Auditory Perception/physiology, Sound, Acoustic Stimulation, Acoustics, Adolescent, Adult, Auditory Pathways/physiology, Drug Resistant Epilepsy/surgery, Electrocorticography, Epilepsies, Partial/surgery, Female, Humans, Male, Time Factors, Young Adult
12.
Neuron ; 100(5): 1022-1024, 2018 12 05.
Article in English | MEDLINE | ID: mdl-30521776

ABSTRACT

Predictive coding and neural oscillations are two descriptive levels of brain functioning whose overlap is not yet understood. Chao et al. (2018) now show that hierarchical predictive coding is instantiated by asymmetric information channeling in the γ and α/β oscillatory ranges.


Subjects
Brain, Primates, Animals
13.
Trends Cogn Sci ; 22(10): 870-882, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30266147

ABSTRACT

The ability to predict when something will happen facilitates sensory processing and the ensuing computations. Building on the observation that neural activity entrains to periodic stimulation, leading neurophysiological models imply that temporal predictions rely on oscillatory entrainment. Although they provide a sufficient solution to predict periodic regularities, these models are challenged by a series of findings that question their suitability to account for temporal predictions based on aperiodic regularities. Aiming for a more comprehensive model of how the brain anticipates 'when' in auditory contexts, we emphasize the capacity of motor and higher-order top-down systems to prepare sensory processing in a proactive and temporally flexible manner. Focusing on speech processing, we illustrate how this framework leads to new hypotheses.


Subjects
Anticipation, Psychological/physiology, Auditory Perception/physiology, Brain Waves/physiology, Time Factors, Time Perception/physiology, Humans
14.
Physiol Behav ; 193(Pt A): 43-54, 2018 09 01.
Article in English | MEDLINE | ID: mdl-29730041

ABSTRACT

Crying is the principal means by which newborn infants shape parental behavior to meet their needs. While this mechanism can be highly effective, infant crying can also be an aversive stimulus that leads to parental frustration and even abuse. Fathers have recently become more involved in direct caregiving activities in modern, developed nations, and fathers are more likely than mothers to physically abuse infants. In this study, we attempt to explain variation in the neural response to infant crying among human fathers, with the hope of identifying factors that are associated with a more or less sensitive response. We imaged brain function in 39 first-time fathers of newborn infants as they listened to both their own and a standardized unknown infant cry stimulus, as well as auditory control stimuli, and evaluated whether these neural responses were correlated with measured characteristics of fathers and infants that were hypothesized to modulate these responses. Fathers also provided subjective ratings of each cry stimulus on multiple dimensions. Fathers showed widespread activation to both own and unknown infant cries in neural systems involved in empathy and approach motivation. There was no significant difference in the neural response to the own vs. unknown infant cry, and many fathers were unable to distinguish between the two cries. Comparison of these results with previous studies in mothers revealed a high degree of similarity between first-time fathers and first-time mothers in the pattern of neural activation to newborn infant cries. Further comparisons suggested that younger infant age was associated with stronger paternal neural responses, perhaps due to hormonal or novelty effects. 
In our sample, older fathers found infant cries less aversive and had an attenuated response to infant crying in both the dorsal anterior cingulate cortex (dACC) and the anterior insula, suggesting that compared with younger fathers, older fathers may be better able to avoid the distress associated with empathic over-arousal in response to infant cries. A principal components analysis revealed that fathers with more negative emotional reactions to the unknown infant cry showed decreased activation in the thalamus and caudate nucleus, regions expected to promote positive parental behaviors, as well as increased activation in the hypothalamus and dorsal ACC, again suggesting that empathic over-arousal might result in negative emotional reactions to infant crying. In sum, our findings suggest that infant age, paternal age and paternal emotional reactions to infant crying all modulate the neural response of fathers to infant crying. By identifying neural correlates of variation in paternal subjective reactions to infant crying, these findings help lay the groundwork for evaluating the effectiveness of interventions designed to increase paternal sensitivity and compassion.


Subjects
Auditory Perception/physiology, Brain/diagnostic imaging, Brain/physiology, Crying, Parent-Child Relations, Paternal Behavior/physiology, Adult, Aging/physiology, Aging/psychology, Brain Mapping, Emotions/physiology, Female, Humans, Individuality, Infant, Infant, Newborn, Magnetic Resonance Imaging, Male, Neural Pathways/diagnostic imaging, Neural Pathways/physiology, Paternal Behavior/psychology, Pattern Recognition, Physiological/physiology, Social Perception, Testosterone/metabolism, Young Adult
15.
Commun Integr Biol ; 10(5-6): e1349583, 2017.
Article in English | MEDLINE | ID: mdl-29260797

ABSTRACT

The ability to precisely anticipate the timing of upcoming events at the time-scale of seconds is essential to predict objects' trajectories or to select relevant sensory information. What neurophysiological mechanism underlies the temporal precision in anticipating the occurrence of events? In a recent article [1], we demonstrated that the sensori-motor system predictively controls neural oscillations in time to optimize sensory selection. However, whether and how the same oscillatory processes can be used to keep track of elapsing time and evaluate short durations remains unclear. Here, we test the hypothesis that the brain tracks durations by converting (external, objective) elapsing time into an (internal, subjective) oscillatory phase-angle. To do so, we measured magnetoencephalographic oscillatory activity while participants performed a delayed-target detection task. In the delayed condition, we observe that trials that are perceived as longer are associated with faster delta-band oscillations. This suggests that the subjective indexing of time is reflected in the range of phase-angles covered by delta oscillations during the pre-stimulus period. This result provides new insights into how we predict and evaluate temporal structure and supports models in which the active entrainment of sensori-motor oscillatory dynamics is exploited to track elapsing time.
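The phase-coding hypothesis above admits a one-line formalization: the phase angle swept by an oscillation of frequency f over an elapsed interval t is θ = 2πft. The helper below is a hedged sketch with hypothetical names; it simply encodes the prediction that, for the same objective duration, the trial with the faster delta oscillation covers the larger phase angle and is therefore predicted to be perceived as longer.

```python
import numpy as np

def phase_angle_covered(delta_freq_hz, elapsed_s):
    """Phase angle (radians) swept by an oscillation of frequency f
    over an elapsed interval t: theta = 2 * pi * f * t."""
    return 2 * np.pi * delta_freq_hz * elapsed_s

def predicted_longer(freq_a_hz, freq_b_hz, duration_s):
    """For two trials of identical objective duration, return which one
    ('a' or 'b') the phase-coding account predicts will feel longer:
    the one whose delta oscillation sweeps the larger phase angle."""
    theta_a = phase_angle_covered(freq_a_hz, duration_s)
    theta_b = phase_angle_covered(freq_b_hz, duration_s)
    return "a" if theta_a > theta_b else "b"
```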

16.
J Neurosci ; 37(33): 7930-7938, 2017 08 16.
Article in English | MEDLINE | ID: mdl-28729443

ABSTRACT

Recent psychophysics data suggest that speech perception is not limited by the capacity of the auditory system to encode fast acoustic variations through neural γ activity, but rather by the time given to the brain to decode them. Whether the decoding process is bounded by the capacity of θ rhythm to follow syllabic rhythms in speech, or constrained by a more endogenous top-down mechanism, e.g., involving β activity, is unknown. We addressed the dynamics of auditory decoding in speech comprehension by challenging syllable tracking and speech decoding using comprehensible and incomprehensible time-compressed auditory sentences. We recorded EEGs in human participants and found that neural activity in both θ and γ ranges was sensitive to syllabic rate. Phase patterns of slow neural activity consistently followed the syllabic rate (4-14 Hz), even when this rate went beyond the classical θ range (4-8 Hz). The power of θ activity increased linearly with syllabic rate but showed no sensitivity to comprehension. Conversely, the power of β (14-21 Hz) activity was insensitive to the syllabic rate, yet reflected comprehension on a single-trial basis. We found different long-range dynamics for θ and β activity, with β activity building up in time while more contextual information becomes available. This is consistent with the roles of θ and β activity in stimulus-driven versus endogenous mechanisms. These data show that speech comprehension is constrained by concurrent stimulus-driven θ and low-γ activity, and by endogenous β activity, but not primarily by the capacity of θ activity to track the syllabic rhythm.

SIGNIFICANCE STATEMENT: Speech comprehension partly depends on the ability of the auditory cortex to track syllable boundaries with θ-range neural oscillations. The reason comprehension drops when speech is accelerated could hence be because θ oscillations can no longer follow the syllabic rate. Here, we presented subjects with comprehensible and incomprehensible accelerated speech, and show that neural phase patterns in the θ band consistently reflect the syllabic rate, even when speech becomes too fast to be intelligible. The drop in comprehension, however, is signaled by a significant decrease in the power of low-β oscillations (14-21 Hz). These data suggest that speech comprehension is not limited by the capacity of θ oscillations to adapt to syllabic rate, but by an endogenous decoding process.


Subjects
Acoustic Stimulation/methods, Auditory Cortex/physiology, Beta Rhythm/physiology, Comprehension/physiology, Speech Perception/physiology, Theta Rhythm/physiology, Adult, Electroencephalography/methods, Female, Humans, Male, Random Allocation, Speech/physiology, Time Factors, Young Adult
18.
J Neurosci ; 36(8): 2342-7, 2016 Feb 24.
Article in English | MEDLINE | ID: mdl-26911682

ABSTRACT

Predicting not only what will happen, but also when it will happen is extremely helpful for optimizing perception and action. Temporal predictions driven by periodic stimulation increase perceptual sensitivity and reduce response latencies. At the neurophysiological level, a single mechanism has been proposed to mediate this twofold behavioral improvement: the rhythmic entrainment of slow cortical oscillations to the stimulation rate. However, temporal regularities can occur in aperiodic contexts, suggesting that temporal predictions per se may be dissociable from entrainment to periodic sensory streams. We investigated this possibility in two behavioral experiments, asking human participants to detect near-threshold auditory tones embedded in streams whose temporal and spectral properties were manipulated. While our findings confirm that periodic stimulation reduces response latencies, in agreement with the hypothesis of a stimulus-driven entrainment of neural excitability, they further reveal that this motor facilitation can be dissociated from the enhancement of auditory sensitivity. Perceptual sensitivity improvement is unaffected by the nature of temporal regularities (periodic vs aperiodic), but contingent on the co-occurrence of a fulfilled spectral prediction. Altogether, the dissociation between predictability and periodicity demonstrates that distinct mechanisms flexibly and synergistically operate to facilitate perception and action.


Subjects
Acoustic Stimulation/methods, Auditory Cortex/physiology, Auditory Perception/physiology, Periodicity, Photic Stimulation/methods, Reaction Time/physiology, Adolescent, Adult, Female, Forecasting, Humans, Male, Middle Aged, Time Factors, Young Adult
19.
Curr Biol ; 25(15): 2051-6, 2015 Aug 03.
Article in English | MEDLINE | ID: mdl-26190070

ABSTRACT

Screaming is arguably one of the most relevant communication signals for survival in humans. Despite their practical relevance and their theoretical significance as innate [1] and virtually universal [2, 3] vocalizations, what makes screams a unique signal and how they are processed is not known. Here, we use acoustic analyses, psychophysical experiments, and neuroimaging to isolate the features that confer on screams their alarming nature, and we track their processing in the human brain. Using the modulation power spectrum (MPS [4, 5]), a recently developed, neurally informed characterization of sounds, we demonstrate that human screams cluster within a restricted portion of the acoustic space (between ∼30 and 150 Hz modulation rates) that corresponds to a well-known perceptual attribute, roughness. In contrast to the received view that roughness is irrelevant for communication [6], our data reveal that the acoustic space occupied by the rough vocal regime is segregated from other signals, including speech, a prerequisite to avoid false alarms in normal vocal communication. We show that roughness is present in natural alarm signals as well as in artificial alarms and that the presence of roughness in sounds boosts their detection in various tasks. Using fMRI, we show that acoustic roughness engages subcortical structures critical to rapidly appraise danger. Altogether, these data demonstrate that screams occupy a privileged acoustic niche that, being separated from other communication signals, ensures their biological and ultimately social efficiency.
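A crude, one-dimensional stand-in for the roughness measure is the share of amplitude-modulation energy falling in the 30-150 Hz range. The implementation below (Hilbert envelope plus FFT) is an illustrative simplification, not the MPS analysis used in the study.

```python
import numpy as np
from scipy.signal import hilbert

def roughness_index(x, fs, band=(30.0, 150.0)):
    """Fraction of amplitude-modulation energy in the rough band.

    The amplitude envelope is taken as the magnitude of the analytic
    signal; its spectrum serves as a one-dimensional modulation spectrum,
    and the index is the share of modulation power between 30 and 150 Hz.
    """
    env = np.abs(hilbert(x))
    env = env - env.mean()                     # remove DC before the FFT
    freqs = np.fft.rfftfreq(env.size, d=1.0 / fs)
    p = np.abs(np.fft.rfft(env)) ** 2
    total = p[freqs > 0].sum()
    rough = p[(freqs >= band[0]) & (freqs < band[1])].sum()
    return rough / total if total > 0 else 0.0
```

On a tone amplitude-modulated at 70 Hz this index is high, while on the same tone modulated at 4 Hz (a speech-like syllabic rate) it is low, reflecting the segregation between the rough regime and ordinary vocal signals described above.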


Subjects
Speech Acoustics, Speech Intelligibility, Speech Perception, Acoustic Stimulation, Adult, Female, Humans, Magnetic Resonance Imaging, Male, Sound, Young Adult
20.
Handb Clin Neurol ; 129: 85-98, 2015.
Article in English | MEDLINE | ID: mdl-25726264

ABSTRACT

Speech is a complex acoustic signal showing a quasiperiodic structure at several timescales. Integrated neural signals recorded in the cortex also show periodicity at different timescales. In this chapter we outline the neural mechanisms that potentially allow the auditory cortex to segment and encode continuous speech. This chapter focuses on how the human auditory cortex uses the temporal structure of the acoustic signal to extract phonemes and syllables, the two major constituents of connected speech. We argue that the quasiperiodic structure of collective neural activity in auditory cortex represents the ideal mechanical infrastructure to fractionate continuous speech into linguistic constituents of variable sizes.


Subjects
Auditory Cortex/physiology, Auditory Perception/physiology, Acoustic Stimulation, Animals, Functional Laterality, Humans, Time Factors