ABSTRACT
Evidence accumulates that the cerebellum's role in the brain is not restricted to motor functions. Rather, cerebellar activity seems to be crucial for a variety of tasks that rely on precise event timing and prediction. Due to its complex structure and importance in communication, human speech requires a particularly precise and predictive coordination of neural processes to be successfully comprehended. Recent studies proposed that the cerebellum is indeed a major contributor to speech processing, but how this contribution is achieved mechanistically remains poorly understood. The current study aimed to reveal a mechanism underlying cortico-cerebellar coordination and demonstrate its speech-specificity. In a reanalysis of magnetoencephalography data, we found that activity in the cerebellum aligned to rhythmic sequences of noise-vocoded speech, irrespective of its intelligibility. We then tested whether these "entrained" responses persist, and how they interact with other brain regions, when a rhythmic stimulus stopped and temporal predictions had to be updated. We found that only intelligible speech produced sustained rhythmic responses in the cerebellum. During this "entrainment echo," but not during rhythmic speech itself, cerebellar activity was coupled with that in the left inferior frontal gyrus, and specifically at rates corresponding to the preceding stimulus rhythm. This finding represents evidence for specific cerebellum-driven temporal predictions in speech processing and their relay to cortical regions.
Subject(s)
Cerebellum, Magnetoencephalography, Humans, Cerebellum/physiology, Male, Female, Adult, Speech Perception/physiology, Young Adult, Speech/physiology, Speech Intelligibility/physiology
ABSTRACT
The human brain is a constructive organ. It generates predictions to modulate its functioning and continuously adapts to a dynamic environment. Increasingly, the temporal dimension of motor and non-motor behaviour is recognised as a key component of this predictive bias. Nevertheless, the intricate interplay of the neural mechanisms that encode, decode and evaluate temporal information to give rise to a sense of time and control over sensorimotor timing remains largely elusive. Among several brain systems, the basal ganglia have been consistently linked to interval- and beat-based timing operations. Considering the tight embedding of the basal ganglia into multiple complex neurofunctional networks, it is clear that they have to interact with other proximate and distal brain systems. While the primary target of basal ganglia output is the thalamus, many regions connect to the striatum of the basal ganglia, their main input relay. This establishes widespread connectivity, forming the basis for first- and second-order interactions with other systems implicated in timing such as the cerebellum and supplementary motor areas. However, next to this structural interconnectivity, additional functions need to be considered to better understand their contribution to temporally predictive adaptation. To this end, we develop the concept of interval-based patterning, conceived as a temporally explicit hierarchical sequencing operation that underlies motor and non-motor behaviour as a common interpretation of basal ganglia function.
Subject(s)
Basal Ganglia, Time Perception, Humans, Basal Ganglia/physiology, Time Perception/physiology, Neural Pathways/physiology, Animals, Thalamus/physiology, Nerve Net/physiology
ABSTRACT
Timing and rhythm abilities are complex and multidimensional skills that are highly widespread in the general population. This complexity can be partly captured by the Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA). The battery, consisting of four perceptual and five sensorimotor tests (finger-tapping), has been used in healthy adults and in clinical populations (e.g., Parkinson's disease, ADHD, developmental dyslexia, stuttering), and shows sensitivity to individual differences and impairment. However, major limitations for the generalized use of this tool are the lack of reliable and standardized norms and of a version of the battery that can be used outside the lab. To address these limitations, we developed a new version of BAASTA on a tablet device capable of ensuring lab-equivalent measurements of timing and rhythm abilities. We present normative data obtained with this version of BAASTA from over 100 healthy adults between the ages of 18 and 87 years in a test-retest protocol. Moreover, we propose a new composite score to summarize beat-based rhythm capacities, the Beat Tracking Index (BTI), with close to excellent test-retest reliability. BTI derives from two BAASTA tests (beat alignment, paced tapping), and offers a swift and practical way of measuring rhythmic abilities when research imposes strong time constraints. This mobile BAASTA implementation is more inclusive and far-reaching, while opening new possibilities for reliable remote testing of rhythmic abilities by leveraging accessible and cost-efficient technologies.
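The abstract does not specify how the Beat Tracking Index is derived from its two component tests, so the sketch below is purely hypothetical: it assumes a composite formed by averaging z-standardized scores from the beat alignment and paced tapping tests. All names and data here are illustrative, not the battery's actual scoring rule.

```python
import statistics

def beat_tracking_index(beat_alignment, paced_tapping):
    """Hypothetical composite: mean of z-scored per-participant results.

    `beat_alignment` and `paced_tapping` are lists of scores for the same
    participants; higher values are assumed to mean better performance.
    """
    def zscores(xs):
        mu, sd = statistics.mean(xs), statistics.stdev(xs)
        return [(x - mu) / sd for x in xs]

    za, zt = zscores(beat_alignment), zscores(paced_tapping)
    return [(a + t) / 2 for a, t in zip(za, zt)]

# toy scores for four participants (invented for illustration)
bti = beat_tracking_index([70, 85, 90, 60], [0.80, 0.90, 0.95, 0.70])
```

Because each z-scored list has mean zero, the composite is centered on zero as well; a participant's BTI then reads as a relative rhythmic ability within the sample.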
Subject(s)
Auditory Perception, Humans, Adult, Male, Middle Aged, Female, Aged, Young Adult, Auditory Perception/physiology, Adolescent, Reproducibility of Results, Aged 80 and over, Psychomotor Performance/physiology, Time Perception/physiology, Mobile Applications
ABSTRACT
During multisensory speech perception, slow δ oscillations (∼1-3 Hz) in the listener's brain synchronize with the speech signal, likely engaging in speech signal decomposition. Notable fluctuations in the speech amplitude envelope, reflecting speaker prosody, temporally align with articulatory and body gestures, and both provide complementary sensations that temporally structure speech. Further, δ oscillations in the left motor cortex seem to align with speech and musical beats, suggesting a possible role in the temporal structuring of (quasi-)rhythmic stimulation. We extended the role of δ oscillations to audiovisual asynchrony detection as a test case of the temporal analysis of multisensory prosody fluctuations in speech. We recorded electroencephalographic (EEG) responses in an audiovisual asynchrony detection task while participants watched videos of a speaker. We filtered the speech signal to remove verbal content and examined how visual and auditory prosodic features temporally (mis-)align. Results confirmed (1) that participants accurately detected audiovisual asynchrony, (2) increased δ power in the left motor cortex in response to audiovisual asynchrony, with the difference in δ power between asynchronous and synchronous conditions predicting behavioral performance, and (3) decreased δ-β coupling in the left motor cortex when listeners could not accurately map visual and auditory prosodies. Finally, both behavioral and neurophysiological evidence was altered when the speaker's face was degraded by a visual mask. Together, these findings suggest that motor δ oscillations support asynchrony detection of multisensory prosodic fluctuations in speech.
SIGNIFICANCE STATEMENT Speech perception is facilitated by regular prosodic fluctuations that temporally structure the auditory signal. Auditory speech processing involves the left motor cortex and associated δ oscillations. However, visual prosody (i.e., a speaker's body movements) complements auditory prosody, and it is unclear how the brain temporally analyses different prosodic features in multisensory speech perception. We combined an audiovisual asynchrony detection task with electroencephalographic (EEG) recordings to investigate how δ oscillations support the temporal analysis of multisensory speech. Results confirmed that asynchrony detection between visual and auditory prosodies leads to increased δ power in the left motor cortex and correlates with performance. We conclude that δ oscillations are invoked to resolve temporal asynchrony in multisensory speech perception.
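Band-limited δ power of the kind reported above is commonly estimated by band-pass filtering and taking the analytic (Hilbert) envelope. The sketch below is a minimal illustration on a toy signal, not the study's pipeline: the sampling rate, filter design, and band edges are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_power(signal, fs, low, high, order=4):
    """Mean instantaneous power in [low, high] Hz: Butterworth band-pass
    (zero-phase, second-order sections) followed by the Hilbert envelope."""
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    envelope = np.abs(hilbert(sosfiltfilt(sos, signal)))
    return np.mean(envelope ** 2)

fs = 250  # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
# toy trace: strong 2 Hz (delta) component plus a weak 20 Hz (beta) component
x = 2.0 * np.sin(2 * np.pi * 2 * t) + 0.3 * np.sin(2 * np.pi * 20 * t)
delta = band_power(x, fs, 1, 3)   # δ band, ~1-3 Hz as in the abstract
beta = band_power(x, fs, 13, 30)  # β band
```

A δ-β coupling analysis would go further, relating δ phase to β amplitude, but the per-band power estimate above is the building block for the power contrast the abstract describes.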
Asunto(s)
Percepción del Habla , Estimulación Acústica , Percepción Auditiva/fisiología , Electroencefalografía , Humanos , Estimulación Luminosa , Habla , Percepción del Habla/fisiología , Percepción Visual/fisiologíaRESUMEN
Several theories of predictive processing propose reduced sensory and neural responses to anticipated events. Support comes from magnetoencephalography/electroencephalography (M/EEG) studies, showing reduced auditory N1 and P2 responses to self-generated compared to externally generated events, or when the timing and form of stimuli are more predictable. The current study examined the sensitivity of N1 and P2 responses to statistical speech regularities. We employed a motor-to-auditory paradigm comparing event-related potential (ERP) responses to externally and self-triggered pseudowords. Participants were presented with a cue indicating which button to press (motor-auditory condition) or which pseudoword would be presented (auditory-only condition). Stimuli consisted of the participant's own voice uttering pseudowords that varied in phonotactic probability and syllable stress. We expected to see N1 and P2 suppression for self-triggered stimuli, with greater suppression effects for more predictable features such as high phonotactic probability and first-syllable stress in pseudowords. In a temporal principal component analysis (PCA), we observed an interaction between syllable stress and condition for the N1, where second-syllable stress items elicited a larger N1 than first-syllable stress items, but only for externally generated stimuli. We further observed an effect of syllable stress on the P2, where first-syllable stress items elicited a larger P2. Strikingly, we did not observe motor-induced suppression for self-triggered stimuli for either the N1 or P2 component, likely due to the temporal predictability of the stimulus onset in both conditions. Taking into account previous findings, the current results suggest that sensitivity to syllable stress regularities depends on task demands.
Subject(s)
Auditory Evoked Potentials, Speech, Humans, Auditory Evoked Potentials/physiology, Acoustic Stimulation/methods, Electroencephalography
ABSTRACT
In the absence of sensory stimulation, the brain transitions between distinct functional networks. Network dynamics, such as transition patterns and the time the brain spends in each network, are linked to cognition and behavior and are the subject of much investigation. Auditory verbal hallucinations (AVH), the temporally fluctuating unprovoked experience of hearing voices, are associated with aberrant resting state network activity. However, we lack a clear understanding of how different networks contribute to aberrant activity over time. An accurate characterization of latent network dynamics and their relation to neurocognitive changes necessitates methods that capture the sub-second temporal fluctuations of the networks' functional connectivity signatures. Here, we critically evaluate the assumptions and sensitivity of several approaches commonly used to assess temporal dynamics of brain connectivity states in M/EEG and fMRI research, highlighting methodological constraints and their clinical relevance to AVH. Identifying altered brain connectivity states linked to AVH can facilitate the detection of predictive disease markers and ultimately be valuable for generating individual risk profiles, differential diagnosis, targeted intervention, and treatment strategies.
Subject(s)
Schizophrenia, Brain/diagnostic imaging, Brain Mapping, Hallucinations/diagnostic imaging, Humans, Magnetic Resonance Imaging
ABSTRACT
Stimuli that evoke emotions are salient, draw attentional resources, and facilitate situationally appropriate behavior in complex or conflicting environments. However, negative and positive emotions may motivate different response strategies. For example, a threatening stimulus might evoke avoidant behavior, whereas a positive stimulus may prompt approaching behavior. Therefore, emotional stimuli might either elicit differential behavioral responses when a conflict arises or simply mark salience. The present study used functional magnetic resonance imaging to investigate valence-specific emotion effects on attentional control in conflict processing by employing an adapted flanker task with neutral, negative, and positive stimuli. Slower responses were observed for incongruent than congruent trials. Neural activity in the dorsal anterior cingulate cortex was associated with conflict processing regardless of emotional stimulus quality. These findings confirm that both negative and positive emotional stimuli mark salience in both low (congruent) and high (incongruent) conflict scenarios. Regardless of the conflict level, emotional stimuli recruited greater attentional resources during goal-directed behavior.
Asunto(s)
Conflicto Psicológico , Giro del Cíngulo , Humanos , Giro del Cíngulo/fisiología , Tiempo de Reacción/fisiología , Emociones/fisiología , Atención/fisiología , Imagen por Resonancia Magnética/métodosRESUMEN
BACKGROUND: Dissociative seizures (DS) are a common subtype of functional neurological disorder (FND) with an incompletely understood pathophysiology. Here, gray matter variations and their relationship to clinical features were investigated. METHODS: Forty-eight patients with DS without neurological comorbidities and 43 matched clinical control patients with syncope with structural brain MRIs were identified retrospectively. FreeSurfer-based cortical thickness and FSL FIRST-based subcortical volumes were used for quantitative analyses, and all findings were age and sex adjusted, and corrected for multiple comparisons. RESULTS: Groups were not statistically different in cortical thickness or subcortical volumes. For patients with DS, illness duration was inversely correlated with cortical thickness of left-sided anterior and posterior cortical midline structures (perigenual/dorsal anterior cingulate cortex, superior parietal cortex, precuneus), and clusters at the left temporoparietal junction (supramarginal gyrus, postcentral gyrus, superior temporal gyrus), left postcentral gyrus, and right pericalcarine cortex. Dissociative seizure duration was inversely correlated with cortical thickness in the left perigenual anterior cingulate cortex, superior/middle frontal gyri, precentral gyrus and lateral occipital cortex, along with the right isthmus-cingulate and posterior-cingulate, middle temporal gyrus, and precuneus. Seizure frequency did not show any significant correlations. CONCLUSIONS: In patients with DS, illness duration inversely correlated with cortical thickness of left-sided default mode network cortical hubs, while seizure duration correlated with left frontopolar and right posteromedial areas, among others. Etiological factors contributing to neuroanatomical variations in areas related to self-referential processing in patients with DS require more research inquiry.
Subject(s)
Cerebral Cortex, Default Mode Network, Dissociative Disorders, Seizures, Cerebral Cortex/diagnostic imaging, Default Mode Network/diagnostic imaging, Dissociative Disorders/diagnostic imaging, Humans, Magnetic Resonance Imaging, Retrospective Studies, Seizures/diagnostic imaging
ABSTRACT
Leading models of visual word recognition assume that the process of word identification is driven by abstract, case-invariant units (e.g., table and TABLE activate the same abstract representation). But do these models need to be modified to meet nuances of orthography as in German, where the first letter of common nouns is capitalized (e.g., Buch [book] and Hund [dog], but blau [blue])? To examine the role of initial capitalization of German words in lexical access, we chose a semantic categorization task ("is the word an animal name?"). In Experiment 1, we compared German words in all-lowercase vs. initial capitalization (hund, buch, blau vs. Hund, Buch, Blau). Results showed faster responses for animal nouns with initial capitalization (Hund < hund) and faster responses for lowercase non-nouns (blau < Blau). Surprisingly, we found faster responses for lowercase non-animal nouns (buch < Buch). As the latter difference could derive from task demands (i.e., buch does not follow German orthographic rules and requires a "no" response), we replaced the all-lowercase format with an orthographically legal all-uppercase format in Experiment 2. Results showed an advantage for all nouns with initial capitalization (Hund < HUND and Buch < BUCH). These findings clearly show that initial capitalization in German words constitutes an essential part of the words' representations and is used during lexical access. Thus, models of visual word recognition, primarily focused on English orthography, should be expanded to the idiosyncrasies of other Latin-based orthographies.
Subject(s)
Language, Humans, Names, Reading, Semantics
ABSTRACT
Introduction: Auditory verbal hallucinations (AVH) are a cardinal symptom of schizophrenia but are also reported in the general population without need for psychiatric care. Previous evidence suggests that AVH may reflect an imbalance of prior expectation and sensory information, and that altered salience processing is characteristic of both psychotic and non-clinical voice hearers. However, it remains to be shown how such an imbalance affects the categorisation of vocal emotions under perceptual ambiguity. Methods: Neutral and emotional nonverbal vocalisations were morphed along two continua differing in valence (anger; pleasure), each including 11 morphing steps at intervals of 10%. College students (N = 234) differing in AVH proneness (measured with the Launay-Slade Hallucination Scale) evaluated the emotional quality of the vocalisations. Results: Increased AVH proneness was associated with more frequent categorisation of ambiguous vocalisations as 'neutral', irrespective of valence. Similarly, the perceptual boundary for emotional classification was shifted by AVH proneness: participants needed more emotional information to categorise a voice as emotional. Conclusions: These findings suggest that emotional salience in vocalisations is dampened as a function of increased AVH proneness. This could be related to changes in the acoustic representations of emotions or reflect top-down expectations of less salient information in the social environment.
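An 11-step morphing continuum and a shifted perceptual boundary of the kind described above can be illustrated by fitting a logistic psychometric function to categorization proportions. The response data and starting values below are invented for illustration and do not reproduce the study's results.

```python
import numpy as np
from scipy.optimize import curve_fit

# 11 morphing steps at 10% intervals: 0% (neutral) to 100% (emotional)
morph_levels = np.arange(0, 101, 10)

def logistic(x, boundary, slope):
    """Psychometric function: P('emotional') as a function of morph level.
    `boundary` is the morph level at which the listener responds
    'emotional' half the time (the perceptual category boundary)."""
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

# toy categorization proportions for one listener (hypothetical data)
p_emotional = np.array([0.02, 0.03, 0.05, 0.10, 0.30, 0.55,
                        0.75, 0.90, 0.95, 0.98, 0.99])
(boundary, slope), _ = curve_fit(logistic, morph_levels, p_emotional,
                                 p0=[50.0, 0.1])
```

In this framing, the abstract's finding corresponds to a larger fitted `boundary` for listeners with higher AVH proneness: more morphing toward the emotional endpoint is needed before the voice is categorized as emotional.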
Subject(s)
Schizophrenia, Voice, Anger, Emotions, Hallucinations/psychology, Humans
ABSTRACT
Growing evidence shows that theta-band (4-7 Hz) activity in the auditory cortex phase-locks to rhythms of overt speech. Does theta activity also encode the rhythmic dynamics of inner speech? Previous research established that silent reading of direct speech quotes (e.g., Mary said: "This dress is lovely!") elicits more vivid inner speech than indirect speech quotes (e.g., Mary said that the dress was lovely). As we cannot directly track the phase alignment between theta activity and inner speech over time, we used EEG to measure the brain's phase-locked responses to the onset of speech quote reading. We found that direct (vs. indirect) quote reading was associated with increased theta phase synchrony over trials at 250-500 ms post-reading onset, with sources of the evoked activity estimated in the speech processing network. An eye-tracking control experiment confirmed that increased theta phase synchrony in direct quote reading was not driven by eye movement patterns, and more likely reflects synchronous phase resetting at the onset of inner speech. These findings suggest a functional role of theta phase modulation in reading-induced inner speech.
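Theta phase synchrony over trials, as reported above, is commonly quantified as inter-trial phase coherence (ITPC): the length of the mean resultant vector of single-trial phase angles at a given time-frequency point. The sketch below uses simulated phase data to show the measure itself; it is not the study's analysis pipeline.

```python
import numpy as np

def itpc(phases):
    """Inter-trial phase coherence: magnitude of the mean of unit phase
    vectors across trials (0 = uniformly scattered phases, 1 = identical)."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

rng = np.random.default_rng(0)
# phase-locked condition: phases clustered around 0 rad across 100 trials
aligned = rng.normal(loc=0.0, scale=0.3, size=100)
# non-locked condition: phases uniform on the circle
scattered = rng.uniform(-np.pi, np.pi, size=100)

high = itpc(aligned)
low = itpc(scattered)
```

A direct-quote onset that resets theta phase consistently across trials would push ITPC toward the `aligned` case, which is the pattern the abstract reports at 250-500 ms post-reading onset.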
Subject(s)
Auditory Cortex/physiology, Electroencephalography, Electrooculography, Eye Movements/physiology, Mental Processes/physiology, Reading, Speech/physiology, Theta Rhythm/physiology, Adult, Female, Humans, Male, Time Factors, Young Adult
ABSTRACT
Vocal flexibility is a hallmark of the human species, most particularly the capacity to speak and sing. This ability is supported in part by the evolution of a direct neural pathway linking the motor cortex to the brainstem nucleus that controls the larynx, the primary sound source for communication. Early brain imaging studies demonstrated that the larynx motor cortex at the dorsal end of the orofacial division of motor cortex (dLMC) integrated laryngeal and respiratory control, thereby coordinating two major muscular systems that are necessary for vocalization. Neurosurgical studies have since demonstrated the existence of a second larynx motor area at the ventral extent of the orofacial motor division (vLMC) of motor cortex. The vLMC has been presumed to be less relevant to speech motor control, but its functional role remains unknown. We employed a novel ultra-high field (7T) magnetic resonance imaging paradigm that combined singing and whistling simple melodies to localise the larynx motor cortices and test their involvement in respiratory motor control. Surprisingly, whistling activated both 'larynx areas' more strongly than singing despite the reduced involvement of the larynx during whistling. We provide further evidence for the existence of two larynx motor areas in the human brain, and the first evidence that laryngeal-respiratory integration is a shared property of both larynx motor areas. We outline explicit predictions about the descending motor pathways that give these cortical areas access to both the laryngeal and respiratory systems and discuss the implications for the evolution of speech.
Subject(s)
Larynx/physiology, Magnetic Resonance Imaging/methods, Motor Cortex/physiology, Neural Pathways/physiology, Respiration, Speech/physiology, Adult, Female, Humans, Least-Squares Analysis, Male, Motor Cortex/diagnostic imaging, Respiratory Mechanics/physiology, Rest/physiology, Singing/physiology, Young Adult
ABSTRACT
Self-voice attribution can become difficult when voice characteristics are ambiguous, but functional magnetic resonance imaging (fMRI) investigations of such ambiguity are sparse. We utilized voice-morphing (self-other) to manipulate (un-)certainty in self-voice attribution in a button-press paradigm. This allowed investigating how levels of self-voice certainty alter brain activation in brain regions monitoring voice identity and unexpected changes in voice playback quality. FMRI results confirmed a self-voice suppression effect in the right anterior superior temporal gyrus (aSTG) when self-voice attribution was unambiguous. Although the right inferior frontal gyrus (IFG) was more active during a self-generated compared to a passively heard voice, the putative role of this region in detecting unexpected self-voice changes during the action was demonstrated only when hearing the voice of another speaker and not when attribution was uncertain. Further research on the link between right aSTG and IFG is required and may establish a threshold for monitoring voice identity in action. The current results have implications for a better understanding of the altered experience of self-voice feedback in auditory verbal hallucinations.
Subject(s)
Voice, Brain/diagnostic imaging, Brain Mapping, Hallucinations, Humans, Magnetic Resonance Imaging
ABSTRACT
The informative value of time and temporal structure often remains neglected in cognitive assessments. However, next to information about stimulus identity, we can exploit temporal ordering principles, such as regularity, periodicity, or grouping, to generate predictions about the timing of future events. Such predictions may improve cognitive performance by optimising adaptation to dynamic stimuli. Here, we investigated the influence of temporal structure on verbal working memory by assessing immediate recall performance for aurally presented digit sequences (forward digit span) as a function of standard (1000 ms stimulus onset asynchronies, SOAs), short (700 ms), long (1300 ms) and mixed (700-1300 ms) stimulus timing during the presentation phase. Participants' digit spans were lower for short and mixed SOA presentation relative to standard SOAs. This confirms an impact of temporal structure on the classic "magical number seven," suggesting that working memory performance can in part be regulated through the systematic application of temporal ordering principles.
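The timing conditions above can be sketched as simple onset-time generators. The digit count and the uniform jitter scheme for the mixed condition are illustrative assumptions, not the study's exact protocol.

```python
import random

def onset_times(n_digits, soa_ms, rng=None):
    """Stimulus onsets (ms) for one digit sequence.

    `soa_ms` is either a fixed stimulus onset asynchrony, or a
    (min, max) tuple sampled uniformly per interval for the mixed
    condition."""
    rng = rng or random.Random(0)
    onsets, t = [0], 0
    for _ in range(n_digits - 1):
        soa = rng.uniform(*soa_ms) if isinstance(soa_ms, tuple) else soa_ms
        t += soa
        onsets.append(t)
    return onsets

standard = onset_times(7, 1000)      # standard: 1000 ms SOA
short = onset_times(7, 700)          # short: 700 ms SOA
long_ = onset_times(7, 1300)         # long: 1300 ms SOA
mixed = onset_times(7, (700, 1300))  # mixed: jittered 700-1300 ms
```

The mixed condition keeps the mean SOA comparable to standard while removing temporal regularity, which is what lets the study isolate the contribution of predictable timing to recall.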
Subject(s)
Cognition/physiology, Short-Term Memory/physiology, Adult, Female, Humans, Male, Young Adult
ABSTRACT
In a multi- and inter-cultural world, we encounter new words daily. Adult learners often rely on a situational context to learn and understand a new word's meaning. Here, we explored whether interactive learning facilitates word learning by directing the learner's attention to a correct new word referent when a situational context is non-informative. We predicted larger involvement of inferior parietal, frontal, and visual cortices involved in visuo-spatial attention during interactive learning. We scanned participants while they played a visual word learning game with and without a social partner. As hypothesized, interactive learning enhanced activity in the right supramarginal gyrus when the situational context provided little information. Activity in the right inferior frontal gyrus during interactive learning correlated with post-scanning behavioral test scores, while these scores correlated with activity in the fusiform gyrus in the non-interactive group. These results indicate that attention is involved in interactive learning when the situational context is minimal and suggest that individual learning processes may be largely different from interactive ones. As such, they challenge the ecological validity of what we know about individual learning and advocate the exploration of interactive learning in naturalistic settings.
Subject(s)
Attention/physiology, Brain/physiology, Language, Social Learning/physiology, Verbal Learning/physiology, Adult, Female, Humans, Magnetic Resonance Imaging, Male
ABSTRACT
Neural activity phase-locks to rhythm in both music and speech. However, the literature currently lacks a direct test of whether cortical tracking of comparable rhythmic structure is similar across domains. Moreover, although musical training improves multiple aspects of music and speech perception, the relationship between musical training and cortical tracking of rhythm has not been compared directly across domains. We recorded electroencephalograms (EEG) from 28 participants (14 female) with a range of musical training who listened to melodies and sentences with identical rhythmic structure. We compared cerebral-acoustic coherence (CACoh) between the EEG signal and single-trial stimulus envelopes (as a measure of cortical entrainment) across domains and correlated years of musical training with CACoh. We hypothesized that neural activity would be comparably phase-locked across domains, and that the amount of musical training would be associated with increasingly strong phase locking in both domains. We found that participants with only a few years of musical training had a comparable cortical response to music and speech rhythm, partially supporting the hypothesis. However, the cortical response to music rhythm increased with years of musical training while the response to speech rhythm did not, leading to an overall greater cortical response to music rhythm across all participants. We suggest that task demands shaped the asymmetric cortical tracking across domains.
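Cerebral-acoustic coherence of the kind described above can be illustrated as magnitude-squared coherence between an EEG trace and the stimulus amplitude envelope. The sketch below uses toy data; the sampling rate, segment length, and 2 Hz stimulus rhythm are assumptions for illustration, not the study's parameters.

```python
import numpy as np
from scipy.signal import coherence

fs = 500  # assumed sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(1)

# toy stimulus envelope with a 2 Hz rhythm, and an "EEG" trace that
# partially tracks that envelope plus independent noise
envelope = 1 + np.sin(2 * np.pi * 2 * t)
eeg = 0.5 * envelope + rng.normal(0.0, 1.0, t.size)

# Welch-based magnitude-squared coherence; 2 s segments give 0.5 Hz bins
freqs, coh = coherence(eeg, envelope, fs=fs, nperseg=fs * 2)
coh_2hz = coh[np.argmin(np.abs(freqs - 2.0))]
```

A higher value at the stimulus rate (here `coh_2hz`) indicates stronger cortical tracking of the envelope, which is the quantity the study correlates with years of musical training.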
Subject(s)
Cerebral Cortex/physiology, Music, Pitch Perception/physiology, Speech Perception/physiology, Adult, Brain Mapping/methods, Electroencephalography/methods, Female, Humans, Male, Young Adult
ABSTRACT
It is widely accepted that unexpected sensory consequences of self-action engage the cerebellum. However, we currently lack consensus on where in the cerebellum we find fine-grained differentiation to unexpected sensory feedback. This may result from methodological diversity in task-based human neuroimaging studies that experimentally alter the quality of self-generated sensory feedback. We gathered existing studies that manipulated sensory feedback using a variety of methodological approaches and performed activation likelihood estimation (ALE) meta-analyses. Only half of these studies reported cerebellar activation, with considerable variation in spatial location. Consequently, ALE analyses did not reveal a significantly increased likelihood of activation in the cerebellum, despite the broad scientific consensus of the cerebellum's involvement. In light of the high degree of methodological variability in published studies, we tested for statistical dependence between methodological factors that varied across the published studies. Experiments that elicited an adaptive response to continuously altered sensory feedback more frequently reported activation in the cerebellum than experiments that did not induce adaptation. These findings may explain the surprisingly low rate of significant cerebellar activation across brain imaging studies investigating unexpected sensory feedback. Furthermore, limitations of functional magnetic resonance imaging in probing the cerebellum could play a role, as climbing fiber activity associated with feedback error processing may not be captured by this method. We provide methodological recommendations that may guide future studies.
Subject(s)
Physiological Adaptation/physiology, Cerebellum/physiology, Cerebral Cortex/physiology, Sensory Feedback/physiology, Functional Neuroimaging, Magnetic Resonance Imaging, Cerebral Cortex/diagnostic imaging, Humans
ABSTRACT
Most human communication is carried by modulations of the voice. However, a wide range of cultures has developed alternative forms of communication that make use of a whistled sound source. For example, whistling is used as a highly salient signal for capturing attention, and can have iconic cultural meanings such as the catcall, enact a formal code as in boatswain's calls or stand as a proxy for speech in whistled languages. We used real-time magnetic resonance imaging to examine the muscular control of whistling to describe a strong association between the shape of the tongue and the whistled frequency. This bioacoustic profile parallels the use of the tongue in vowel production. This is consistent with the role of whistled languages as proxies for spoken languages, in which one of the acoustical features of speech sounds is substituted with a frequency-modulated whistle. Furthermore, previous evidence that non-human apes may be capable of learning to whistle from humans suggests that these animals may have similar sensorimotor abilities to those that are used to support speech in humans.
Subject(s)
Magnetic Resonance Imaging, Singing, Speech, Acoustics, Humans, Tongue
ABSTRACT
Introduction: Auditory verbal hallucinations (AVH) are a core symptom of psychotic disorders such as schizophrenia but are also reported in 10-15% of the general population. Impairments in self-voice recognition are frequently reported in schizophrenia and associated with the severity of AVH, particularly when the self-voice has a negative quality. However, whether self-voice processing is also affected in nonclinical voice hearers remains to be specified. Methods: Thirty-five nonclinical participants varying in hallucination predisposition (HP), based on the Launay-Slade Hallucination Scale, listened to prerecorded words and vocalisations differing in identity (self/other) and emotional quality. In Experiment 1, participants indicated whether words were spoken in their own voice, another voice, or whether they were unsure (recognition task). They were also asked whether pairs of words/vocalisations were uttered by the same or by a different speaker (discrimination task). In Experiment 2, participants judged the emotional quality of the words/vocalisations. Results: In Experiment 1, hallucination predisposition affected voice discrimination and recognition, irrespective of stimulus valence. Hallucination predisposition did not affect the evaluation of the emotional valence of words/vocalisations (Experiment 2). Conclusions: These findings suggest that nonclinical participants with high HP experience altered voice identity processing, whereas HP does not affect the perception of vocal emotion. Specific alterations in self-voice perception in clinical and nonclinical voice hearers may establish a core feature of the psychosis continuum.