RESUMO
Objective: To investigate an alternative approach to family participatory nursing in neonatal intensive care units (NICUs) during the COVID-19 pandemic, focusing on auditory interventions to mitigate the effects of maternal separation (MS) on neonatal neurological development. Methods: This study was a randomized, double-blind, prospective trial involving 100 newborns younger than 6 months old, born between January 2022 and October 2022, who experienced MS for more than 2 weeks. Newborns were randomly allocated into control and intervention groups using a computer-generated list to ensure unbiased selection. Inclusion criteria were gestational age ≥37 weeks and admission to NICU due to various medical conditions; exclusion criteria included severe hearing impairment and congenital neurological disorders. The intervention group received maternal voice exposure at 40-50 dB for eight 30-minute sessions daily, while the control group was exposed to children's songs at the same volume and duration. Key metrics such as oxygen saturation, heart rate, Neonatal Infant Pain Scale (NIPS) scores, and Neonatal Behavioral Neurological Assessment (NBNA) scores were measured before and after the intervention period, which lasted one week. Results: Post-intervention, the NIPS scores in the intervention group were significantly lower (3.45±0.99) than in the control group (5.36±0.49, P < .01), indicating reduced pain sensitivity. Additionally, NBNA scores were higher in the intervention group (39.90±1.56) than in the control group (35.86±1.05, P < .01), suggesting enhanced neurological development. No significant difference in pre-intervention blood oxygen saturation levels was observed between the groups. However, the intervention group showed less reduction in oxygen saturation during and after blood collection, with significantly higher levels at 2, 4, and 6 minutes post-procedure (P < .01).
The findings underscore the significance of maternal voice as a non-pharmacological intervention to alleviate pain and foster neurological development in neonates facing MS, especially in situations where traditional family participatory nursing is hindered by the COVID-19 pandemic. Integrating maternal voice stimulation into neonatal care strategies offers a viable method to improve outcomes for newborns undergoing MS. Conclusion: Maternal voice intervention presents a promising strategy to diminish pain sensitivity and bolster neurological development in neonates separated from their mothers, particularly when family participatory nursing practices are constrained by pandemic-related restrictions. These findings advocate for the broader implementation of maternal voice stimulation in NICU settings.
Assuntos
COVID-19 , Humanos , Recém-Nascido , COVID-19/prevenção & controle , COVID-19/epidemiologia , Feminino , Masculino , Método Duplo-Cego , Voz/fisiologia , Estudos Prospectivos , Mães/psicologia , SARS-CoV-2 , Neurônios Motores/fisiologia , Unidades de Terapia Intensiva Neonatal , Adulto , Lactente
RESUMO
OBJECTIVE: This study aimed to assess the effects of surface electrical stimulation plus voice therapy on voice in dysphonic patients with idiopathic Parkinson's disease. METHOD: Patients were assigned to 3 treatment groups (n = 28 per group) and received daily treatment, 5 days a week, for 3 weeks. All three groups received voice therapy (usual care). In addition, two groups received surface electrical stimulation, either motor-level or sensory-level stimulation. A standardised measurement protocol to evaluate therapeutic effects included the Voice Handicap Index and videolaryngostroboscopy. RESULTS: Voice Handicap Index and videolaryngostroboscopic assessment showed statistically significant differences between baseline and post-treatment across all groups, without any post-treatment differences between the three groups. CONCLUSION: Intensive voice therapy (usual care) improved idiopathic Parkinson's disease patients' self-assessment of voice impairment and the videolaryngostroboscopic outcome score. However, surface electrical stimulation used as an add-on to usual care did not further improve idiopathic Parkinson's disease patients' self-assessment of voice impairment or the videolaryngostroboscopic outcome scores.
Assuntos
Terapia por Estimulação Elétrica , Doença de Parkinson , Distúrbios da Voz , Voz , Humanos , Doença de Parkinson/complicações , Doença de Parkinson/terapia , Voz/fisiologia , Distúrbios da Voz/etiologia , Distúrbios da Voz/terapia , Estimulação Elétrica , Resultado do Tratamento
RESUMO
The recognition of human speakers by their voices is a remarkable cognitive ability. Previous research has established a voice area in the right temporal cortex involved in the integration of speaker-specific acoustic features. This integration appears to occur rapidly, especially in the case of familiar voices. However, the exact time course of this process is less well understood. To address this, we investigated the automatic change detection response of the human brain while listening to the famous voice of German chancellor Angela Merkel, embedded in a context of acoustically matched voices. A classic passive oddball paradigm contrasted short word stimuli uttered by Merkel with word stimuli uttered by two unfamiliar female speakers. Electrophysiological voice processing indices from 21 participants were quantified as mismatch negativities (MMNs) and P3a differences. Cortical sources were approximated by variable resolution electromagnetic tomography. The results showed amplitude and latency effects for both MMN and P3a: the famous (familiar) voice elicited a smaller but earlier MMN than the unfamiliar voices. The P3a, by contrast, was both larger and later for the familiar than for the unfamiliar voices. Familiar-voice MMNs originated from right-hemispheric regions in temporal cortex, overlapping with the temporal voice area, while unfamiliar-voice MMNs stemmed from the left superior temporal gyrus. These results suggest that the processing of a very famous voice relies on pre-attentive right temporal processing within the first 150 ms of the acoustic signal. The findings further our understanding of the neural dynamics underlying familiar voice processing.
Assuntos
Voz , Estimulação Acústica , Atenção , Percepção Auditiva/fisiologia , Mapeamento Encefálico , Feminino , Humanos , Reconhecimento Psicológico/fisiologia , Voz/fisiologia
RESUMO
A growing number of functional neuroimaging studies have identified regions within the temporal lobe, particularly along the planum polare and planum temporale, that respond more strongly to music than to other types of acoustic stimuli, including voice. These "music-preferred" regions have been reported using a variety of stimulus sets, paradigms, and analysis approaches, and their consistency across studies has been confirmed through meta-analyses. However, the critical question of the intra-subject reliability of these responses has received less attention. Here, we directly assessed this important issue by contrasting brain responses to musical vs. vocal stimuli in the same subjects across three consecutive fMRI runs, using different types of stimuli. Moreover, we investigated whether these music- and voice-preferred responses were reliably modulated by expertise. Results demonstrated that the music-preferred activity previously reported in temporal regions, and its modulation by expertise, exhibits high intra-subject reliability. However, we also found that activity in some extra-temporal regions, such as the precentral and middle frontal gyri, did depend on the particular stimuli employed, which may explain why these regions are less consistently reported in the literature. Taken together, our findings confirm and extend the notion that specific regions in the brain consistently respond more strongly to certain socially relevant stimulus categories, such as faces, voices and music, but that some of these responses appear to depend, at least to some extent, on the specific features of the paradigm employed.
Assuntos
Música , Voz , Estimulação Acústica , Percepção Auditiva/fisiologia , Encéfalo/fisiologia , Mapeamento Encefálico , Humanos , Imageamento por Ressonância Magnética , Reprodutibilidade dos Testes , Voz/fisiologia
RESUMO
Cross-modal integration is ubiquitous within perception and, in humans, the McGurk effect demonstrates that seeing a person articulating speech can change what we hear into a new auditory percept. It remains unclear whether cross-modal integration of sight and sound generalizes to other visible vocal articulations, like those made by singers. We surmise that perceptual integrative effects should involve music deeply, since there is ample indeterminacy and variability in its auditory signals. We show that switching the videos of sung musical intervals systematically changes the estimated distance between the two notes of the interval: pairing the video of a smaller sung interval with a relatively larger auditory interval led to compression effects on rated intervals, whereas the reverse pairing led to a stretching effect. In addition, after seeing a visually switched video of an equally-tempered sung interval and then hearing the same interval played on the piano, the two intervals were often judged to be different, even though they differed only in instrument. These findings reveal spontaneous cross-modal integration of vocal sounds and clearly indicate that strong integration of sound and sight can occur beyond the articulations of natural speech.
Assuntos
Percepção Auditiva/fisiologia , Músculos Faciais/fisiologia , Movimento/fisiologia , Música/psicologia , Canto/fisiologia , Som , Voz/fisiologia , Estimulação Acústica/métodos , Adolescente , Adulto , Feminino , Audição/fisiologia , Humanos , Masculino , Fala , Percepção da Fala/fisiologia , Estudantes/psicologia , Adulto Jovem
RESUMO
Until recently, research on the brain networks underlying emotional voice prosody decoding and processing focused on modulations in primary and secondary auditory, ventral frontal and prefrontal cortices, and the amygdala. Growing interest in a specific role of the basal ganglia and cerebellum was recently brought into the spotlight. In the present study, we aimed at characterizing the role of such subcortical brain regions in vocal emotion processing, at the level of both brain activation and functional and effective connectivity, using high resolution functional magnetic resonance imaging. Variance explained by low-level acoustic parameters (fundamental frequency, voice energy) was also modelled. Whole-brain data revealed the expected contributions of the temporal and frontal cortices, basal ganglia and cerebellum to vocal emotion processing, while functional connectivity analyses highlighted correlations between the basal ganglia and cerebellum, especially for angry voices. Seed-to-seed and seed-to-voxel effective connectivity revealed direct connections within the basal ganglia, especially between the putamen and external globus pallidus, and between the subthalamic nucleus and the cerebellum. Our results speak in favour of crucial contributions of the basal ganglia, especially the putamen, external globus pallidus and subthalamic nucleus, and of several cerebellar lobules and nuclei to an efficient decoding of and response to vocal emotions.
Assuntos
Gânglios da Base/diagnóstico por imagem , Cerebelo/diagnóstico por imagem , Emoções/fisiologia , Imageamento por Ressonância Magnética , Voz/fisiologia , Estimulação Acústica , Acústica , Adulto , Feminino , Humanos , Masculino , Rede Nervosa/fisiologia
RESUMO
The factors affecting the penetration of certain diseases, such as COVID-19, in society are still unknown. Internet of Things (IoT) technologies can play a crucial role in times of crisis, providing a more holistic view of the reasons that govern the outbreak of a contagious disease. The understanding of COVID-19 will be enriched by the analysis of data related to the phenomenon, and this data can be collected using IoT sensors. In this paper, we present CIoTVID, an integrated solution based on IoT technologies that can serve as an opportunistic health data acquisition agent for combating the COVID-19 pandemic. The platform is composed of four layers: data acquisition, data aggregation, machine intelligence, and services. To demonstrate its validity, the solution has been successfully tested on a use case based on creating a classifier of medical conditions from real voice data. The data aggregation layer is particularly relevant in this kind of solution, as data coming from medical devices is of a very different nature to that coming from electronic sensors. Because the platform adapts to heterogeneous data types and data volumes, individuals, policymakers, and clinics could benefit from it in fighting the propagation of the pandemic.
Assuntos
COVID-19 , Internet das Coisas , Processamento de Sinais Assistido por Computador , Inteligência Artificial , COVID-19/complicações , COVID-19/diagnóstico , COVID-19/fisiopatologia , Humanos , Oximetria , Pandemias , SARS-CoV-2 , Espectrografia do Som/métodos , Voz/fisiologia
RESUMO
Human voice pitch is highly sexually dimorphic and eminently quantifiable, making it an ideal phenotype for studying the influence of sexual selection. In both traditional and industrial populations, lower pitch in men predicts mating success, reproductive success, and social status and shapes social perceptions, especially those related to physical formidability. Due to practical and ethical constraints however, scant evidence tests the central question of whether male voice pitch and other acoustic measures indicate actual fighting ability in humans. To address this, we examined pitch, pitch variability, and formant position of 475 mixed martial arts (MMA) fighters from an elite fighting league, with each fighter's acoustic measures assessed from multiple voice recordings extracted from audio or video interviews available online (YouTube, Google Video, podcasts), totaling 1312 voice recording samples. In four regression models each predicting a separate measure of fighting ability (win percentages, number of fights, Elo ratings, and retirement status), no acoustic measure significantly predicted fighting ability above and beyond covariates. However, after fight statistics, fight history, height, weight, and age were used to extract underlying dimensions of fighting ability via factor analysis, pitch and formant position negatively predicted "Fighting Experience" and "Size" factor scores in a multivariate regression model, explaining 3-8% of the variance. Our findings suggest that lower male pitch and formants may be valid cues of some components of fighting ability in men.
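The two-stage analysis this abstract describes (extracting underlying dimensions of fighting ability, then regressing the factor scores on acoustic measures) can be sketched as follows. This is an editor-added illustrative sketch: the simulated data, variable names, and the use of PCA as a stand-in for the authors' factor analysis are assumptions, not the study's actual data or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: rows = fighters, columns = z-scored fight
# statistics, fight history, height, weight, and age (475 fighters, as in
# the abstract; the six columns are illustrative only).
X = rng.standard_normal((475, 6))
pitch = rng.standard_normal(475)            # mean F0 per fighter (simulated)
formant_position = rng.standard_normal(475)  # formant position (simulated)

# Stage 1: extract two underlying dimensions of fighting ability.
# PCA via SVD is a simple stand-in for the factor analysis in the abstract.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
factor_scores = Xc @ Vt[:2].T  # two factors, e.g. "Experience" and "Size"

# Stage 2: regress each factor score on the acoustic measures.
predictors = np.column_stack([np.ones(475), pitch, formant_position])
for k, name in enumerate(["Factor 1", "Factor 2"]):
    beta, *_ = np.linalg.lstsq(predictors, factor_scores[:, k], rcond=None)
    resid = factor_scores[:, k] - predictors @ beta
    r2 = 1 - resid.var() / factor_scores[:, k].var()
    print(name, "coefficients:", beta.round(3), "R^2:", round(r2, 3))
```

With the study's real data, the abstract reports that pitch and formant position carried negative coefficients and explained roughly 3-8% of the variance in the extracted factors; the random data here will of course show near-zero effects.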
Assuntos
Agressão/fisiologia , Voz/fisiologia , Acústica , Adulto , Agressão/psicologia , Antropometria , Atletas/psicologia , Biomarcadores , Sinais (Psicologia) , Humanos , Masculino , Artes Marciais/fisiologia , Fenótipo , Discriminação da Altura Tonal/fisiologia , Comportamento Sexual/fisiologia , Comportamento Sexual/psicologia , Percepção Social/psicologia
RESUMO
ABSTRACT. Based on a research experience regarding the psychoanalytical group treatment of autistic children, the article reflects on the presence of more than one analyst in the setting. The interaction between analysts favors a non-directive approach, enabling the child to take action spontaneously, without being forced to a contact that can be extremely unsettling for the autist. Furthermore, one observes the effects of voice as a support for the caretakers, who talk, play and sing with each other, eliciting a libidinal excitement able to involve the autistic child. The music that reverberates through the circle games transmits both the symbolic aspects of culture and the real of la langue's jouissance. In a particular case, the singing prosody revealed itself to be a specific imaginary way to treat the real of voice which invades the autistic subject. Using folk songs as mediation objects, it was possible to conceive a treatment direction that took into account a solution that sprung from the subject, who, anticipated in this act, can hear the invocation to arise. Thus, one evidences, also in autism, the role of voice as pulsional object in the subjective constitution.
Assuntos
Humanos , Masculino , Feminino , Pré-Escolar , Transtorno Autístico/psicologia , Voz/fisiologia , Música/psicologia , Jogos e Brinquedos/psicologia , Psicologia/métodos , Transtornos Psicóticos/psicologia , Cuidado da Criança/psicologia , Saúde Mental/educação , Canto/fisiologia , Transtorno do Espectro Autista/psicologia , Sistemas de Apoio Psicossocial , Idioma , Serviços de Saúde Mental
RESUMO
This reflective article addresses aspects that deal with the complexity of objective setting in contemporary vocal approach frameworks. It addresses the complexity in selecting and writing objectives for holistic and eclectic voice therapy and the need to incorporate the ICF model and ASHA recommendations for the development of person-centered goals in both the short and long term. The use of the SMART analysis method is proposed and its specific application for voice therapy goal. Also, the formal aspects to be considered for precise wording are addressed. Finally, the proposal is exemplified through a clinical case. This proposal is intended to be useful for therapeutic and/or academic purposes, both in discussing the formula-tion and design of therapeutic plans and the reflective thinking associated with the vocal approach.
Assuntos
Voz/fisiologia , Distúrbios da Voz/diagnóstico , Disfonia/reabilitação , Fonação/fisiologia , Terapêutica , Treinamento da Voz , Distúrbios da Voz , Classificação Internacional de Funcionalidade, Incapacidade e Saúde , Disfonia
RESUMO
Spasmodic dysphonia (SD) is characterized by an involuntary laryngeal muscle spasm during vocalization. Previous studies measured brain activation during voice production and suggested that SD arises from abnormal sensorimotor integration involving the sensorimotor cortex. However, it remains unclear whether this abnormal sensorimotor activation merely reflects neural activation produced by abnormal vocalization. To identify the specific neural correlates of SD, we used a sound discrimination task without overt vocalization to compare neural activation between 11 patients with SD and healthy participants. Participants underwent functional MRI during a two-alternative judgment task for auditory stimuli, which could be modal or falsetto voice. Since vocalization in falsetto is intact in SD, we predicted that neural activation during speech perception would differ between the two groups only for modal voice and not for falsetto voice. Group-by-stimulus interaction was observed in the left sensorimotor cortex and thalamus, suggesting that voice perception activates different neural systems between the two groups. Moreover, the sensorimotor signals positively correlated with disease severity of SD, and classified the two groups with 73% accuracy in linear discriminant analysis. Thus, the sensorimotor cortex and thalamus play a central role in SD pathophysiology and sensorimotor signals can be a new biomarker for SD diagnosis.
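The classification step reported in this abstract (linear discriminant analysis on sensorimotor signals, separating patients from controls) can be illustrated with a minimal Fisher-discriminant sketch. This is an editor-added illustration under stated assumptions: the data are simulated, the two features are hypothetical, and the in-sample accuracy computed here does not reproduce the cross-validated 73% figure from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in: two sensorimotor signal amplitudes (e.g. left
# sensorimotor cortex and thalamus) for 11 patients and 11 controls.
patients = rng.normal([1.0, 0.8], 0.6, size=(11, 2))
controls = rng.normal([0.0, 0.0], 0.6, size=(11, 2))

# Fisher's linear discriminant: project onto w = Sw^-1 (m1 - m2).
m1, m2 = patients.mean(axis=0), controls.mean(axis=0)
Sw = np.cov(patients, rowvar=False) + np.cov(controls, rowvar=False)
w = np.linalg.solve(Sw, m1 - m2)
threshold = w @ (m1 + m2) / 2

# Classify: projection above the midpoint threshold -> patient (1).
X = np.vstack([patients, controls])
y = np.array([1] * 11 + [0] * 11)
pred = (X @ w > threshold).astype(int)
accuracy = (pred == y).mean()
print("in-sample classification accuracy:", round(accuracy, 2))
```

A real analysis would report leave-one-out or otherwise cross-validated accuracy rather than the optimistic in-sample value printed here.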
Assuntos
Percepção Auditiva/fisiologia , Disfonia/diagnóstico , Disfonia/psicologia , Córtex Sensório-Motor/fisiopatologia , Percepção da Fala/fisiologia , Voz/fisiologia , Estimulação Acústica , Adolescente , Adulto , Idoso , Biomarcadores , Criança , Disfonia/fisiopatologia , Feminino , Humanos , Imageamento por Ressonância Magnética , Masculino , Pessoa de Meia-Idade , Córtex Sensório-Motor/diagnóstico por imagem , Adulto Jovem
RESUMO
From a baby's cry to a piece of music, we perceive emotions from our auditory environment every day. Many theories bring forward the concept of common neural substrates for the perception of vocal and musical emotions. It has been proposed that, for us to perceive emotions, music recruits emotional circuits that evolved for the processing of biologically relevant vocalizations (e.g., screams, laughs). Although some studies have found similarities between voice and instrumental music in terms of acoustic cues and neural correlates, little is known about their processing timecourse. To further understand how vocal and instrumental emotional sounds are perceived, we used EEG to compare the neural processing timecourse of both stimulus types expressed with a varying degree of complexity (vocal/musical affect bursts and emotion-embedded speech/music). Vocal stimuli in general, as well as musical/vocal bursts, were associated with a more concise sensory trace at initial stages of analysis (smaller N1), although vocal bursts had shorter latencies than musical ones. As for the P2, vocal affect bursts and emotion-embedded musical stimuli were associated with earlier P2s. These results support the idea that emotional vocal stimuli are differentiated early from other sources and provide insight into the common neurobiological underpinnings of auditory emotions.
Assuntos
Estimulação Acústica/métodos , Percepção Auditiva/fisiologia , Emoções/fisiologia , Música/psicologia , Tempo de Reação/fisiologia , Voz/fisiologia , Adulto , Eletroencefalografia/métodos , Feminino , Humanos , Masculino , Fatores de Tempo , Adulto Jovem
RESUMO
Modulation of auditory activity occurs before and during voluntary speech movement. However, it is unknown whether orofacial somatosensory input is modulated in the same manner. The current study examined whether somatosensory event-related potentials (ERPs) in response to facial skin stretch change during speech and nonspeech production tasks. Specifically, we compared ERP changes to somatosensory stimulation for different orofacial postures and speech utterances. Participants produced three different vowel sounds (voicing) or performed non-speech oral tasks in which they maintained a similar posture without voicing. ERPs were recorded from 64 scalp sites in response to the somatosensory stimulation under six task conditions (three vowels × voicing/posture) and compared to a resting baseline condition. The first negative peak for the vowel /u/ was reliably reduced from baseline in both the voicing and posturing tasks, but the other conditions did not differ. The second positive peak was reduced for all voicing tasks compared to the posturing tasks. The results suggest that the sensitivity of somatosensory ERPs to facial skin deformation is modulated by the task and that somatosensory processing during speaking may be modulated differently relative to phonetic identity.
Assuntos
Potenciais Evocados/fisiologia , Percepção da Fala/fisiologia , Fala/fisiologia , Voz/fisiologia , Estimulação Acústica/métodos , Eletroencefalografia/métodos , Humanos , Fonética , Córtex Somatossensorial/fisiologia
RESUMO
Children maintain fluent speech despite dramatic changes to their articulators during development. Auditory feedback aids in the acquisition and maintenance of the sensorimotor mechanisms that underlie vocal motor control. MacDonald, Johnson, Forsythe, Plante, and Munhall (2012) reported that toddlers' speech motor control systems may "suppress" the influence of auditory feedback, since exposure to altered auditory feedback regarding their formant frequencies did not lead to modifications of their speech. This finding is not parsimonious with most theories of motor control. Here, we exposed toddlers to perturbations to the pitch of their auditory feedback as they vocalized. Toddlers compensated for the manipulations, producing significantly different responses to upward and downward perturbations. These data represent the first empirical demonstration that toddlers use auditory feedback for vocal motor control. Furthermore, our findings suggest toddlers are more sensitive to changes to the postural properties of their auditory feedback, such as fundamental frequency, relative to the phonemic properties, such as formant frequencies.
Assuntos
Fala/fisiologia , Voz/fisiologia , Estimulação Acústica , Pré-Escolar , Retroalimentação Sensorial/fisiologia , Feminino , Humanos , Masculino , Percepção da Fala/fisiologia
RESUMO
Previous work pointed to the neural and functional significance of infraslow neural oscillations below 1 Hz that can be detected and precisely located with fast functional magnetic resonance imaging (fMRI). While previous work demonstrated this significance for brain dynamics during very low-level sensory stimulation, we here provide the first evidence for the detectability and functional significance of infraslow oscillatory blood oxygenation level-dependent (BOLD) responses to auditory stimulation by the sociobiologically relevant and more complex category of voices. Previous work pointed to a specific area of the mammalian auditory cortex (AC) that is sensitive to vocal signals as quantified by activation levels. Here we show, by using fast fMRI, that the human voice-sensitive AC prioritizes vocal signals not only in terms of activity level but also in terms of specific infraslow BOLD oscillations. We found unique sustained and transient oscillatory BOLD patterns in the AC for vocal signals. For transient oscillatory patterns, vocal signals showed faster peak oscillatory responses across all AC regions. Furthermore, we identified an exclusive sustained oscillatory component for vocal signals in the primary AC. Fast fMRI thus demonstrates the significance and richness of infraslow BOLD oscillations for neurocognitive mechanisms in social cognition, as demonstrated here for the sociobiological relevance of voice processing.
Assuntos
Córtex Auditivo/fisiologia , Percepção Auditiva/fisiologia , Encéfalo/fisiologia , Voz/fisiologia , Estimulação Acústica/métodos , Adulto , Mapeamento Encefálico/métodos , Feminino , Humanos , Imageamento por Ressonância Magnética/métodos , Masculino , Adulto Jovem
RESUMO
A fundamental question regarding music processing is its degree of independence from speech processing, in terms of their underlying neuroanatomy and the influence of cognitive traits and abilities. Although a straight answer to that question is still lacking, a large number of studies have described where in the brain and in which contexts (tasks, stimuli, populations) this independence is, or is not, observed. We examined the independence between music and speech processing using functional magnetic resonance imaging and a stimulation paradigm with different human vocal sounds produced by the same voice. The stimuli were grouped as Speech (spoken sentences), Hum (hummed melodies), and Song (sung sentences); the sentences used in the Speech and Song categories were the same, as were the melodies used in the two musical categories. Each category had a scrambled counterpart, which allowed us to render speech and melody unintelligible while preserving global amplitude and frequency characteristics. Finally, we included a group of musicians to evaluate the influence of musical expertise. Similar global patterns of cortical activity were related to all sound categories compared to baseline, but important differences were evident. Regions more sensitive to musical sounds were located bilaterally in the anterior and posterior superior temporal gyrus (planum polare and temporale), the right supplementary and premotor areas, and the inferior frontal gyrus. However, only temporal areas and the supplementary motor cortex remained music-selective after subtracting brain activity related to the scrambled stimuli. Speech-selective regions mainly affected by the intelligibility of the stimuli were observed in the left pars opercularis and the anterior portion of the middle temporal gyrus. We did not find differences between musicians and non-musicians. Our results confirmed music-selective cortical regions in associative cortices, independent of previous musical training.
Assuntos
Córtex Motor/fisiologia , Música , Percepção da Fala/fisiologia , Fala/fisiologia , Estimulação Acústica , Adulto , Percepção Auditiva/fisiologia , Encéfalo/diagnóstico por imagem , Encéfalo/fisiologia , Mapeamento Encefálico , Feminino , Humanos , Processamento de Imagem Assistida por Computador , Imageamento por Ressonância Magnética , Masculino , Córtex Motor/diagnóstico por imagem , Voz/fisiologia
RESUMO
Emotional sounds are processed within a large cortico-subcortical network, of which the auditory cortex, the voice area, and the amygdala are the core regions. Using 7T fMRI, we have compared the effect of emotional valence (positive, neutral, and negative) and the effect of the type of environmental sounds (human vocalizations and non-vocalizations) on neural activity within individual early stage auditory areas, the voice area, and the amygdala. A two-way ANOVA was applied to the BOLD time course within each ROI. In several early stage auditory areas, it yielded a significant main effect of vocalizations and of valence, but not a significant interaction. Significant interaction as well as significant main effects of vocalization and of valence were present in the voice area; the former was driven by a significant emotional modulation of vocalizations but not of other sounds. Within the amygdala, only the main effect of valence was significant. Post-hoc correlation analysis highlighted coupling between the voice area and early stage auditory areas during the presentation of any vocalizations, and between the voice area and the right amygdala during positive vocalizations. Thus, the voice area is selectively devoted to the encoding of the emotional valence of vocalizations; it shares with several early stage auditory areas encoding characteristics for vocalizations and with the amygdala for the emotional modulation of vocalizations. These results are indicative of a dual pathway, whereby the emotional modulation of vocalizations within the voice area integrates the input from the lateral early stage auditory areas and from the amygdala.
Subjects
Auditory Cortex/physiology, Auditory Perception/physiology, Emotions/physiology, Voice/physiology, Acoustic Stimulation/methods, Adult, Amygdala/physiology, Female, Humans, Magnetic Resonance Imaging/methods, Male
ABSTRACT
Recent studies have demonstrated the effectiveness of the voice for communicating sonic ideas, and the accuracy with which it can be used to imitate acoustic instruments, synthesised sounds, and environmental sounds. However, there has been little research on vocal imitation of percussion sounds, particularly concerning the perceptual similarity between imitations and the sounds being imitated. In the present study we address this by investigating how accurately musicians can vocally imitate percussion sounds, in terms of whether listeners consider the imitations more similar to the imitated sounds than to other same-category sounds. In a vocal production task, 14 musicians imitated 30 drum sounds from five categories (cymbals, hats, kicks, snares, toms). Listeners then rated the similarity between the imitations and same-category drum sounds in a web-based listening test. We found that the imitated sound received the highest similarity rating for 16 of the 30 sounds. The similarity between a given drum sound and its imitation was generally rated higher than for imitations of other same-category sounds; however, for some drum categories (snares and toms), certain sounds were consistently considered most similar to the imitations, irrespective of the sound being imitated. Finally, we applied an existing auditory-image-based measure of perceptual similarity between same-category drum sounds to model the similarity ratings using linear mixed-effects regression. The results indicate that this measure is a better predictor of perceptual similarity between imitations and imitated sounds than acoustic features containing only temporal or spectral information.
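The modelling step above can be illustrated with a simplified sketch. The study fits a linear mixed-effects regression; to keep this self-contained, ratings are pooled and fit with ordinary least squares instead, dropping the per-listener random effects. All numbers below are invented, not the study's data.

```python
# Simplified stand-in for the mixed-effects model: closed-form simple OLS of
# similarity ratings on an acoustic-distance predictor. Data are invented.

def ols_fit(x, y):
    """Simple linear regression; returns (slope, intercept, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot

# Hypothetical predictor: auditory-image-based distance between a drum sound
# and an imitation (larger = less similar); response: mean similarity rating.
distance = [0.1, 0.3, 0.5, 0.7, 0.9, 1.1]
rating = [4.8, 4.1, 3.6, 3.0, 2.4, 1.9]
slope, intercept, r2 = ols_fit(distance, rating)
```

If the acoustic measure tracks perceived similarity, the slope should be negative (greater distance, lower rating) with a high R²; a mixed-effects fit would additionally allow each listener their own intercept.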
Subjects
Acoustic Stimulation/methods, Imitative Behavior/physiology, Voice/physiology, Acoustics, Adult, Auditory Perception, Female, Humans, Male, Music, Percussion, Sound
ABSTRACT
Human listeners can accurately recognize an impressive range of complex sounds, such as musical instruments or voices. The underlying mechanisms are still poorly understood. Here, we aimed to characterize the processing time needed to recognize a natural sound. To do so, by analogy with the rapid serial visual presentation paradigm, we embedded short target sounds within rapid sequences of distractor sounds. The core hypothesis is that any correct report of the target implies that sufficient processing for recognition had been completed before the onset of the subsequent distractor sound. We conducted four behavioral experiments using short natural sounds (voices and instruments) as targets or distractors. We report the effects on performance, as measured by the fastest presentation rate allowing recognition, of sound duration, the number of sounds in a sequence, the relative pitch between targets and distractors, and the target's position in the sequence. The results showed very rapid auditory recognition of natural sounds in all cases: targets could be recognized at rates of up to 30 sounds per second. In addition, the best performance was observed for voice targets in sequences of instrument distractors. These results offer new insights into the remarkable efficiency of timbre processing in humans, using an original behavioral paradigm to provide strong constraints on future neural models of sound recognition.
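The performance measure above can be sketched as a simple criterion rule: the fastest presentation rate at which target report stays above a threshold. The accuracy values per rate below are invented for illustration; the abstract reports only the resulting ceiling of about 30 sounds per second.

```python
# Sketch of the "fastest presentation rate for recognition" measure:
# pick the highest rate whose hit rate still meets a criterion.

def fastest_rate(accuracy_by_rate, criterion=0.75):
    """Return the highest rate (sounds/s) meeting the accuracy criterion,
    or None if no rate does."""
    above = [rate for rate, acc in accuracy_by_rate.items() if acc >= criterion]
    return max(above) if above else None

# rate (sounds/s) -> proportion of correctly reported targets (hypothetical):
accuracy = {4: 0.98, 8: 0.95, 16: 0.88, 30: 0.79, 60: 0.52}
best = fastest_rate(accuracy)   # -> 30, i.e. ~33 ms per sound
```

The 0.75 criterion is an assumption for the sketch; the study's actual threshold and staircase details are not given in the abstract.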
Subjects
Auditory Perception/physiology, Pitch Perception/physiology, Psychoacoustics, Voice/physiology, Acoustic Stimulation, Cerebellar Cortex/physiology, Female, Humans, Male, Music, Pitch Discrimination/physiology, Recognition, Psychology, Sound, Young Adult
ABSTRACT
PURPOSE: To determine the incidence and spontaneous recovery rate of idiopathic vocal fold paralysis (IVFP) and paresis (IVFp), and the impact of steroid treatment on rates of recovery. METHODS: This retrospective cohort study included all patients with IVFP or IVFp within a large integrated health-care system between January 1, 2008 and December 31, 2014. Patient demographics and clinical characteristics, including time to diagnosis, spontaneous recovery status, time to recovery, and treatment, were examined. RESULTS: A total of 264 patients were identified, 183 (69.3%) with IVFP and 81 (30.7%) with IVFp. Nearly all cases (96.6%) were unilateral, and 89.8% of patients were over the age of 45. The combined (IVFP and IVFp) 7-year mean annual incidence was 1.04 cases per 100,000 persons, with the highest incidence in white patients (1.60 per 100,000). The overall rate of spontaneous recovery was 29.5%: 21.2% of patients had endoscopic evidence of resolution, and 8.3% had clinical improvement in their voice without endoscopic confirmation. The median time to symptom resolution was 4.0 months. Steroid use was not associated with spontaneous recovery in multivariable analyses. CONCLUSION: The annual incidence of idiopathic vocal fold paralysis and paresis combined was 1.04 cases per 100,000 persons, with spontaneous recovery occurring in nearly a third of patients regardless of steroid use.
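The incidence figure above is cases over person-years, scaled to 100,000. A quick sketch of the arithmetic; the catchment population used here is a hypothetical round number chosen only so the result lands near the reported rate, since the abstract does not state the population size.

```python
# Annual incidence as cases per 100,000 person-years. The population below
# is hypothetical; the abstract reports only the resulting rate (~1.04).

def annual_incidence_per_100k(cases, population, years):
    return cases / (population * years) * 100_000

# 264 combined IVFP/IVFp cases over the 7-year window (2008-2014):
rate = annual_incidence_per_100k(cases=264, population=3_600_000, years=7)

# The 29.5% spontaneous-recovery figure decomposes into endoscopically
# confirmed resolution plus voice improvement without confirmation:
recovery = 0.212 + 0.083   # ~0.295, matching the abstract
```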