Results 1 - 20 of 1,937
1.
Proc Natl Acad Sci U S A ; 121(26): e2318361121, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38889147

ABSTRACT

When listeners hear a voice, they rapidly form a complex first impression of who the person behind that voice might be. We characterize how these multivariate first impressions from voices emerge over time across different levels of abstraction using electroencephalography and representational similarity analysis. We find that for eight perceived physical (gender, age, and health), trait (attractiveness, dominance, and trustworthiness), and social characteristics (educatedness and professionalism), representations emerge early (~80 ms after stimulus onset), with voice acoustics contributing to those representations between ~100 ms and 400 ms. While impressions of person characteristics are highly correlated, we can find evidence for highly abstracted, independent representations of individual person characteristics. These abstracted representations emerge gradually over time. That is, representations of physical characteristics (age, gender) arise early (from ~120 ms), while representations of some trait and social characteristics emerge later (~360 ms onward). The findings align with recent theoretical models and shed light on the computations underpinning person perception from voices.
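As a rough illustration of the representational similarity analysis (RSA) logic described in this abstract, the sketch below correlates a time-resolved neural dissimilarity structure from EEG with a model dissimilarity matrix for one perceived characteristic. Array names, shapes, and metric choices are assumptions for illustration, not the authors' pipeline.

```python
# Minimal RSA sketch (illustrative only): correlate EEG pattern dissimilarity
# across voice stimuli with a model RDM built from ratings of one perceived
# characteristic (e.g., trustworthiness), at each EEG time point.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def neural_rdm(patterns):
    """patterns: (n_voices, n_channels) EEG topographies at one latency."""
    return pdist(patterns, metric="correlation")  # condensed dissimilarity vector

def rsa_timecourse(eeg, model_rdm):
    """eeg: (n_voices, n_channels, n_times); model_rdm: condensed model vector."""
    rho = np.empty(eeg.shape[2])
    for t in range(eeg.shape[2]):
        rho[t], _ = spearmanr(neural_rdm(eeg[:, :, t]), model_rdm)
    return rho  # neural-model correspondence as a function of latency
```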


Subjects
Auditory Perception, Brain, Electroencephalography, Voice, Humans, Male, Female, Voice/physiology, Adult, Brain/physiology, Auditory Perception/physiology, Young Adult, Social Perception
2.
Proc Natl Acad Sci U S A ; 121(25): e2405588121, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38861607

ABSTRACT

Many animals can extract useful information from the vocalizations of other species. Neuroimaging studies have evidenced areas sensitive to conspecific vocalizations in the cerebral cortex of primates, but how these areas process heterospecific vocalizations remains unclear. Using fMRI-guided electrophysiology, we recorded the spiking activity of individual neurons in the anterior temporal voice patches of two macaques while they listened to complex sounds including vocalizations from several species. In addition to cells selective for conspecific macaque vocalizations, we identified an unsuspected subpopulation of neurons with strong selectivity for human voice, not merely explained by spectral or temporal structure of the sounds. The auditory representational geometry implemented by these neurons was strongly related to that measured in the human voice areas with neuroimaging and only weakly to low-level acoustical structure. These findings provide new insights into the neural mechanisms involved in auditory expertise and the evolution of communication systems in primates.


Subjects
Auditory Perception, Magnetic Resonance Imaging, Neurons, Animal Vocalization, Voice, Animals, Humans, Neurons/physiology, Voice/physiology, Magnetic Resonance Imaging/methods, Animal Vocalization/physiology, Auditory Perception/physiology, Male, Macaca mulatta, Brain/physiology, Acoustic Stimulation, Brain Mapping/methods
3.
Proc Natl Acad Sci U S A ; 120(9): e2219394120, 2023 02 28.
Article in English | MEDLINE | ID: mdl-36802437

ABSTRACT

Vocal fatigue is a measurable form of performance fatigue resulting from overuse of the voice and is characterized by negative vocal adaptation. Vocal dose refers to cumulative exposure of the vocal fold tissue to vibration. Professionals with high vocal demands, such as singers and teachers, are especially prone to vocal fatigue. Failure to adjust habits can lead to compensatory lapses in vocal technique and an increased risk of vocal fold injury. Quantifying and recording vocal dose to inform individuals about potential overuse is an important step toward mitigating vocal fatigue. Previous work establishes vocal dosimetry methods, that is, processes to quantify vocal fold vibration dose but with bulky, wired devices that are not amenable to continuous use during natural daily activities; these previously reported systems also provide limited mechanisms for real-time user feedback. This study introduces a soft, wireless, skin-conformal technology that gently mounts on the upper chest to capture vibratory responses associated with vocalization in a manner that is immune to ambient noises. Pairing with a separate, wirelessly linked device supports haptic feedback to the user based on quantitative thresholds in vocal usage. A machine learning-based approach enables precise vocal dosimetry from the recorded data, to support personalized, real-time quantitation and feedback. These systems have strong potential to guide healthy behaviors in vocal use.
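For readers unfamiliar with the dose concept, the hedged sketch below shows how two standard vocal dose measures could be accumulated from frame-wise voicing and fundamental-frequency estimates. The inputs and frame length are illustrative assumptions, not the wireless dosimeter's actual processing chain.

```python
# Illustrative vocal dose accumulation from frame-wise estimates (assumed inputs).
import numpy as np

def vocal_doses(f0_hz, voiced, frame_dur_s=0.01):
    """f0_hz: per-frame fundamental frequency (Hz); voiced: boolean mask per frame."""
    time_dose_s = voiced.sum() * frame_dur_s           # total phonation time
    cycle_dose = f0_hz[voiced].sum() * frame_dur_s     # accumulated vibration cycles
    return time_dose_s, cycle_dose

# e.g., 30 min of voicing at a mean F0 of 200 Hz accumulates ~360,000 cycles.
```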


Subjects
Singing, Voice Disorders, Voice, Humans, Feedback, Voice Disorders/etiology, Voice/physiology, Vocal Folds/physiology
4.
PLoS Biol ; 20(7): e3001742, 2022 07.
Article in English | MEDLINE | ID: mdl-35905075

ABSTRACT

Categorising voices is crucial for auditory-based social interactions. A recent study by Rupp and colleagues in PLOS Biology capitalises on human intracranial recordings to describe the spatiotemporal pattern of neural activity leading to voice-selective responses in associative auditory cortex.


Subjects
Auditory Perception, Voice, Auditory Perception/physiology, Brain/physiology, Brain Mapping, Humans, Temporal Lobe, Voice/physiology
5.
Eur J Neurosci ; 60(2): 4078-4094, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38777332

ABSTRACT

Although the attractiveness of voices plays an important role in social interactions, it is unclear how voice attractiveness and social interest influence social decision-making. Here, we combined the ultimatum game with the recording of event-related brain potentials (ERPs) and examined the effect of attractive versus unattractive voices of the proposers, expressing positive versus negative social interest ("I like you" vs. "I don't like you"), on the acceptance of the proposal. Overall, fair offers were accepted at significantly higher rates than unfair offers, and high voice attractiveness increased acceptance rates for all proposals. In the ERPs elicited by the voices, attractiveness and expressed social interest yielded early additive effects in the N1 component, followed by interactions in the subsequent P2, P3, and N400 components. More importantly, unfair offers elicited a larger Medial Frontal Negativity (MFN) than fair offers, but only when the proposer's voice was unattractive or when the voice carried positive social interest. These results suggest that both voice attractiveness and social interest moderate social decision-making, and that voices carry a "beauty premium" similar to that of faces.


Subjects
Decision Making, Evoked Potentials, Voice, Humans, Male, Female, Evoked Potentials/physiology, Voice/physiology, Decision Making/physiology, Young Adult, Adult, Electroencephalography/methods, Brain/physiology, Adolescent
6.
Hum Brain Mapp ; 45(10): e26724, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39001584

ABSTRACT

Music is ubiquitous, both in its instrumental and vocal forms. While speech perception at birth has been at the core of an extensive corpus of research, the origins of the ability to discriminate instrumental or vocal melodies are still not well investigated. In previous studies comparing vocal and musical perception, the vocal stimuli were mainly related to speaking, including language, and not to the non-language singing voice. In the present study, to better compare a melodic instrumental line with the voice, we used singing as the comparison stimulus, reducing the dissimilarities between the two stimuli as much as possible and separating language perception from vocal musical perception. We scanned 45 newborns, 10 full-term infants and 35 preterm infants at term-equivalent age (mean gestational age at test = 40.17 weeks, SD = 0.44), using functional magnetic resonance imaging while they listened to five melodies played by a musical instrument (flute) or sung by a female voice. To examine dynamic task-based effective connectivity, we employed a psychophysiological interaction of co-activation patterns (PPI-CAPs) analysis, using the auditory cortices as seed region, to investigate moment-to-moment changes in task-driven modulation of cortical activity during the fMRI task. Our findings reveal condition-specific, dynamically occurring patterns of co-activation (PPI-CAPs). During the vocal condition, the auditory cortex co-activates with the sensorimotor and salience networks, while during the instrumental condition, it co-activates with the visual cortex and the superior frontal cortex. Our results show that the vocal stimulus elicits sensorimotor aspects of auditory perception and is processed as a more salient stimulus, while the instrumental condition activates higher-order cognitive and visuo-spatial networks. Common neural signatures for both auditory stimuli were found in the precuneus and posterior cingulate gyrus. Finally, this study adds knowledge about the dynamic brain connectivity underlying newborns' capability for early and specialized auditory processing, highlighting the relevance of dynamic approaches to studying brain function in newborn populations.


Subjects
Auditory Perception, Magnetic Resonance Imaging, Music, Humans, Female, Male, Auditory Perception/physiology, Newborn, Singing/physiology, Premature Infant/physiology, Brain Mapping, Acoustic Stimulation, Brain/physiology, Brain/diagnostic imaging, Voice/physiology
7.
Exp Brain Res ; 242(1): 225-239, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37999725

ABSTRACT

The present study examined opposing and following vocal responses to altered auditory feedback (AAF) to determine how damage to left-hemisphere brain networks impairs the internal forward model and feedback mechanisms in post-stroke aphasia. Forty-nine subjects with aphasia and sixty age-matched controls performed speech vowel production tasks while their auditory feedback was altered using randomized ± 100 cents upward and downward pitch-shift stimuli. Data analysis revealed that when vocal responses were averaged across all trials (i.e., opposing and following), the overall magnitude of vocal compensation was significantly reduced in the aphasia group compared with controls. In addition, when vocal responses were analyzed separately for opposing and following trials, subjects in the aphasia group showed a significantly lower percentage of opposing and higher percentage of following vocal response trials compared with controls, particularly for the upward pitch-shift stimuli. However, there was no significant difference in the magnitude of opposing and following vocal responses between the two groups. These findings further support previous evidence on the impairment of vocal sensorimotor control in aphasia and provide new insights into the distinctive impact of left-hemisphere stroke on the internal forward model and feedback mechanisms. In this context, we propose that the lower percentage of opposing responses in aphasia may be accounted for by deficits in feedback-dependent mechanisms of audio-vocal integration and motor control. In addition, the higher percentage of following responses may reflect aberrantly increased reliance of the speech system on the internal forward model for generating sensory predictions during vocal error detection and motor control.
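To make the opposing/following distinction concrete, here is a hedged sketch of how a single trial could be classified from its F0 contour relative to the pitch-shift stimulus. The window indices, names, and sign convention are hypothetical, not the study's actual analysis code.

```python
# Classify one trial as "opposing" or "following" a pitch-shift perturbation
# (illustrative; the study's actual trial windows and averaging are not shown here).
import numpy as np

def classify_trial(f0_hz, shift_cents, baseline_idx, response_idx):
    """f0_hz: vocal F0 contour; shift_cents: +100 or -100; *_idx: sample indices."""
    baseline = np.mean(f0_hz[baseline_idx])
    cents = 1200.0 * np.log2(f0_hz / baseline)      # contour re: pre-shift baseline
    response = np.mean(cents[response_idx])         # mean vocal change after onset
    return "opposing" if np.sign(response) != np.sign(shift_cents) else "following"
```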


Subjects
Aphasia, Voice, Humans, Feedback, Pitch Perception/physiology, Voice/physiology, Speech/physiology, Sensory Feedback/physiology, Aphasia/etiology
8.
Ear Hear ; 45(4): 952-968, 2024.
Article in English | MEDLINE | ID: mdl-38616318

ABSTRACT

OBJECTIVES: Postlingually deaf adults with cochlear implants (CIs) have difficulties with perceiving differences in speakers' voice characteristics and benefit little from voice differences for the perception of speech in competing speech. However, not much is known yet about the perception and use of voice characteristics in prelingually deaf implanted children with CIs. Unlike CI adults, most CI children became deaf during the acquisition of language. Extensive neuroplastic changes during childhood could make CI children better at using the available acoustic cues than CI adults, or the lack of exposure to a normal acoustic speech signal could make it more difficult for them to learn which acoustic cues they should attend to. This study aimed to examine to what degree CI children can perceive voice cues and benefit from voice differences for perceiving speech in competing speech, comparing their abilities to those of normal-hearing (NH) children and CI adults. DESIGN: CI children's voice cue discrimination (experiment 1), voice gender categorization (experiment 2), and benefit from target-masker voice differences for perceiving speech in competing speech (experiment 3) were examined in three experiments. The main focus was on the perception of mean fundamental frequency (F0) and vocal-tract length (VTL), the primary acoustic cues related to speakers' anatomy and perceived voice characteristics, such as voice gender. RESULTS: CI children's F0 and VTL discrimination thresholds indicated lower sensitivity to differences compared with their NH-age-equivalent peers, but their mean discrimination thresholds of 5.92 semitones (st) for F0 and 4.10 st for VTL indicated higher sensitivity than postlingually deaf CI adults with mean thresholds of 9.19 st for F0 and 7.19 st for VTL. Furthermore, CI children's perceptual weighting of F0 and VTL cues for voice gender categorization closely resembled that of their NH-age-equivalent peers, in contrast with CI adults. Finally, CI children had more difficulties in perceiving speech in competing speech than their NH-age-equivalent peers, but they performed better than CI adults. Unlike CI adults, CI children showed a benefit from target-masker voice differences in F0 and VTL, similar to NH children. CONCLUSION: Although CI children's F0 and VTL voice discrimination scores were overall lower than those of NH children, their weighting of F0 and VTL cues for voice gender categorization and their benefit from target-masker differences in F0 and VTL resembled that of NH children. Together, these results suggest that prelingually deaf implanted CI children can effectively utilize spectrotemporally degraded F0 and VTL cues for voice and speech perception, generally outperforming postlingually deaf CI adults in comparable tasks. These findings underscore the presence of F0 and VTL cues in the CI signal to a certain degree and suggest other factors contributing to the perception challenges faced by CI adults.
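The discrimination thresholds above are expressed in semitones. As a reminder of the conversion (a hypothetical helper, not code from the study), one semitone corresponds to a frequency ratio of 2^(1/12):

```python
import math

def semitones(f_ref_hz, f_test_hz):
    """Signed frequency difference in semitones (12 st = one octave)."""
    return 12.0 * math.log2(f_test_hz / f_ref_hz)

# e.g., semitones(120, 170) is about 6.0 st, close to the CI children's mean F0 threshold
```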


Subjects
Cochlear Implantation, Cochlear Implants, Cues (Psychology), Deafness, Speech Perception, Humans, Deafness/rehabilitation, Male, Female, Child, Adult, Young Adult, Adolescent, Voice/physiology, Case-Control Studies, Preschool Child, Middle Aged
9.
Conscious Cogn ; 123: 103718, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38880020

ABSTRACT

The phenomenon of "hearing voices" can be found not only in psychotic disorders, but also in the general population, with individuals across cultures reporting auditory perceptions of supernatural beings. In our preregistered study, we investigated a possible mechanism of such experiences, grounded in the predictive processing model of agency detection. We predicted that in a signal detection task, expecting fewer or more voices than were actually present would shift the response bias toward a more conservative or more liberal response strategy, respectively. Moreover, we hypothesized that including sensory noise would enhance these expectancy effects. In line with our predictions, the findings show that detection of voices relies on expectations and that this effect is especially pronounced in the case of unreliable sensory data. As such, the study contributes to our understanding of the predictive processes in hearing and the building blocks of voice-hearing experiences.
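A hedged sketch of the signal-detection quantities at stake: sensitivity (d') and response bias (criterion c) computed from hit and false-alarm rates, where a more positive c corresponds to the more conservative strategy predicted when fewer voices are expected. The example rates are invented for illustration.

```python
# Standard equal-variance SDT measures from hit and false-alarm rates (illustrative).
from scipy.stats import norm

def sdt_measures(hit_rate, fa_rate):
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)             # sensitivity
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))  # bias: > 0 is conservative
    return d_prime, criterion

# sdt_measures(0.60, 0.20) -> d' ~ 1.09, c ~ 0.29 (conservative responding)
```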


Subjects
Auditory Perception, Psychological Signal Detection, Humans, Female, Adult, Male, Young Adult, Auditory Perception/physiology, Psychological Signal Detection/physiology, Voice/physiology, Psychological Anticipation/physiology, Hallucinations/physiopathology
10.
Proc Natl Acad Sci U S A ; 118(52)2021 12 28.
Article in English | MEDLINE | ID: mdl-34930846

ABSTRACT

Humans have an extraordinary ability to recognize and differentiate voices. It is yet unclear whether voices are uniquely processed in the human brain. To explore the underlying neural mechanisms of voice processing, we recorded electrocorticographic signals from intracranial electrodes in epilepsy patients while they listened to six different categories of voice and nonvoice sounds. Subregions in the temporal lobe exhibited preferences for distinct voice stimuli, which were defined as "voice patches." Latency analyses suggested a dual hierarchical organization of the voice patches. We also found that voice patches were functionally connected under both task-engaged and resting states. Furthermore, the left motor areas were coactivated and correlated with the temporal voice patches during the sound-listening task. Taken together, this work reveals hierarchical cortical networks in the human brain for processing human voices.


Subjects
Auditory Perception/physiology, Brain/physiology, Neural Pathways/physiology, Voice/physiology, Adult, Electrocorticography, Female, Humans, Male, Middle Aged, Young Adult
11.
Altern Ther Health Med ; 30(9): 85-89, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38518141

ABSTRACT

Objective: To investigate an alternative approach to family participatory nursing in neonatal intensive care units (NICUs) during the COVID-19 pandemic, focusing on auditory interventions to mitigate the effects of maternal separation (MS) on neonatal neurological development. Methods: This study was a randomized, double-blind, prospective trial involving 100 newborns younger than 6 months old, born between January 2022 and October 2022, who experienced MS for more than 2 weeks. Newborns were randomly allocated into control and study groups using a computer-generated list to ensure unbiased selection. Inclusion criteria were gestational age ≥37 weeks and admission to NICU due to various medical conditions; exclusion criteria included severe hearing impairment and congenital neurological disorders. The intervention group received maternal voice exposure at 40-50 dB for eight 30-minute sessions daily, while the control group was exposed to children's songs at the same volume and duration. Key metrics such as oxygen saturation, heart rate, Neonatal Infant Pain Scale (NIPS) scores, and Neonatal Behavioral Neurological Assessment (NBNA) scores were measured before and after the intervention period, which lasted one week. Results: Post-intervention, the NIPS scores in the intervention group were significantly lower (3.45±0.99) compared to the control group (5.36±0.49, P < .01), indicating reduced pain sensitivity. Additionally, NBNA scores were higher in the intervention group (39.90±1.56) than in the control group (35.86±1.05, P < .01), suggesting enhanced neurological development. No significant difference in pre-intervention blood oxygen saturation levels was observed between the groups. However, the intervention group showed less reduction in oxygen saturation during and post-blood collection, with significantly higher levels at 2, 4, and 6 minutes post-procedure (P < .01). The findings underscore the significance of maternal voice as a non-pharmacological intervention to alleviate pain and foster neurological development in neonates facing MS, especially in situations where traditional family participatory nursing is hindered by the COVID-19 pandemic. Integrating maternal voice stimulation into neonatal care strategies offers a viable method to improve outcomes for newborns undergoing MS. Conclusion: Maternal voice intervention presents a promising strategy to diminish pain sensitivity and bolster neurological development in neonates separated from their mothers, particularly when family participatory nursing practices are constrained by pandemic-related restrictions. These findings advocate for the broader implementation of maternal voice stimulation in NICU settings.


Subjects
COVID-19, Humans, Newborn, COVID-19/prevention & control, COVID-19/epidemiology, Female, Male, Double-Blind Method, Voice/physiology, Prospective Studies, Mothers/psychology, SARS-CoV-2, Motor Neurons/physiology, Neonatal Intensive Care Units, Adult, Infant
12.
J Acoust Soc Am ; 155(1): 18-28, 2024 01 01.
Article in English | MEDLINE | ID: mdl-38169520

ABSTRACT

In an earlier study, we analyzed how audio signals obtained from three professional opera singers varied when they sang one-octave-wide eight-tone scales in ten different emotional colors. The results showed systematic variations in voice source and long-term-average spectrum (LTAS) parameters associated with major emotion "families". For two of the singers, subglottal pressure (PSub) was also recorded, allowing analysis of an additional main physiological voice control parameter, glottal resistance (defined as the ratio between PSub and glottal flow), which is related to glottal adduction. In the present study, we analyze voice source and LTAS parameters derived from the audio signal and their correlation with PSub and glottal resistance. The measured parameters showed a systematic relationship with the four emotion families observed in our previous study. They also varied systematically with the values of the ten emotions along the valence, power, and arousal dimensions: valence showed a significant correlation with the ratio between acoustic voice source energy and subglottal pressure, while power varied significantly with sound level and with two measures related to the spectral dominance of the lowest spectrum partial, the fundamental.
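The ratio defining glottal resistance is simple enough to state as a one-line helper (a hypothetical illustration; the units follow common voice-research practice rather than the paper's exact conventions):

```python
def glottal_resistance(psub, mean_glottal_flow):
    """Glottal resistance = subglottal pressure / mean glottal airflow
    (e.g., cm H2O divided by L/s); higher values suggest firmer glottal adduction."""
    return psub / mean_glottal_flow
```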


Subjects
Singing, Voice, Humans, Voice Quality, Voice/physiology, Acoustics, Glottis/physiology
13.
J Acoust Soc Am ; 156(2): 1283-1308, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39172710

ABSTRACT

Sound for the human voice is produced by vocal fold flow-induced vibration and involves a complex coupling between flow dynamics, tissue motion, and acoustics. Over the past three decades, synthetic, self-oscillating vocal fold models have played an increasingly important role in the study of these complex physical interactions. In particular, two types of models have been established: "membranous" vocal fold models, such as a water-filled latex tube, and "elastic solid" models, such as ultrasoft silicone formed into a vocal fold-like shape and in some cases with multiple layers of differing stiffness to mimic the human vocal fold tissue structure. In this review, the designs, capabilities, and limitations of these two types of models are presented. Considerations unique to the implementation of elastic solid models, including fabrication processes and materials, are discussed. Applications in which these models have been used to study the underlying mechanical principles that govern phonation are surveyed, and experimental techniques and configurations are reviewed. Finally, recommendations for continued development of these models for even more lifelike response and clinical relevance are summarized.


Subjects
Phonation, Vibration, Vocal Folds, Vocal Folds/physiology, Vocal Folds/anatomy & histology, Humans, Anatomic Models, Biomechanical Phenomena, Voice/physiology, Elasticity, Biological Models
14.
J Acoust Soc Am ; 155(6): 3822-3832, 2024 06 01.
Article in English | MEDLINE | ID: mdl-38874464

ABSTRACT

This study proposes the use of vocal resonators to enhance cardiac auscultation signals and evaluates their performance for voice-noise suppression. Data were collected using two electronic stethoscopes while each study subject was talking: one collected the auscultation signal from the chest, while the other collected voice signals from one of three vocal resonators (cheek, back of the neck, and shoulder). The spectral subtraction method was applied to the signals. Both objective and subjective metrics were used to evaluate the quality of the enhanced signals and to investigate the most effective vocal resonator for noise suppression. Our preliminary findings showed a significant improvement after enhancement and demonstrated the efficacy of vocal resonators. A listening survey was conducted with thirteen physicians to evaluate the quality of the enhanced signals, which received significantly better sound-quality scores than the original signals. The shoulder resonator group demonstrated significantly better sound quality than the cheek group when reducing voice sound in cardiac auscultation signals. The suggested method has the potential to be used for the development of an electronic stethoscope with a robust noise removal function. Significant clinical benefits are expected from the expedited preliminary diagnostic procedure.
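A hedged sketch of the magnitude spectral-subtraction step named above, in which a voice-noise spectrum estimated from the resonator channel is subtracted from the chest-auscultation spectrogram. Frame size, the over-subtraction factor, and the spectral floor are illustrative assumptions, not the study's settings.

```python
# Illustrative magnitude spectral subtraction using a separate noise-reference channel.
import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(auscultation, voice_ref, fs, alpha=1.0, floor=0.05, nperseg=1024):
    _, _, S = stft(auscultation, fs=fs, nperseg=nperseg)   # chest-signal spectrogram
    _, _, N = stft(voice_ref, fs=fs, nperseg=nperseg)      # voice (noise) reference
    noise_mag = np.abs(N).mean(axis=1, keepdims=True)      # average noise spectrum
    clean_mag = np.maximum(np.abs(S) - alpha * noise_mag, floor * np.abs(S))
    _, enhanced = istft(clean_mag * np.exp(1j * np.angle(S)), fs=fs, nperseg=nperseg)
    return enhanced
```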


Subjects
Cardiac Auscultation, Computer-Assisted Signal Processing, Stethoscopes, Humans, Cardiac Auscultation/instrumentation, Cardiac Auscultation/methods, Cardiac Auscultation/standards, Male, Female, Adult, Heart Sounds/physiology, Sound Spectrography, Equipment Design, Voice/physiology, Middle Aged, Voice Quality, Vibration, Noise
15.
J Acoust Soc Am ; 156(1): 278-283, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38980102

ABSTRACT

How we produce and perceive voice is constrained by laryngeal physiology and biomechanics. Such constraints may present themselves as principal dimensions in the voice outcome space that are shared among speakers. This study attempts to identify such principal dimensions in the voice outcome space and the underlying laryngeal control mechanisms in a three-dimensional computational model of voice production. A large-scale voice simulation was performed with parametric variations in vocal fold geometry and stiffness, glottal gap, vocal tract shape, and subglottal pressure. Principal component analysis was applied to data combining both the physiological control parameters and voice outcome measures. The results showed three dominant dimensions accounting for at least 50% of the total variance. The first two dimensions describe respiratory-laryngeal coordination in controlling the energy balance between low- and high-frequency harmonics in the produced voice, and the third dimension describes control of the fundamental frequency. The dominance of these three dimensions suggests that voice changes along these principal dimensions are likely to be more consistently produced and perceived by most speakers than other voice changes, and thus are more likely to have emerged during evolution and be used to convey important personal information, such as emotion and larynx size.
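A minimal sketch of the dimensionality analysis described here: PCA over a standardized matrix that concatenates physiological control parameters and voice outcome measures, reporting component loadings and explained variance. The data matrix and component count are assumptions for illustration.

```python
# PCA over combined control parameters and voice outcome measures (illustrative).
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def principal_dimensions(X, n_components=3):
    """X: (n_simulations, n_control_params + n_outcome_measures)."""
    Z = StandardScaler().fit_transform(X)        # z-score each column
    pca = PCA(n_components=n_components).fit(Z)
    return pca.components_, pca.explained_variance_ratio_   # loadings, variance shares
```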


Subjects
Larynx, Phonation, Principal Component Analysis, Humans, Biomechanical Phenomena, Larynx/physiology, Larynx/anatomy & histology, Voice/physiology, Vocal Folds/physiology, Vocal Folds/anatomy & histology, Computer Simulation, Voice Quality, Speech Acoustics, Pressure, Biological Models, Anatomic Models
16.
J Acoust Soc Am ; 156(2): 922-938, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39133041

ABSTRACT

Voices arguably occupy a superior role in auditory processing. Specifically, studies have reported that singing voices are processed faster and more accurately and possess greater salience in musical scenes compared to instrumental sounds. However, the underlying acoustic features of this superiority and the generality of these effects remain unclear. This study investigates the impact of frequency micro-modulations (FMM) and the influence of interfering sounds on sound recognition. Thirty young participants, half with musical training, engage in three sound recognition experiments featuring short vocal and instrumental sounds in a go/no-go task. Accuracy and reaction times are measured for sounds from recorded samples and excerpts of popular music. Each sound is presented in separate versions with and without FMM, in isolation or accompanied by a piano. Recognition varies across sound categories, but no general vocal superiority emerges, nor any effect of FMM. When presented together with interfering sounds, all sounds exhibit degradation in recognition. However, whereas /a/ sounds stand out by showing a distinct robustness to interference (i.e., less degradation of recognition), /u/ sounds lack this robustness. Acoustical analysis implies that recognition differences can be explained by spectral similarities. Together, these results challenge the notion of general vocal superiority in auditory perception.


Subjects
Acoustic Stimulation, Auditory Perception, Music, Recognition (Psychology), Humans, Male, Female, Young Adult, Adult, Acoustic Stimulation/methods, Auditory Perception/physiology, Reaction Time, Singing, Voice/physiology, Adolescent, Sound Spectrography, Voice Quality
17.
Sensors (Basel) ; 24(4)2024 Feb 10.
Article in English | MEDLINE | ID: mdl-38400330

ABSTRACT

Respiratory diseases represent a significant global burden, necessitating efficient diagnostic methods for timely intervention. Digital biomarkers based on audio, acoustics, and sound from the upper and lower respiratory system, as well as the voice, have emerged as valuable indicators of respiratory functionality. Recent advancements in machine learning (ML) algorithms offer promising avenues for the identification and diagnosis of respiratory diseases through the analysis and processing of such audio-based biomarkers. An ever-increasing number of studies employ ML techniques to extract meaningful information from audio biomarkers. Beyond disease identification, these studies explore diverse aspects such as the recognition of cough sounds amidst environmental noise, the analysis of respiratory sounds to detect respiratory symptoms like wheezes and crackles, as well as the analysis of the voice/speech for the evaluation of human voice abnormalities. To provide a more in-depth analysis, this review examines 75 relevant audio analysis studies across three distinct areas of concern based on respiratory diseases' symptoms: (a) cough detection, (b) lower respiratory symptoms identification, and (c) diagnostics from the voice and speech. Furthermore, publicly available datasets commonly utilized in this domain are presented. It is observed that research trends are influenced by the pandemic, with a surge in studies on COVID-19 diagnosis, mobile data acquisition, and remote diagnosis systems.
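As a hedged illustration of the kind of pipeline the reviewed studies use, the sketch below extracts MFCC summary features from audio clips and fits a simple classifier (e.g., cough vs. non-cough). The paths, labels, and model choice are assumptions, not drawn from any particular study in the review.

```python
# Illustrative audio-biomarker pipeline: MFCC summary features + simple classifier.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def clip_features(path, sr=16000, n_mfcc=13):
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])  # per-clip summary

# X = np.stack([clip_features(p) for p in clip_paths])   # clip_paths, labels: assumed
# clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
```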


Subjects
Artificial Intelligence, COVID-19, Humans, COVID-19/diagnosis, Cough/diagnosis, Cough/physiopathology, Respiratory Sounds/diagnosis, Respiratory Sounds/physiopathology, Machine Learning, Respiratory Diseases/diagnosis, SARS-CoV-2/isolation & purification, Algorithms, Voice/physiology
18.
Folia Phoniatr Logop ; 76(1): 1-21, 2024.
Article in English | MEDLINE | ID: mdl-37393892

ABSTRACT

PURPOSE: Considering the conceptual migration from vocal load and vocal loading to vocal demand and vocal demand response, this review of the literature aimed to identify physiological explanations, reported measurements, and associated factors (vocal demands) reported in the literature when considering the phonatory response to a vocal demand. METHODS: A systematic review of the literature, following the PRISMA Statement, was conducted using Web of Science, PubMed, Scopus, and ScienceDirect. Data were analyzed and presented in two parts. First, a bibliometric analysis, co-occurrence analysis, and content analysis were performed. Three criteria for article inclusion were defined: (1) written in English, Spanish, or Portuguese; (2) published between 2009 and 2021; and (3) focused on vocal load and loading, vocal demand response, and voice assessment parameters. A total of 54 publications met the criteria and were included in this review. The second part included a conceptual framework based on the content analysis of three aspects of vocal demand response: (1) physiological explanations, (2) reported measurements, and (3) vocal demands. RESULTS AND CONCLUSION: As would be expected, since vocal demand response is a relatively new term not yet commonly used in the literature when discussing the way speakers respond to communicative scenarios, most of the studies reviewed (both historical and recent) still use the terms vocal load and vocal loading. Although there is a broad variety of literature discussing a wide range of vocal demands and voice parameters used to characterize the vocal demand response, the results show consistency across studies. While vocal demand response is unique and intrinsic to the talker, associated factors that contribute to this response include both internal and external talker factors. Internal factors include muscle stiffness, viscosity in the phonatory system, vocal fold tissue damage, elevated sound pressure levels during occupational voice demands, extended periods of voice use, suboptimal body posture, difficulties in breathing technique, and sleep disturbances. Associated external factors include the working environment (noise, acoustics, temperature, humidity). In conclusion, although vocal demand response is intrinsic to the speaker, the speaker's response is affected by external vocal demands. However, due to the wide range of methods used to evaluate vocal demand response, it has been difficult to establish its contribution to voice disorders in the general population and, specifically, among occupational voice users. This literature review identified commonly reported parameters and factors that may help clinicians and researchers define vocal demand response.


Subjects
Voice Disorders, Voice, Humans, Voice Quality, Voice/physiology, Phonation/physiology, Vocal Folds
19.
J Neurosci ; 42(20): 4164-4173, 2022 05 18.
Article in English | MEDLINE | ID: mdl-35483917

ABSTRACT

The social worlds of young children primarily revolve around parents and caregivers, who play a key role in guiding children's social and cognitive development. However, a hallmark of adolescence is a shift in orientation toward nonfamilial social targets, an adaptive process that prepares adolescents for their independence. Little is known regarding neurobiological signatures underlying changes in adolescents' social orientation. Using functional brain imaging of human voice processing in children and adolescents (ages 7-16), we demonstrate distinct neural signatures for mother's voice and nonfamilial voices across child and adolescent development in reward and social valuation systems, instantiated in nucleus accumbens and ventromedial prefrontal cortex. While younger children showed greater activity in these brain systems for mother's voice compared with nonfamilial voices, older adolescents showed the opposite effect, with increased activity for nonfamilial compared with mother's voice. Findings uncover a critical role for reward and social valuative brain systems in the pronounced changes in adolescents' orientation toward nonfamilial social targets. Our approach provides a template for examining developmental shifts in social reward and motivation in individuals with pronounced social impairments, including adolescents with autism. SIGNIFICANCE STATEMENT: Children's social worlds undergo a transformation during adolescence. While socialization in young children revolves around parents and caregivers, adolescence is characterized by a shift in social orientation toward nonfamilial social partners. Here we show that this shift is reflected in neural activity measured from reward processing regions in response to brief vocal samples. When younger children hear their mother's voice, reward processing regions show greater activity compared with when they hear nonfamilial, unfamiliar voices. Strikingly, older adolescents show the opposite effect, with increased activity for nonfamilial compared with mother's voice. Findings identify the brain basis of adolescents' switch in social orientation toward nonfamilial social partners and provide a template for understanding neurodevelopment in clinical populations with social and communication difficulties.


Subjects
Autistic Disorder, Voice, Adolescent, Brain/physiology, Child, Preschool Child, Female, Humans, Mothers, Reward, Voice/physiology
20.
Neuroimage ; 278: 120282, 2023 09.
Article in English | MEDLINE | ID: mdl-37468021

ABSTRACT

The posterior superior temporal gyrus (pSTG) has been implicated in the integration of auditory feedback and motor system for controlling vocal production. However, the question as to whether and how the pSTG is causally involved in vocal feedback control is currently unclear. To this end, the present study selectively stimulated the left or right pSTG with continuous theta burst stimulation (c-TBS) in healthy participants, then used event-related potentials to investigate neurobehavioral changes in response to altered auditory feedback during vocal pitch regulation. The results showed that, compared to control (vertex) stimulation, c-TBS over the right pSTG led to smaller vocal compensations for pitch perturbations accompanied by smaller cortical N1 and larger P2 responses. Enhanced P2 responses received contributions from the right-lateralized temporal and parietal regions as well as the insula, and were significantly correlated with suppressed vocal compensations. Surprisingly, these effects were not found when comparing c-TBS over the left pSTG with control stimulation. Our findings provide evidence, for the first time, that supports a causal relationship between right, but not left, pSTG and auditory-motor integration for vocal pitch regulation. This lends support to a right-lateralized contribution of the pSTG in not only the bottom-up detection of vocal feedback errors but also the involvement of driving motor commands for error correction in a top-down manner.


Subjects
Speech, Voice, Humans, Speech/physiology, Wernicke Area, Feedback, Pitch Perception/physiology, Voice/physiology, Sensory Feedback/physiology, Acoustic Stimulation/methods