Results 1 - 20 of 135
1.
Proc Natl Acad Sci U S A ; 121(26): e2318361121, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38889147

ABSTRACT

When listeners hear a voice, they rapidly form a complex first impression of who the person behind that voice might be. We characterize how these multivariate first impressions from voices emerge over time across different levels of abstraction using electroencephalography and representational similarity analysis. We find that for eight perceived physical (gender, age, and health), trait (attractiveness, dominance, and trustworthiness), and social characteristics (educatedness and professionalism), representations emerge early (~80 ms after stimulus onset), with voice acoustics contributing to those representations between ~100 ms and 400 ms. While impressions of person characteristics are highly correlated, we find evidence for highly abstracted, independent representations of individual person characteristics. These abstracted representations emerge gradually over time. That is, representations of physical characteristics (age, gender) arise early (from ~120 ms), while representations of some trait and social characteristics emerge later (~360 ms onward). The findings align with recent theoretical models and shed light on the computations underpinning person perception from voices.
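Representational similarity analysis (RSA), as used in this study, compares the geometry of neural and model spaces: build a representational dissimilarity matrix (RDM) over stimuli from each space, then correlate their upper triangles. A minimal NumPy sketch; the stimulus count, channel count, and single-trait rating model below are illustrative assumptions, not the study's actual data or pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: EEG patterns at one time point for 12 voice stimuli
# (12 stimuli x 64 channels) and one trait rating per stimulus.
eeg = rng.normal(size=(12, 64))
ratings = rng.normal(size=(12, 1))  # e.g., perceived trustworthiness

def rdm(x):
    """Representational dissimilarity: pairwise Euclidean distances,
    returned as the vectorized upper triangle."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return d[np.triu_indices(len(x), k=1)]

def spearman(a, b):
    """Spearman correlation = Pearson correlation of ranks (no ties assumed)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean(); rb -= rb.mean()
    return (ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb))

similarity = spearman(rdm(eeg), rdm(ratings))
print(f"model-brain RDM correlation: {similarity:+.3f}")
```

In the actual method, this correlation would be computed at each EEG time point to trace when a trait's representation emerges.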


Subjects
Auditory Perception, Brain, Electroencephalography, Voice, Humans, Male, Female, Voice/physiology, Adult, Brain/physiology, Auditory Perception/physiology, Young Adult, Social Perception
2.
Proc Natl Acad Sci U S A ; 121(25): e2405588121, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38861607

ABSTRACT

Many animals can extract useful information from the vocalizations of other species. Neuroimaging studies have evidenced areas sensitive to conspecific vocalizations in the cerebral cortex of primates, but how these areas process heterospecific vocalizations remains unclear. Using fMRI-guided electrophysiology, we recorded the spiking activity of individual neurons in the anterior temporal voice patches of two macaques while they listened to complex sounds including vocalizations from several species. In addition to cells selective for conspecific macaque vocalizations, we identified an unsuspected subpopulation of neurons with strong selectivity for human voice, not merely explained by spectral or temporal structure of the sounds. The auditory representational geometry implemented by these neurons was strongly related to that measured in the human voice areas with neuroimaging and only weakly to low-level acoustical structure. These findings provide new insights into the neural mechanisms involved in auditory expertise and the evolution of communication systems in primates.


Subjects
Auditory Perception, Magnetic Resonance Imaging, Neurons, Vocalization, Animal, Voice, Animals, Humans, Neurons/physiology, Voice/physiology, Magnetic Resonance Imaging/methods, Vocalization, Animal/physiology, Auditory Perception/physiology, Male, Macaca mulatta, Brain/physiology, Acoustic Stimulation, Brain Mapping/methods
3.
Eur J Neurosci ; 60(2): 4078-4094, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38777332

ABSTRACT

Although the attractiveness of voices plays an important role in social interactions, it is unclear how voice attractiveness and social interest influence social decision-making. Here, we combined the ultimatum game with recording event-related brain potentials (ERPs) and examined the effect of attractive versus unattractive voices of the proposers, expressing positive versus negative social interest ("I like you" vs. "I don't like you"), on the acceptance of the proposal. Overall, fair offers were accepted at significantly higher rates than unfair offers, and high voice attractiveness increased acceptance rates for all proposals. In ERPs in response to the voices, their attractiveness and expressed social interests yielded early additive effects in the N1 component, followed by interactions in the subsequent P2, P3 and N400 components. More importantly, unfair offers elicited a larger Medial Frontal Negativity (MFN) than fair offers but only when the proposer's voice was unattractive or when the voice carried positive social interest. These results suggest that both voice attractiveness and social interest moderate social decision-making and there is a similar "beauty premium" for voices as for faces.


Subjects
Decision Making, Evoked Potentials, Voice, Humans, Male, Female, Evoked Potentials/physiology, Voice/physiology, Decision Making/physiology, Young Adult, Adult, Electroencephalography/methods, Brain/physiology, Adolescent
4.
Hum Brain Mapp ; 45(10): e26724, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39001584

ABSTRACT

Music is ubiquitous, both in its instrumental and vocal forms. While speech perception at birth has been at the core of an extensive corpus of research, the origins of the ability to discriminate instrumental or vocal melodies are still not well investigated. In previous studies comparing vocal and musical perception, the vocal stimuli were mainly related to speaking, including language, and not to the non-language singing voice. In the present study, to better compare a melodic instrumental line with the voice, we used singing as the comparison stimulus, reducing the dissimilarities between the two stimuli as much as possible and separating language perception from vocal musical perception. Forty-five newborns were scanned, 10 full-term infants and 35 preterm infants at term-equivalent age (mean gestational age at test = 40.17 weeks, SD = 0.44), using functional magnetic resonance imaging while listening to five melodies played by a musical instrument (flute) or sung by a female voice. To examine dynamic task-based effective connectivity, we employed a psychophysiological interaction of co-activation patterns (PPI-CAPs) analysis, using the auditory cortices as seed region, to investigate moment-to-moment changes in task-driven modulation of cortical activity during the fMRI task. Our findings reveal condition-specific, dynamically occurring patterns of co-activation (PPI-CAPs). During the vocal condition, the auditory cortex co-activates with the sensorimotor and salience networks, while during the instrumental condition, it co-activates with the visual cortex and the superior frontal cortex. These results show that the vocal stimulus elicits sensorimotor aspects of auditory perception and is processed as the more salient stimulus, whereas the instrumental condition activates higher-order cognitive and visuo-spatial networks. Common neural signatures for both auditory stimuli were found in the precuneus and posterior cingulate gyrus. Finally, this study adds knowledge on the dynamic brain connectivity underlying newborns' capability for early and specialized auditory processing, highlighting the relevance of dynamic approaches to studying brain function in newborn populations.


Subjects
Auditory Perception, Magnetic Resonance Imaging, Music, Humans, Female, Male, Auditory Perception/physiology, Infant, Newborn, Singing/physiology, Infant, Premature/physiology, Brain Mapping, Acoustic Stimulation, Brain/physiology, Brain/diagnostic imaging, Voice/physiology
5.
Ear Hear ; 45(4): 952-968, 2024.
Article in English | MEDLINE | ID: mdl-38616318

ABSTRACT

OBJECTIVES: Postlingually deaf adults with cochlear implants (CIs) have difficulties with perceiving differences in speakers' voice characteristics and benefit little from voice differences for the perception of speech in competing speech. However, not much is known yet about the perception and use of voice characteristics in prelingually deaf implanted children with CIs. Unlike CI adults, most CI children became deaf during the acquisition of language. Extensive neuroplastic changes during childhood could make CI children better at using the available acoustic cues than CI adults, or the lack of exposure to a normal acoustic speech signal could make it more difficult for them to learn which acoustic cues they should attend to. This study aimed to examine to what degree CI children can perceive voice cues and benefit from voice differences for perceiving speech in competing speech, comparing their abilities to those of normal-hearing (NH) children and CI adults. DESIGN: CI children's voice cue discrimination (experiment 1), voice gender categorization (experiment 2), and benefit from target-masker voice differences for perceiving speech in competing speech (experiment 3) were examined in three experiments. The main focus was on the perception of mean fundamental frequency (F0) and vocal-tract length (VTL), the primary acoustic cues related to speakers' anatomy and perceived voice characteristics, such as voice gender. RESULTS: CI children's F0 and VTL discrimination thresholds indicated lower sensitivity to differences compared with their NH-age-equivalent peers, but their mean discrimination thresholds of 5.92 semitones (st) for F0 and 4.10 st for VTL indicated higher sensitivity than postlingually deaf CI adults with mean thresholds of 9.19 st for F0 and 7.19 st for VTL. Furthermore, CI children's perceptual weighting of F0 and VTL cues for voice gender categorization closely resembled that of their NH-age-equivalent peers, in contrast with CI adults. Finally, CI children had more difficulties in perceiving speech in competing speech than their NH-age-equivalent peers, but they performed better than CI adults. Unlike CI adults, CI children showed a benefit from target-masker voice differences in F0 and VTL, similar to NH children. CONCLUSION: Although CI children's F0 and VTL voice discrimination scores were overall lower than those of NH children, their weighting of F0 and VTL cues for voice gender categorization and their benefit from target-masker differences in F0 and VTL resembled that of NH children. Together, these results suggest that prelingually deaf implanted CI children can effectively utilize spectrotemporally degraded F0 and VTL cues for voice and speech perception, generally outperforming postlingually deaf CI adults in comparable tasks. These findings underscore the presence of F0 and VTL cues in the CI signal to a certain degree and suggest other factors contributing to the perception challenges faced by CI adults.
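The discrimination thresholds above are expressed in semitones, the standard logarithmic unit of 12 steps per octave. A small sketch of the conversion; the 220 Hz reference F0 is a hypothetical value for illustration, not one taken from the study.

```python
import math

def semitones(f_ref, f):
    """Distance in semitones between two frequencies: 12 * log2(f / f_ref)."""
    return 12.0 * math.log2(f / f_ref)

# Applying the CI children's reported mean F0 threshold (5.92 st) to a
# hypothetical 220 Hz reference gives the corresponding difference in Hz.
f_ref = 220.0
f_jnd = f_ref * 2 ** (5.92 / 12.0)
print(f"threshold at {f_ref:.0f} Hz: {f_jnd - f_ref:.1f} Hz")
```

The logarithmic unit matters: the same semitone threshold corresponds to a much larger Hz difference at a higher reference F0.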


Subjects
Cochlear Implantation, Cochlear Implants, Cues, Deafness, Speech Perception, Humans, Deafness/rehabilitation, Male, Female, Child, Adult, Young Adult, Adolescent, Voice/physiology, Case-Control Studies, Child, Preschool, Middle Aged
6.
Conscious Cogn ; 123: 103718, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38880020

ABSTRACT

The phenomenon of "hearing voices" can be found not only in psychotic disorders, but also in the general population, with individuals across cultures reporting auditory perceptions of supernatural beings. In our preregistered study, we investigated a possible mechanism of such experiences, grounded in the predictive processing model of agency detection. We predicted that in a signal detection task, expecting fewer or more voices than were actually present would drive the response bias toward a more conservative or more liberal response strategy, respectively. Moreover, we hypothesized that including sensory noise would enhance these expectancy effects. In line with our predictions, the findings show that detection of voices relies on expectations and that this effect is especially pronounced in the case of unreliable sensory data. As such, the study contributes to our understanding of predictive processes in hearing and the building blocks of voice-hearing experiences.
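The conservative and liberal response strategies described above are the standard signal-detection quantities: sensitivity d′ and criterion c, both computed from hit and false-alarm rates via the probit (inverse normal CDF) transform. The rates below are invented for illustration only, not the study's data.

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit transform

def sdt(hit_rate, fa_rate):
    """Sensitivity d' and response criterion c from hit and false-alarm
    rates; c > 0 indicates a conservative bias, c < 0 a liberal one."""
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical listeners told to expect few vs. many voices in the noise:
d_few, c_few = sdt(hit_rate=0.60, fa_rate=0.05)    # cautious responding
d_many, c_many = sdt(hit_rate=0.90, fa_rate=0.35)  # liberal responding
print(f"expect-few:  d'={d_few:.2f}, c={c_few:+.2f}")
print(f"expect-many: d'={d_many:.2f}, c={c_many:+.2f}")
```

With these made-up rates, sensitivity stays comparable while the criterion flips sign, which is the signature of an expectation-driven bias shift rather than a change in perceptual sensitivity.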


Subjects
Auditory Perception, Signal Detection, Psychological, Humans, Female, Adult, Male, Young Adult, Auditory Perception/physiology, Signal Detection, Psychological/physiology, Voice/physiology, Anticipation, Psychological/physiology, Hallucinations/physiopathology
7.
J Acoust Soc Am ; 155(1): 18-28, 2024 01 01.
Article in English | MEDLINE | ID: mdl-38169520

ABSTRACT

In an earlier study, we analyzed how audio signals obtained from three professional opera singers varied when they sang one-octave-wide eight-tone scales in ten different emotional colors. The results showed systematic variations in voice source and long-term-average spectrum (LTAS) parameters associated with major emotion "families". For two of the singers, subglottal pressure (PSub) was also recorded, allowing analysis of an additional main physiological voice control parameter, glottal resistance (defined as the ratio between PSub and glottal flow), which is related to glottal adduction. In the present study, we analyze voice source and LTAS parameters derived from the audio signal and their correlation with PSub and glottal resistance. The measured parameters showed a systematic relationship with the four emotion families observed in our previous study. They also varied systematically with values of the ten emotions along the valence, power, and arousal dimensions; valence showed a significant correlation with the ratio between acoustic voice source energy and subglottal pressure, while power varied significantly with sound level and two measures related to the spectral dominance of the lowest spectrum partial, the fundamental.
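Glottal resistance as defined above is simply the ratio of subglottal pressure to glottal airflow, so stronger adduction (less flow at the same pressure) yields higher resistance. A toy numeric sketch; the units and values are illustrative, not measurements from the study.

```python
def glottal_resistance(p_sub, glottal_flow):
    """Glottal resistance = subglottal pressure / glottal flow.
    Units here are illustrative (e.g., kPa and L/s)."""
    return p_sub / glottal_flow

# Same pressure with halved flow (stronger adduction) doubles the resistance.
r_breathy = glottal_resistance(1.0, 0.4)
r_pressed = glottal_resistance(1.0, 0.2)
print(r_breathy, r_pressed)
```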


Subjects
Singing, Voice, Humans, Voice Quality, Voice/physiology, Acoustics, Glottis/physiology
8.
J Acoust Soc Am ; 156(2): 1283-1308, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39172710

ABSTRACT

Sound for the human voice is produced by vocal fold flow-induced vibration and involves a complex coupling between flow dynamics, tissue motion, and acoustics. Over the past three decades, synthetic, self-oscillating vocal fold models have played an increasingly important role in the study of these complex physical interactions. In particular, two types of models have been established: "membranous" vocal fold models, such as a water-filled latex tube, and "elastic solid" models, such as ultrasoft silicone formed into a vocal fold-like shape and in some cases with multiple layers of differing stiffness to mimic the human vocal fold tissue structure. In this review, the designs, capabilities, and limitations of these two types of models are presented. Considerations unique to the implementation of elastic solid models, including fabrication processes and materials, are discussed. Applications in which these models have been used to study the underlying mechanical principles that govern phonation are surveyed, and experimental techniques and configurations are reviewed. Finally, recommendations for continued development of these models for even more lifelike response and clinical relevance are summarized.


Subjects
Phonation, Vibration, Vocal Cords, Vocal Cords/physiology, Vocal Cords/anatomy & histology, Humans, Models, Anatomic, Biomechanical Phenomena, Voice/physiology, Elasticity, Models, Biological
9.
J Acoust Soc Am ; 156(1): 278-283, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38980102

ABSTRACT

How we produce and perceive voice is constrained by laryngeal physiology and biomechanics. Such constraints may present themselves as principal dimensions in the voice outcome space that are shared among speakers. This study attempts to identify such principal dimensions in the voice outcome space and the underlying laryngeal control mechanisms in a three-dimensional computational model of voice production. A large-scale voice simulation was performed with parametric variations in vocal fold geometry and stiffness, glottal gap, vocal tract shape, and subglottal pressure. Principal component analysis was applied to data combining both the physiological control parameters and voice outcome measures. The results showed three dominant dimensions accounting for at least 50% of the total variance. The first two dimensions describe respiratory-laryngeal coordination in controlling the energy balance between low- and high-frequency harmonics in the produced voice, and the third dimension describes control of the fundamental frequency. The dominance of these three dimensions suggests that voice changes along these principal dimensions are likely to be more consistently produced and perceived by most speakers than other voice changes, and thus are more likely to have emerged during evolution and be used to convey important personal information, such as emotion and larynx size.
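Principal component analysis of the kind applied here can be sketched compactly: z-score the combined parameter/outcome matrix and read variance fractions off the singular values. The synthetic data below plant a known three-dimensional structure; the run and variable counts are assumptions for illustration, not the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical simulation output: 500 runs x 8 variables mixing control
# parameters (e.g., subglottal pressure, stiffness) and outcome measures
# (e.g., F0, spectral slope). Three latent factors generate the columns.
latent = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 8))
data = latent @ mixing + 0.3 * rng.normal(size=(500, 8))

# PCA on z-scored data via SVD; squared singular values give variance.
zscored = (data - data.mean(axis=0)) / data.std(axis=0)
_, s, _ = np.linalg.svd(zscored, full_matrices=False)
explained = s**2 / np.sum(s**2)

top3 = explained[:3].sum()
print(f"variance explained by first three PCs: {top3:.1%}")
```

Because the synthetic data were built from three latent factors, the first three components dominate, mirroring the paper's finding that three dimensions account for at least 50% of the total variance.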


Subjects
Larynx, Phonation, Principal Component Analysis, Humans, Biomechanical Phenomena, Larynx/physiology, Larynx/anatomy & histology, Voice/physiology, Vocal Cords/physiology, Vocal Cords/anatomy & histology, Computer Simulation, Voice Quality, Speech Acoustics, Pressure, Models, Biological, Models, Anatomic
10.
J Acoust Soc Am ; 155(6): 3822-3832, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38874464

ABSTRACT

This study proposes the use of vocal resonators to enhance cardiac auscultation signals and evaluates their performance for voice-noise suppression. Data were collected using two electronic stethoscopes while each study subject was talking. One collected the auscultation signal from the chest while the other collected voice signals from one of three vocal resonators (cheek, back of the neck, and shoulder). The spectral subtraction method was applied to the signals. Both objective and subjective metrics were used to evaluate the quality of the enhanced signals and to investigate the most effective vocal resonator for noise suppression. Our preliminary findings showed a significant improvement after enhancement and demonstrated the efficacy of vocal resonators. A listening survey conducted with thirteen physicians showed that the enhanced signals received significantly better sound-quality scores than the original signals. The shoulder resonator group demonstrated significantly better sound quality than the cheek group when reducing voice sound in cardiac auscultation signals. The suggested method has the potential to be used for the development of an electronic stethoscope with a robust noise removal function. Significant clinical benefits are expected from the expedited preliminary diagnostic procedure.
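The spectral subtraction method referenced above has a classic frame-wise form: estimate an average noise magnitude spectrum from a noise-only reference, subtract it from each frame's magnitude while keeping the noisy phase, and floor negative magnitudes at zero. The sketch below uses synthetic signals (a sinusoidal "heart sound" plus broadband "voice" noise), not the study's stethoscope recordings, and the sampling rate and frame size are arbitrary choices.

```python
import numpy as np

def spectral_subtraction(noisy, noise_ref, frame=256):
    """Classic magnitude spectral subtraction: remove an average noise
    spectrum (estimated from a noise-only reference) from each frame,
    keeping the noisy phase and flooring negative magnitudes at zero."""
    usable = len(noise_ref) // frame * frame
    noise_mag = np.abs(
        np.fft.rfft(noise_ref[:usable].reshape(-1, frame), axis=1)
    ).mean(axis=0)

    out = np.zeros_like(noisy)
    for start in range(0, len(noisy) - frame + 1, frame):
        spec = np.fft.rfft(noisy[start:start + frame])
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
        out[start:start + frame] = np.fft.irfft(mag * np.exp(1j * np.angle(spec)))
    return out

# Synthetic demo: a 50 Hz "heart sound" buried in broadband "voice" noise.
fs = 4000
t = np.arange(8192) / fs
rng = np.random.default_rng(2)
heart = np.sin(2 * np.pi * 50 * t)
noise = 0.8 * rng.normal(size=t.size)
noisy = heart + noise
enhanced = spectral_subtraction(noisy, noise)
print(f"RMS error before: {np.sqrt(np.mean(noise**2)):.3f}, "
      f"after: {np.sqrt(np.mean((enhanced - heart)**2)):.3f}")
```

Real implementations add windowing, overlap-add, and an oversubtraction factor to reduce the "musical noise" this plain version leaves behind.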


Subjects
Heart Auscultation, Signal Processing, Computer-Assisted, Stethoscopes, Humans, Heart Auscultation/instrumentation, Heart Auscultation/methods, Heart Auscultation/standards, Male, Female, Adult, Heart Sounds/physiology, Sound Spectrography, Equipment Design, Voice/physiology, Middle Aged, Voice Quality, Vibration, Noise
11.
J Acoust Soc Am ; 156(2): 922-938, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39133041

ABSTRACT

Voices arguably occupy a superior role in auditory processing. Specifically, studies have reported that singing voices are processed faster and more accurately and possess greater salience in musical scenes compared to instrumental sounds. However, the underlying acoustic features of this superiority and the generality of these effects remain unclear. This study investigates the impact of frequency micro-modulations (FMM) and the influence of interfering sounds on sound recognition. Thirty young participants, half with musical training, engage in three sound recognition experiments featuring short vocal and instrumental sounds in a go/no-go task. Accuracy and reaction times are measured for sounds from recorded samples and excerpts of popular music. Each sound is presented in separate versions with and without FMM, in isolation or accompanied by a piano. Recognition varies across sound categories, but no general vocal superiority emerges and FMM has no effect. When presented together with interfering sounds, all sounds exhibit degradation in recognition. However, whereas /a/ sounds stand out by showing a distinct robustness to interference (i.e., less degradation of recognition), /u/ sounds lack this robustness. Acoustical analysis implies that the recognition differences can be explained by spectral similarities. Together, these results challenge the notion of general vocal superiority in auditory perception.


Subjects
Acoustic Stimulation, Auditory Perception, Music, Recognition, Psychology, Humans, Male, Female, Young Adult, Adult, Acoustic Stimulation/methods, Auditory Perception/physiology, Reaction Time, Singing, Voice/physiology, Adolescent, Sound Spectrography, Voice Quality
12.
Sensors (Basel) ; 24(4)2024 Feb 10.
Article in English | MEDLINE | ID: mdl-38400330

ABSTRACT

Respiratory diseases represent a significant global burden, necessitating efficient diagnostic methods for timely intervention. Digital biomarkers based on audio, acoustics, and sound from the upper and lower respiratory system, as well as the voice, have emerged as valuable indicators of respiratory functionality. Recent advancements in machine learning (ML) algorithms offer promising avenues for the identification and diagnosis of respiratory diseases through the analysis and processing of such audio-based biomarkers. An ever-increasing number of studies employ ML techniques to extract meaningful information from audio biomarkers. Beyond disease identification, these studies explore diverse aspects such as the recognition of cough sounds amidst environmental noise, the analysis of respiratory sounds to detect respiratory symptoms like wheezes and crackles, as well as the analysis of the voice/speech for the evaluation of human voice abnormalities. To provide a more in-depth analysis, this review examines 75 relevant audio analysis studies across three distinct areas of concern based on respiratory diseases' symptoms: (a) cough detection, (b) lower respiratory symptoms identification, and (c) diagnostics from the voice and speech. Furthermore, publicly available datasets commonly utilized in this domain are presented. It is observed that research trends are influenced by the pandemic, with a surge in studies on COVID-19 diagnosis, mobile data acquisition, and remote diagnosis systems.


Subjects
Artificial Intelligence, COVID-19, Humans, COVID-19/diagnosis, Cough/diagnosis, Cough/physiopathology, Respiratory Sounds/diagnosis, Respiratory Sounds/physiopathology, Machine Learning, Respiratory Tract Diseases/diagnosis, SARS-CoV-2/isolation & purification, Algorithms, Voice/physiology
13.
JAMA ; 331(15): 1259-1261, 2024 04 16.
Article in English | MEDLINE | ID: mdl-38517420

ABSTRACT

In this Medical News article, Edward Chang, MD, chair of the department of neurological surgery at the University of California, San Francisco Weill Institute for Neurosciences, joins JAMA Editor in Chief Kirsten Bibbins-Domingo, PhD, MD, MAS, to discuss the potential for AI to revolutionize communication for those unable to speak due to aphasia.


Subjects
Aphasia, Artificial Intelligence, Avatar, Speech, Voice, Humans, Speech/physiology, Voice/physiology, Voice Quality, Aphasia/etiology, Aphasia/therapy, Equipment and Supplies
14.
Nat Commun ; 15(1): 1873, 2024 Mar 12.
Article in English | MEDLINE | ID: mdl-38472193

ABSTRACT

Voice disorders resulting from various pathological vocal fold conditions or postoperative recovery from laryngeal cancer surgeries are common causes of dysphonia. Here, we present a self-powered wearable sensing-actuation system based on soft magnetoelasticity that enables assisted speaking without relying on the vocal folds. It has a light weight of approximately 7.2 g, a skin-like modulus of 7.83 × 10⁵ Pa, stability against skin perspiration, and a maximum stretchability of 164%. The wearable sensing component can effectively capture extrinsic laryngeal muscle movement and convert it into high-fidelity, analyzable electrical signals, which can be translated into speech signals with the assistance of machine learning algorithms with an accuracy of 94.68%. Then, with the wearable actuation component, the speech can be expressed as voice signals while circumventing vocal fold vibration. We expect this approach to facilitate the restoration of normal voice function and significantly enhance the quality of life of patients with dysfunctional vocal folds.


Subjects
Voice Disorders, Voice, Wearable Electronic Devices, Humans, Vocal Cords/physiology, Quality of Life, Voice/physiology
15.
Sci Rep ; 14(1): 10488, 2024 05 07.
Article in English | MEDLINE | ID: mdl-38714709

ABSTRACT

Vocal attractiveness influences important social outcomes. While most research on the acoustic parameters that influence vocal attractiveness has focused on the possible roles of sexually dimorphic characteristics of voices, such as fundamental frequency (i.e., pitch) and formant frequencies (i.e., a correlate of body size), other work has reported that increasing vocal averageness increases attractiveness. Here we investigated the roles these three characteristics play in judgments of the attractiveness of male and female voices. In Study 1, we found that increasing vocal averageness significantly decreased distinctiveness ratings, demonstrating that participants could detect manipulations of vocal averageness in this stimulus set and using this testing paradigm. However, in Study 2, we found no evidence that increasing averageness significantly increased attractiveness ratings of voices. In Study 3, we found that fundamental frequency was negatively correlated with male vocal attractiveness and positively correlated with female vocal attractiveness. By contrast with these results for fundamental frequency, vocal attractiveness and formant frequencies were not significantly correlated. Collectively, our results suggest that averageness may not necessarily significantly increase attractiveness judgments of voices and are consistent with previous work reporting significant associations between attractiveness and voice pitch.


Subjects
Beauty, Voice, Humans, Male, Female, Voice/physiology, Adult, Young Adult, Judgment/physiology, Adolescent
16.
Otolaryngol Head Neck Surg ; 171(2): 340-352, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38822764

ABSTRACT

OBJECTIVE: The vocal biomarkers market was worth $1.9B in 2021 and is projected to exceed $5.1B by 2028, for a compound annual growth rate of 15.15%. This investment growth demonstrates a blossoming interest in voice and artificial intelligence (AI) as they relate to human health. The objective of this study was to map the current landscape of start-ups utilizing voice as a biomarker in health-tech. DATA SOURCES: A comprehensive search for start-ups was conducted using Google, LinkedIn, Twitter, and Facebook. A review of the research was performed using company websites, PubMed, and Google Scholar. REVIEW METHODS: A 3-pronged approach was taken to thoroughly map the landscape. First, an internet search was conducted to identify current start-ups focusing on products relating to voice as a biomarker of health. Second, Crunchbase was utilized to collect financial and organizational information. Third, a review of the literature was conducted to analyze publications associated with the identified start-ups. RESULTS: A total of 27 start-ups with a focus on the utilization of AI for developing biomarkers of health from the human voice were identified. Twenty-four of these start-ups garnered $178,808,039 in investments. The 27 start-ups published 194 publications combined, 128 (66%) of which were peer reviewed. CONCLUSION: There is growing enthusiasm surrounding voice as a biomarker in health-tech. Academic drive may complement commercialization to best achieve progress in this arena. More research is needed to accurately capture the entirety of the field, including larger industry players, academic institutions, and non-English content.


Subjects
Biomarkers, Voice, Humans, Voice/physiology, Artificial Intelligence
17.
PeerJ ; 12: e16904, 2024.
Article in English | MEDLINE | ID: mdl-38371372

ABSTRACT

Background: The ability to differentiate familiar from unfamiliar humans has been considered a product of domestication or early experience. Few studies have focused on voice recognition in Felidae despite the fact that this family presents the rare opportunity to compare domesticated species to their wild counterparts and to examine the role of human rearing. Methods: We tested whether non-domesticated Felidae species recognized familiar human voices by exposing them to audio playbacks of familiar and unfamiliar humans. In a pilot study, we presented seven cats of five species with playbacks of voices that varied in familiarity and use of the cats' names. In the main study, we presented 24 cats of 10 species with unfamiliar and then familiar voice playbacks using a habituation-dishabituation paradigm. We anticipated that human rearing and use of the cats' names would result in greater attention to the voices, as measured by the latency, intensity, and duration of responses regardless of subject sex and subfamily. Results: Cats responded more quickly and with greater intensity (e.g., full versus partial head turn, both ears moved versus one ear twitching) to the most familiar voice in both studies. They also responded for longer durations to the familiar voice compared to the unfamiliar voices in the main study. Use of the cats' name and rearing history did not significantly impact responding. These findings suggest that close human contact rather than domestication is associated with the ability to discriminate between human voices and that less social species may have socio-cognitive abilities akin to those of more gregarious species. With cats of all species being commonly housed in human care, it is important to know that they differentiate familiar from unfamiliar human voices.


Subjects
Felidae, Voice, Humans, Animals, Caregivers, Pilot Projects, Recognition, Psychology/physiology, Voice/physiology
18.
Acta Otolaryngol ; 144(1): 65-70, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38265886

ABSTRACT

BACKGROUND: There is a lack of effective treatment for idiopathic unilateral vocal fold paralysis (IUVFP). Patients reported better phonation after laryngeal nerve stimulation during our clinical examinations. OBJECTIVES: This study aims to investigate the immediate effect of recurrent laryngeal nerve (RLN) stimulation on phonation in patients with IUVFP. MATERIAL AND METHODS: Sixty-two patients with clinically identified IUVFP underwent RLN stimulation with needle electrodes. Laryngoscopy, acoustic analysis, and voice perception assessment were performed for quantitative comparison of vocal function and voice quality before and after the intervention. RESULTS: Laryngoscopic images showed a larger motion range of the paralyzed vocal fold (p < .01) and better glottal closure (p < .01) after RLN stimulation. Acoustic analysis revealed that the dysphonia severity index increased significantly (p < .01) while jitter and shimmer decreased after the intervention (p < .05). According to perceptual evaluation, RLN stimulation significantly increased RBH grades in patients with IUVFP (p < .01). Furthermore, the improvement in voice perception had a moderate positive correlation with the improvement in glottal closure. CONCLUSIONS AND SIGNIFICANCE: This study shows a short-term improvement of phonation in IUVFP patients after RLN stimulation, which provides proof-of-concept for trialing controlled delivery of RLN stimulation and assessing the durability of any observed responses.


Subjects
Vocal Cord Paralysis, Voice, Humans, Recurrent Laryngeal Nerve, Vocal Cords, Vocal Cord Paralysis/therapy, Voice/physiology, Phonation/physiology
19.
J Speech Lang Hear Res ; 67(7): 2139-2158, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38875480

ABSTRACT

PURPOSE: This systematic review aimed to evaluate the effects of singing as an intervention for the aging voice. METHOD: Quantitative studies of interventions for older adults with any medical condition that involve singing as training were reviewed, with outcomes measured in terms of respiration, phonation, and posture, the physical functions related to the aging voice. English and Chinese studies published up to April 2024 were searched across 31 electronic databases. The included articles were assessed according to the Grading of Recommendations, Assessment, Development, and Evaluations rubric. RESULTS: Seven studies were included. These studies reported outcome measures related to respiratory function only. Statistically significant improvements were observed in five of the included studies, three of which had large effect sizes. The overall level of evidence of the included studies was not high, with three studies at moderate levels and the rest at lower levels. The intervention activities included training other than singing; these non-singing training items may have introduced co-intervention bias into the results. CONCLUSIONS: This systematic review suggests that singing as an intervention for older adults with respiratory and cognitive problems could improve respiration and respiratory-phonatory control. However, none of the included studies covers the other two physical functions related to the aging voice (phonatory and postural functions), and the overall level of evidence was not high. More research evidence is needed on singing-based interventions specifically for patients with aging voice.


Subjects
Aging, Singing, Humans, Aged, Aging/physiology, Voice Disorders/therapy, Phonation/physiology, Voice Quality, Voice/physiology, Respiration, Posture/physiology, Aged, 80 and over
20.
Multisens Res ; 37(2): 125-141, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38714314

ABSTRACT

Trust is an aspect critical to human social interaction, and research has identified many cues that help in the assimilation of this social trait. Two of these cues are the pitch of the voice and the width-to-height ratio of the face (fWHR). Additionally, research has indicated that the content of a spoken sentence itself has an effect on trustworthiness, a finding that has not yet been brought into multisensory research. The current research aims to investigate previously developed theories on trust in relation to vocal pitch, fWHR, and sentence content in a multimodal setting. Twenty-six female participants were asked to judge the trustworthiness of a voice speaking a neutral or romantic sentence while seeing a face. The average pitch of the voice and the fWHR were varied systematically. Results indicate that the content of the spoken message was an important predictor of trustworthiness extending into multimodality. Further, the mean pitch of the voice and fWHR of the face appeared to be useful indicators in a multimodal setting. These effects interacted with one another across modalities. The data demonstrate that trust in the voice is shaped by task-irrelevant visual stimuli. Future research is encouraged to clarify whether these findings remain consistent across genders, age groups, and languages.


Subjects
Face, Trust, Voice, Humans, Female, Voice/physiology, Young Adult, Adult, Face/physiology, Speech Perception/physiology, Pitch Perception/physiology, Facial Recognition/physiology, Cues, Adolescent