Results 1 - 20 of 9,487
1.
Multisens Res ; 37(2): 125-141, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38714314

ABSTRACT

Trust is an aspect critical to human social interaction and research has identified many cues that help in the assimilation of this social trait. Two of these cues are the pitch of the voice and the width-to-height ratio of the face (fWHR). Additionally, research has indicated that the content of a spoken sentence itself has an effect on trustworthiness; a finding that has not yet been brought into multisensory research. The current research aims to investigate previously developed theories on trust in relation to vocal pitch, fWHR, and sentence content in a multimodal setting. Twenty-six female participants were asked to judge the trustworthiness of a voice speaking a neutral or romantic sentence while seeing a face. The average pitch of the voice and the fWHR were varied systematically. Results indicate that the content of the spoken message was an important predictor of trustworthiness extending into multimodality. Further, the mean pitch of the voice and fWHR of the face appeared to be useful indicators in a multimodal setting. These effects interacted with one another across modalities. The data demonstrate that trust in the voice is shaped by task-irrelevant visual stimuli. Future research is encouraged to clarify whether these findings remain consistent across genders, age groups, and languages.


Subjects
Face, Trust, Voice, Humans, Female, Voice/physiology, Young Adult, Adult, Face/physiology, Speech Perception/physiology, Pitch Perception/physiology, Facial Recognition/physiology, Cues (Psychology), Adolescent
2.
PLoS One ; 19(5): e0301786, 2024.
Article in English | MEDLINE | ID: mdl-38696537

ABSTRACT

OBJECTIVE: To systematically evaluate the evidence for the reliability, sensitivity and specificity of existing measures of vowel-initial voice onset. METHODS: A literature search was conducted across electronic databases for published studies (MEDLINE, EMBASE, Scopus, Web of Science, CINAHL, PubMed Central, IEEE Xplore) and grey literature (ProQuest for unpublished dissertations) measuring vowel onset. Eligibility criteria included research of any study design type or context focused on measuring human voice onset on an initial vowel. Two independent reviewers were involved at each stage of title and abstract screening, data extraction and analysis. Data extracted included measures used, their reliability, sensitivity and specificity. Risk of bias and certainty of evidence were assessed using GRADE as the data of interest were extracted. RESULTS: The search retrieved 6,983 records. Titles and abstracts were screened against the inclusion criteria by two independent reviewers, with a third reviewer responsible for conflict resolution. Thirty-five papers were included in the review, which identified five categories of voice onset measurement: auditory perceptual, acoustic, aerodynamic, physiological and visual imaging. Reliability was explored in 14 papers with varied reliability ratings, while sensitivity was rarely assessed, and no assessment of specificity was conducted across any of the included records. Certainty of evidence ranged from very low to moderate with high variability in methodology and voice onset measures used. CONCLUSIONS: A range of vowel-initial voice onset measurements have been applied throughout the literature; however, there is a lack of evidence regarding their sensitivity, specificity and reliability in the detection and discrimination of voice onset types. Heterogeneity in study populations and methods precludes conclusions on the most valid measures. There is a clear need for standardisation of research methodology, and for future studies to examine the practicality of these measures in research and clinical settings.


Subjects
Sensitivity and Specificity, Humans, Reproducibility of Results, Voice
3.
Sci Rep ; 14(1): 10488, 2024 05 07.
Article in English | MEDLINE | ID: mdl-38714709

ABSTRACT

Vocal attractiveness influences important social outcomes. While most research on the acoustic parameters that influence vocal attractiveness has focused on the possible roles of sexually dimorphic characteristics of voices, such as fundamental frequency (i.e., pitch) and formant frequencies (i.e., a correlate of body size), other work has reported that increasing vocal averageness increases attractiveness. Here we investigated the roles these three characteristics play in judgments of the attractiveness of male and female voices. In Study 1, we found that increasing vocal averageness significantly decreased distinctiveness ratings, demonstrating that participants could detect manipulations of vocal averageness in this stimulus set and using this testing paradigm. However, in Study 2, we found no evidence that increasing averageness significantly increased attractiveness ratings of voices. In Study 3, we found that fundamental frequency was negatively correlated with male vocal attractiveness and positively correlated with female vocal attractiveness. By contrast with these results for fundamental frequency, vocal attractiveness and formant frequencies were not significantly correlated. Collectively, our results suggest that averageness may not necessarily significantly increase attractiveness judgments of voices and are consistent with previous work reporting significant associations between attractiveness and voice pitch.


Subjects
Beauty, Voice, Humans, Male, Female, Voice/physiology, Adult, Young Adult, Judgment/physiology, Adolescent
4.
PLoS One ; 19(5): e0302739, 2024.
Article in English | MEDLINE | ID: mdl-38728329

ABSTRACT

BACKGROUND: Deep brain stimulation (DBS) reliably ameliorates cardinal motor symptoms in Parkinson's disease (PD) and essential tremor (ET). However, the effects of DBS on speech, voice and language have been inconsistent and have not been examined comprehensively in a single study. OBJECTIVE: We conducted a systematic analysis of the literature by reviewing studies that examined the effects of DBS on speech, voice and language in PD and ET. METHODS: A total of 675 publications were retrieved from PubMed, Embase, CINAHL, Web of Science, Cochrane Library and Scopus databases. Based on our selection criteria, 90 papers were included in our analysis. The selected publications were categorized into four subcategories: Fluency, Word production, Articulation and phonology, and Voice quality. RESULTS: The results suggested a long-term decline in verbal fluency, with more studies reporting deficits in phonemic fluency than semantic fluency following DBS. Additionally, high-frequency stimulation, left-sided DBS and bilateral DBS were associated with worse verbal fluency outcomes. Naming improved in the short term following DBS-ON compared to DBS-OFF, with no long-term differences between the two conditions. Bilateral and low-frequency DBS demonstrated a relative improvement for phonation and articulation. Nonetheless, long-term DBS exacerbated phonation and articulation deficits. The effect of DBS on voice was highly variable, with both improvements and deterioration in different measures of voice. CONCLUSION: This was the first study that aimed to combine the outcomes of speech, voice, and language following DBS in a single systematic review. The findings revealed a heterogeneous pattern of results for speech, voice, and language across DBS studies, and provided directions for future studies.


Subjects
Deep Brain Stimulation, Language, Parkinson Disease, Speech, Voice, Deep Brain Stimulation/methods, Humans, Parkinson Disease/therapy, Parkinson Disease/physiopathology, Speech/physiology, Voice/physiology, Essential Tremor/therapy, Essential Tremor/physiopathology
5.
Commun Biol ; 7(1): 540, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38714798

ABSTRACT

The genetic influence on human vocal pitch in tonal and non-tonal languages remains largely unknown. In tonal languages, such as Mandarin Chinese, pitch changes differentiate word meanings, whereas in non-tonal languages, such as Icelandic, pitch is used to convey intonation. We addressed this question by searching for genetic associations with interindividual variation in median pitch in a Chinese major depression case-control cohort and compared our results with a genome-wide association study from Iceland. The same genetic variant, rs11046212-T in an intron of the ABCC9 gene, was one of the most strongly associated loci with median pitch in both samples. Our meta-analysis revealed four genome-wide significant hits, including two novel associations. The discovery of genetic variants influencing vocal pitch across both tonal and non-tonal languages suggests the possibility of a common genetic contribution to the human vocal system shared in two distinct populations with languages that differ in tonality (Icelandic and Mandarin).
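The abstract does not state how the Chinese and Icelandic association results were combined; a common choice for a two-cohort GWAS meta-analysis is fixed-effect inverse-variance weighting, sketched below for a single variant (illustrative only, not necessarily the authors' method).

\[
w_i = \frac{1}{\mathrm{SE}_i^{2}}, \qquad
\hat{\beta}_{\mathrm{meta}} = \frac{\sum_i w_i \hat{\beta}_i}{\sum_i w_i}, \qquad
\mathrm{SE}_{\mathrm{meta}} = \sqrt{\frac{1}{\sum_i w_i}}, \qquad
Z = \frac{\hat{\beta}_{\mathrm{meta}}}{\mathrm{SE}_{\mathrm{meta}}}
\]

Genome-wide significance is conventionally declared at p < 5 × 10⁻⁸.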


Subjects
Genome-Wide Association Study, Language, Humans, Male, Female, Single Nucleotide Polymorphism, Adult, Iceland, Case-Control Studies, Middle Aged, Voice/physiology, Pitch Perception, Asian People/genetics
6.
PLoS One ; 19(4): e0301336, 2024.
Article in English | MEDLINE | ID: mdl-38625932

ABSTRACT

Recognizing genuine human emotion is an essential task for customer-feedback and medical applications. Many methods recognize emotion from the speech signal by extracting frequency, pitch, and other dominant features, which are then used to train models that automatically detect human emotions. However, speech features alone cannot be fully relied upon: an angry customer may still speak in a quiet, low-pitched voice, which leads to incorrect predictions, and even video-based emotion detection can be fooled by feigned facial expressions. To mitigate this, a parallel model can be trained on textual data and make predictions based on the words present in the text, classifying emotions from more comprehensive information and yielding a more robust system. To address this issue, we tested four text-based classification models for classifying customer emotions. Comparing their results showed that a modified encoder-decoder model with an attention mechanism trained on textual data achieved an accuracy of 93.5%. This research highlights the pressing need for more robust emotion recognition systems and underscores the potential of models with attention mechanisms to significantly improve feedback management processes and medical applications.
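As an illustration of the text-based approach described above, the sketch below shows a minimal attention-based sentence classifier in PyTorch; the architecture, class count, and hyperparameters are assumptions for demonstration and are not the paper's modified encoder-decoder model.

import torch
import torch.nn as nn

class AttentionEmotionClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128, n_classes=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)       # one attention score per token
        self.classifier = nn.Linear(2 * hidden_dim, n_classes)

    def forward(self, token_ids):                       # token_ids: (batch, seq_len)
        h, _ = self.encoder(self.embed(token_ids))      # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)    # attention over tokens
        context = (weights * h).sum(dim=1)              # weighted sentence vector
        return self.classifier(context)                 # emotion logits

# Dummy forward pass: 8 tokenized utterances of 20 tokens each.
model = AttentionEmotionClassifier(vocab_size=10_000)
logits = model(torch.randint(1, 10_000, (8, 20)))
print(logits.shape)                                     # torch.Size([8, 4])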


Subjects
Emotions, Voice, Male, Humans, Speech, Linguistics, Recognition (Psychology)
7.
Sensors (Basel) ; 24(7)2024 Mar 22.
Article in English | MEDLINE | ID: mdl-38610256

ABSTRACT

The ongoing biodiversity crisis, driven by factors such as land-use change and global warming, emphasizes the need for effective ecological monitoring methods. Acoustic monitoring of biodiversity has emerged as an important monitoring tool. Detecting human voices in soundscape monitoring projects is useful both for analyzing human disturbance and for privacy filtering. Despite significant strides in deep learning in recent years, the deployment of large neural networks on compact devices poses challenges due to memory and latency constraints. Our approach focuses on leveraging knowledge distillation techniques to design efficient, lightweight student models for speech detection in bioacoustics. In particular, we employed the MobileNetV3-Small-Pi model to create compact yet effective student architectures to compare against the larger EcoVAD teacher model, a well-regarded voice detection architecture in eco-acoustic monitoring. The comparative analysis included examining various configurations of the MobileNetV3-Small-Pi-derived student models to identify optimal performance. Additionally, a thorough evaluation of different distillation techniques was conducted to ascertain the most effective method for model selection. Our findings revealed that the distilled models exhibited comparable performance to the EcoVAD teacher model, indicating a promising approach to overcoming computational barriers for real-time ecological monitoring.
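The distillation setup described above can be summarized by the standard soft-label objective: the student is trained on a blend of hard labels and the teacher's temperature-softened outputs. The sketch below is a generic response-based distillation loss, not the specific configuration compared in the paper.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    # Softened teacher targets and student log-probabilities.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale the KL term by T^2 so its gradients match the cross-entropy term.
    kd = F.kl_div(soft_student, soft_targets, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Dummy batch: 16 audio windows, 2 classes (human voice present / absent).
student = torch.randn(16, 2, requires_grad=True)
teacher = torch.randn(16, 2)
labels = torch.randint(0, 2, (16,))
distillation_loss(student, teacher, labels).backward()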


Subjects
Speech, Voice, Humans, Acoustics, Biodiversity, Knowledge
8.
J Acoust Soc Am ; 155(4): 2603-2611, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38629881

ABSTRACT

Open science practices have led to an increase in available speech datasets for researchers interested in acoustic analysis. Accurate evaluation of these databases frequently requires manual or semi-automated analysis. The time-intensive nature of these analyses makes them ideally suited for research assistants in laboratories focused on speech and voice production. However, the completion of high-quality, consistent, and reliable analyses requires clear rules and guidelines for all research assistants to follow. This tutorial will provide information on training and mentoring research assistants to complete these analyses, covering areas including RA training, ongoing data analysis monitoring, and documentation needed for reliable and re-creatable findings.


Subjects
Voice Disorders, Voice, Humans, Acoustics, Speech
9.
Sci Rep ; 14(1): 8977, 2024 04 18.
Article in English | MEDLINE | ID: mdl-38637516

ABSTRACT

Why do we prefer some singers to others? We investigated how much singing voice preferences can be traced back to objective features of the stimuli. To do so, we asked participants to rate short excerpts of singing performances in terms of how much they liked them as well as in terms of 10 perceptual attributes (e.g.: pitch accuracy, tempo, breathiness). We modeled liking ratings based on these perceptual ratings, as well as based on acoustic features and low-level features derived from Music Information Retrieval (MIR). Mean liking ratings for each stimulus were highly correlated between Experiments 1 (online, US-based participants) and 2 (in the lab, German participants), suggesting a role for attributes of the stimuli in grounding average preferences. We show that acoustic and MIR features barely explain any variance in liking ratings; in contrast, perceptual features of the voices achieved around 43% of prediction. Inter-rater agreement in liking and perceptual ratings was low, indicating substantial (and unsurprising) individual differences in participants' preferences and perception of the stimuli. Our results indicate that singing voice preferences are not grounded in acoustic attributes of the voices per se, but in how these features are perceptually interpreted by listeners.
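To make the MIR-based analysis concrete, the sketch below extracts a handful of low-level descriptors per excerpt and regresses mean liking ratings on them; the feature set and model are assumptions for illustration, not the authors' exact pipeline.

import numpy as np
import librosa
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

def mir_features(path):
    y, sr = librosa.load(path, sr=22050)
    f0 = librosa.yin(y, fmin=80, fmax=800, sr=sr)             # rough pitch track
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.hstack([np.nanmean(f0), np.nanstd(f0),
                      centroid.mean(), mfcc.mean(axis=1)])

# excerpt_paths and mean_liking are hypothetical: one audio file and one
# averaged liking rating per singing excerpt.
# X = np.vstack([mir_features(p) for p in excerpt_paths])
# model = RidgeCV(alphas=np.logspace(-3, 3, 13))
# r2 = cross_val_score(model, X, mean_liking, cv=5, scoring="r2").mean()
# print(f"Cross-validated variance explained: {r2:.2f}")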


Subjects
Music, Singing, Voice, Humans, Voice Quality, Acoustics
10.
Int J Pediatr Otorhinolaryngol ; 180: 111962, 2024 May.
Article in English | MEDLINE | ID: mdl-38657429

ABSTRACT

PURPOSE: In this prospective study, we aimed to investigate the difference in voice acoustic parameters between girls with idiopathic central precocious puberty (ICPP) and girls who developed normally during prepuberty. MATERIALS AND METHODS: Our study recruited 54 girls diagnosed with ICPP and randomly sampled 51 healthy prepubertal girls as the control. Tanner stages, circulating hormone levels and bone ages of the girls with ICPP, and the age and body mass index (BMI) of all participants, were recorded. Acoustic analyses were performed using the Praat computer-based voice analysis software, and the mean pitch (F0), jitter, shimmer, noise-to-harmonic ratio (NHR) and harmonic-to-noise ratio (HNR) values were compared between the patient and control groups. RESULTS: The two groups did not significantly differ in age or BMI. The F0 and jitter values were lower in the control group than in the patient group, although the difference was not statistically significant. The mean shimmer values of the patient group were significantly higher than those of the control group. In addition, a statistically significant difference was noted for the mean HNR and NHR values (P < 0.001). A moderate negative correlation was found between shimmer and hormone levels in the patient group. CONCLUSIONS: Voice acoustic parameters are among the defining features of girls with ICPP. Changes in acoustic parameters could reflect hormonal changes during puberty. Clinicians should suspect ICPP when there is a change in the voice.
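For readers who want to reproduce such measures programmatically, the sketch below uses parselmouth, a Python interface to Praat, to compute mean F0, local jitter, local shimmer, and HNR from a recording; the analysis settings (75-500 Hz pitch range, default jitter/shimmer parameters) are assumptions, and the study itself used Praat directly.

import parselmouth
from parselmouth.praat import call

def voice_measures(wav_path, f0_min=75, f0_max=500):
    snd = parselmouth.Sound(wav_path)
    pitch = snd.to_pitch(pitch_floor=f0_min, pitch_ceiling=f0_max)
    mean_f0 = call(pitch, "Get mean", 0, 0, "Hertz")
    points = call(snd, "To PointProcess (periodic, cc)", f0_min, f0_max)
    jitter = call(points, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
    shimmer = call([snd, points], "Get shimmer (local)",
                   0, 0, 0.0001, 0.02, 1.3, 1.6)
    hnr = call(snd.to_harmonicity_cc(), "Get mean", 0, 0)
    return {"mean_f0_hz": mean_f0, "jitter_local": jitter,
            "shimmer_local": shimmer, "hnr_db": hnr}

# print(voice_measures("sustained_vowel.wav"))  # hypothetical recording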


Subjects
Precocious Puberty, Humans, Precocious Puberty/blood, Female, Child, Prospective Studies, Voice Quality/physiology, Speech Acoustics, Case-Control Studies, Voice/physiology, Body Mass Index
11.
Psychol Sci ; 35(5): 543-557, 2024 May.
Article in English | MEDLINE | ID: mdl-38620057

ABSTRACT

Recently, gender-ambiguous (nonbinary) voices have been added to voice assistants to combat gender stereotypes and foster inclusion. However, if people react negatively to such voices, these laudable efforts may be counterproductive. In five preregistered studies (N = 3,684 adult participants) we found that people do react negatively, rating products described by narrators with gender-ambiguous voices less favorably than when they are described by clearly male or female narrators. The voices create a feeling of unease, or social disfluency, that affects evaluations of the products being described. These effects are best explained by low familiarity with voices that sound ambiguous. Thus, initial negative reactions can be overcome with more exposure.


Subjects
Voice, Humans, Female, Male, Adult, Young Adult, Stereotyping, Social Perception, Gender Identity, Adolescent, Middle Aged
12.
J Speech Lang Hear Res ; 67(5): 1413-1423, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38625128

ABSTRACT

PURPOSE: We studied the role of gender in metacognition of voice emotion recognition ability (ERA), reflected by self-rated confidence (SRC). To this end, we guided our study in two approaches: first, by examining the role of gender in voice ERA and SRC independently and second, by looking for gender effects on the association between ERA and SRC. METHOD: We asked 100 participants (50 men, 50 women) to interpret a set of vocal expressions portrayed by 30 actors (16 men, 14 women) as defined by their emotional meaning. Targets were 180 repetitive lexical sentences articulated in congruent emotional voices (anger, sadness, surprise, happiness, fear) and neutral expressions. Trial by trial, participants provided retrospective SRC ratings based on their emotion recognition performance. RESULTS: A binomial generalized linear mixed model (GLMM) estimating ERA accuracy revealed a significant gender effect, with women encoders (speakers) yielding higher accuracy levels than men. There was no significant effect of the decoder's (listener's) gender. A second GLMM estimating SRC found a significant effect of encoder and decoder genders, with women outperforming men. Gamma correlations were significantly greater than zero for women and men decoders. CONCLUSIONS: In spite of varying interpretations of gender in each independent rating (ERA and SRC), our results suggest that both men and women decoders were accurate in their metacognition regarding voice emotion recognition. Further research is needed to study how individuals of both genders use metacognitive knowledge in their emotional recognition and whether and how such knowledge contributes to effective social communication.
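The binomial GLMM reported above can be written, in one plausible form, as a trial-level logistic model with crossed random intercepts for decoders (listeners) and encoders (speakers); the exact fixed- and random-effects structure is an assumption here, not taken from the paper.

\[
\Pr(\mathrm{correct}_{ij} = 1) =
  \operatorname{logit}^{-1}\!\bigl(
    \beta_0
    + \beta_1\,\mathrm{EncoderGender}_{j}
    + \beta_2\,\mathrm{DecoderGender}_{i}
    + u_i + v_j
  \bigr),
\qquad
u_i \sim \mathcal{N}(0, \sigma_u^2), \quad v_j \sim \mathcal{N}(0, \sigma_v^2)
\]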


Subjects
Emotions, Voice, Humans, Male, Female, Adult, Young Adult, Sex Factors, Speech Perception, Metacognition/physiology, Recognition (Psychology), Adolescent
13.
Rev. logop. foniatr. audiol. (Ed. impr.) ; 44(1): [100330], Jan-Mar, 2024. illus, tab
Article in English | IBECS | ID: ibc-231906

ABSTRACT

Introduction: Using a test in a language or culture other than the original requires not only its adaptation but also a psychometric validation. This systematic review assesses the validation studies of voice self-report scales in Spanish. Methods: A systematic review was performed across ten databases. Studies were assessed against the criteria proposed by Terwee et al. (2007), together with additional criteria proposed specifically for this study. Spanish-language validation studies of self-report voice scales published in indexed journals were included; the search was last updated on February 2nd, 2023. Results: 15 studies evaluating 12 scales were reviewed. Not all validations met the criteria applied, and the properties reported to establish their metric robustness were, in general, few. Conclusions: This systematic review shows that the included studies report limited evidence of metric quality. Various strategies are now available to obtain more and better evidence of reliability and validity. Our contribution is to reflect on the usual practice of validating self-report scales in Spanish. The main limitation is that broader, more current evaluation protocols could have been used. We also propose to continue this work with a meta-analytic study.




Subjects
Humans, Male, Female, Voice, Psychometrics, Speech-Language Pathology, Self Report
14.
Sensors (Basel) ; 24(5)2024 Feb 25.
Article in English | MEDLINE | ID: mdl-38475029

ABSTRACT

In recent years, there has been a notable rise in the number of patients afflicted with laryngeal diseases, including cancer, trauma, and other ailments leading to voice loss. The market is witnessing a pressing demand for medical and healthcare products designed to assist individuals with voice defects, prompting the invention of the artificial throat (AT). This user-friendly device eliminates the need for complex procedures like phonation reconstruction surgery. In this review, we first give a careful introduction to the intelligent AT, which can act not only as a sound sensor but also as a thin-film sound emitter. Then, the sensing principles used to detect sound are discussed, including the capacitive, piezoelectric, electromagnetic, and piezoresistive components employed in sound sensing. Following this, the development of thermoacoustic theory and the different materials used for sound emitters are analyzed. After that, the algorithms used by the intelligent AT for speech pattern recognition are reviewed, including classical algorithms and neural network algorithms. Finally, the outlook, challenges, and conclusions for the intelligent AT are presented. The intelligent AT offers clear advantages for patients with voice impairments and has significant social value.
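As background for the sensing principles listed above, two first-order textbook relations (not taken from the review itself) show how acoustic vibration is transduced: a parallel-plate capacitive element whose gap is modulated by sound pressure, and a piezoresistive element whose resistance changes with strain.

\[
C = \frac{\varepsilon_0 \varepsilon_r A}{d}
\qquad\text{(vibration modulates the gap } d\text{, hence the capacitance } C\text{)}
\]
\[
\frac{\Delta R}{R} = \mathrm{GF}\cdot\varepsilon
\qquad\text{(the gauge factor GF relates strain } \varepsilon \text{ to the fractional resistance change)}
\]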


Subjects
Pharynx, Voice, Humans, Sound, Algorithms, Neural Networks (Computer)
15.
Nat Commun ; 15(1): 1873, 2024 Mar 12.
Article in English | MEDLINE | ID: mdl-38472193

ABSTRACT

Voice disorders resulting from various pathological vocal fold conditions, or from postoperative recovery after laryngeal cancer surgery, are common causes of dysphonia. Here, we present a self-powered wearable sensing-actuation system based on soft magnetoelasticity that enables assisted speaking without relying on the vocal folds. It has a light weight of approximately 7.2 g, a skin-like modulus of 7.83 × 10^5 Pa, stability against skin perspiration, and a maximum stretchability of 164%. The wearable sensing component can effectively capture extrinsic laryngeal muscle movements and convert them into high-fidelity, analyzable electrical signals, which can be translated into speech signals with the assistance of machine learning algorithms with an accuracy of 94.68%. Then, with the wearable actuation component, the speech can be expressed as voice signals while circumventing vocal fold vibration. We expect this approach could facilitate the restoration of normal voice function and significantly enhance the quality of life for patients with dysfunctional vocal folds.


Subjects
Voice Disorders, Voice, Wearable Electronic Devices, Humans, Vocal Cords/physiology, Quality of Life, Voice/physiology
16.
JASA Express Lett ; 4(3)2024 03 01.
Article in English | MEDLINE | ID: mdl-38426889

ABSTRACT

The discovery that listeners more accurately identify words repeated in the same voice than in a different voice has had an enormous influence on models of representation and speech perception. Although this effect has been widely replicated in English, we understand little about whether and how it generalizes across languages. In a continuous recognition memory study with Hindi speakers and listeners (N = 178), we replicated the talker-specificity effect for accuracy-based measures (hit rate and D'), and found the latency advantage to be marginal (p = 0.06). These data help us better understand talker-specificity effects cross-linguistically and highlight the importance of expanding work to less studied languages.
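The D' measure cited above is the signal detection theory sensitivity index computed from hit and false-alarm rates; a minimal sketch follows, using a log-linear correction for extreme rates (one common convention, assumed here rather than taken from the paper).

from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction keeps z-scores finite when a rate is exactly 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: 80 hits / 20 misses on repeated words, 15 false alarms /
# 85 correct rejections on new words.
print(round(d_prime(80, 20, 15, 85), 2))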


Subjects
Speech Perception, Voice, Humans, Language, Recognition (Psychology)
17.
JAMA ; 331(15): 1259-1261, 2024 04 16.
Article in English | MEDLINE | ID: mdl-38517420

ABSTRACT

In this Medical News article, Edward Chang, MD, chair of the department of neurological surgery at the University of California, San Francisco Weill Institute for Neurosciences, joins JAMA Editor in Chief Kirsten Bibbins-Domingo, PhD, MD, MAS, to discuss the potential for AI to revolutionize communication for those unable to speak due to aphasia.


Subjects
Aphasia, Artificial Intelligence, Avatar, Speech, Voice, Humans, Speech/physiology, Voice/physiology, Voice Quality, Aphasia/etiology, Aphasia/therapy, Equipment and Supplies
18.
J Dr Nurs Pract ; 17(1): 3-10, 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38538113

ABSTRACT

Background: Many health professionals report feeling uncomfortable talking with patients who hear voices. Patients who hear voices report feeling a lack of support and empathy from emergency nurses. A local emergency department reported a need for training for nurses in the care of behavioral health patients. Objective: The aim of this study is to implement a quality improvement project using a hearing voices simulation. Empathy was measured using the Toronto Empathy Questionnaire, and a post-intervention survey was used to evaluate emergency nurses' perception of the professional development session. Methods: The quality improvement project included the implementation of a hearing voices simulation with emergency nurses. A paired t-test was used to determine the differences in the nurses' empathy levels pre- and post-simulation. Qualitative data were collected on the nurses' experience during the simulation debriefing. A Likert-style questionnaire was used to collect data on the nurses' evaluation of the simulation. Results: The hearing voices simulation produced a statistically significant increase (p < .00) in empathy from baseline (M = 47.95, SD = 6.55) to post-intervention empathy scores (M = 48.93, SD = 6.89). The results of the post-simulation survey indicated that nurses felt the hearing voices simulation was useful (n = 100; 98%) and helped them feel more empathetic toward patients who hear voices (n = 98; 96%). Conclusions: Using a hearing voices simulation may help emergency nurses feel more empathetic toward behavioral health patients who hear voices. Implications for Nursing: Through the implementation of a hearing voices simulation, clinical staff educators can provide support to staff nurses in the care of behavioral health patients.


Subjects
Empathy, Voice, Humans, Hallucinations, Emotions, Hearing
19.
BMJ Open ; 14(2): e076998, 2024 Feb 24.
Article in English | MEDLINE | ID: mdl-38401896

ABSTRACT

INTRODUCTION: Over the past decade, several machine learning (ML) algorithms have been investigated to assess their efficacy in detecting voice disorders. Literature indicates that ML algorithms can detect voice disorders with high accuracy. This suggests that ML has the potential to assist clinicians in the analysis and treatment outcome evaluation of voice disorders. However, despite numerous research studies, none of the algorithms have been sufficiently reliable to be used in clinical settings. Through this review, we aim to identify critical issues that have inhibited the use of ML algorithms in clinical settings by identifying standard audio tasks, acoustic features, processing algorithms and environmental factors that affect the efficacy of those algorithms. METHODS: We will search the following databases: Web of Science, Scopus, Compendex, CINAHL, Medline, IEEE Xplore and Embase. Our search strategy has been developed with the assistance of the university library staff to accommodate the different syntactical requirements. The literature search will include the period between 2013 and 2023, and will be confined to articles published in English. We will exclude editorials, ongoing studies and working papers. The selection, extraction and analysis of the search data will be conducted using the 'Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for scoping reviews' system. The same system will also be used for the synthesis of the results. ETHICS AND DISSEMINATION: This scoping review does not require ethics approval as the review solely consists of peer-reviewed publications. The findings will be presented in peer-reviewed publications related to voice pathology.


Subjects
Voice Disorders, Voice, Humans, Voice Disorders/diagnosis, Algorithms, MEDLINE, Machine Learning, Systematic Reviews as Topic, Review Literature as Topic
20.
Chaos ; 34(2)2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38386906

ABSTRACT

In humans, ventricular folds are located superiorly to the vocal folds. Under special circumstances such as voice pathology or singing, they vibrate together with the vocal folds to contribute to the production of voice. In the present study, experimental data measured from physical models of the vocal and ventricular folds were analyzed in the light of nonlinear dynamics. The physical models provide a useful experimental framework to study the biomechanics of human vocalizations. Of particular interest in this experiment are co-oscillations of the vocal and ventricular folds, occasionally accompanied by irregular dynamics. We show that such a system can be regarded as two coupled oscillators, which give rise to various cooperative behaviors such as synchronized oscillations with a 1:1 or 1:2 frequency ratio and desynchronized oscillations with torus or chaos. The insight gained from the view of nonlinear dynamics should be of significant use for the diagnosis of voice pathologies, such as ventricular fold dysphonia.
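The "two coupled oscillators" picture described above can be illustrated with a generic toy model: two van der Pol oscillators with different natural frequencies and weak linear coupling, whose dominant-frequency ratio indicates 1:1 or 1:2 synchronization versus desynchronized (torus or chaotic) motion. This sketch is illustrative only and is not the physical vocal/ventricular fold model analyzed in the paper.

import numpy as np
from scipy.integrate import solve_ivp

def coupled_vdp(t, state, mu=1.0, w1=1.0, w2=2.05, k=0.3):
    x1, v1, x2, v2 = state
    dv1 = mu * (1 - x1**2) * v1 - w1**2 * x1 + k * (x2 - x1)
    dv2 = mu * (1 - x2**2) * v2 - w2**2 * x2 + k * (x1 - x2)
    return [v1, dv1, v2, dv2]

sol = solve_ivp(coupled_vdp, (0, 200), [0.1, 0.0, 0.2, 0.0], max_step=0.01)

# Discard the transient, then compare the oscillators' dominant frequencies;
# a near-rational ratio (1:1, 1:2) suggests synchronization, otherwise
# quasi-periodic (torus) or chaotic behavior.
x1 = sol.y[0][sol.t > 100]
x2 = sol.y[2][sol.t > 100]
f1 = np.abs(np.fft.rfft(x1 - x1.mean()))[1:].argmax() + 1
f2 = np.abs(np.fft.rfft(x2 - x2.mean()))[1:].argmax() + 1
print(f"dominant-frequency ratio: {f2 / f1:.2f}")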


Subjects
Vocal Cords, Voice, Humans, Nonlinear Dynamics, Biomechanical Phenomena