Results 1 - 15 of 15
1.
J Acoust Soc Am ; 155(1): 44-55, 2024 01 01.
Article in English | MEDLINE | ID: mdl-38174965

ABSTRACT

In speech production research, talkers often perform a speech task several times per recording session with different speaking styles or in different environments. For example, Lombard speech studies typically have talkers speak in several different noise conditions. However, it is unknown to what degree simple repetition of a speech task affects speech acoustic characteristics or whether repetition effects might offset or exaggerate effects of speaking style or environment. The present study assessed speech acoustic changes over four within-session repetitions of a speech production task set performed with two speaking styles recorded in separate sessions: conversational and clear speech. In each style, ten talkers performed a set of three speech tasks four times. Speaking rate, median fundamental frequency, fundamental frequency range, and mid-frequency spectral energy for read sentences were measured and compared across test blocks both within-session and between the two styles. Results indicate that statistically significant changes can occur from one repetition of a speech task to the next, even with a brief practice set and especially in the conversational style. While these changes were smaller than speaking style differences, these findings support using a complete speech set for training while talkers acclimate to the task and to the laboratory environment.


Subjects
Speech Perception, Speech, Acoustics, Noise/adverse effects, Speech Intelligibility
2.
J Appl Microbiol ; 132(3): 1856-1865, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34787955

ABSTRACT

AIMS: This study assessed the effects of high-energy visible (HEV) light on the survival rates of three bacteria commonly found in middle ear infections (otitis media): Streptococcus pneumoniae, Moraxella catarrhalis, and Haemophilus influenzae. METHODS AND RESULTS: Bacteria were cultured and then subjected to a single, 4-h treatment of 405 nm wavelength light at two different intensities. All three bacterial species were susceptible to the light at clinically significant rates (>99.9% reduction). Bacteria were susceptible to the HEV light in a dose-dependent manner (lower survival rates with increased intensity and duration of exposure). CONCLUSIONS: The results suggest that HEV light may provide a non-surgical, non-pharmaceutical approach to the therapeutic treatment of otitis media. SIGNIFICANCE AND IMPACT OF THE STUDY: Given the growing concerns surrounding antibiotic resistance, this study demonstrates a rapid, alternative method for effective inactivation of bacterial pathogens partly responsible for instances of otitis media.
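The ">99.9% reduction" reported above maps directly onto the log-reduction units common in disinfection work. A minimal sketch of that conversion (an illustrative calculation, not the study's own analysis):

```python
import math

def log_reduction(percent_killed):
    """Log10 reduction corresponding to a given percent of cells inactivated."""
    surviving_fraction = 1.0 - percent_killed / 100.0
    return -math.log10(surviving_fraction)

# Killing 99.9% of cells leaves 0.1% surviving, i.e. at least a 3-log reduction.
print(round(log_reduction(99.9), 3))   # → 3.0
```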


Assuntos
Otite Média com Derrame , Otite Média , Haemophilus influenzae , Humanos , Luz , Moraxella catarrhalis , Otite Média/microbiologia , Otite Média/terapia , Otite Média com Derrame/microbiologia
3.
Ear Hear ; 43(2): 398-407, 2022.
Article in English | MEDLINE | ID: mdl-34310412

ABSTRACT

OBJECTIVES: Individuals with cochlear implants (CIs) show reduced word and auditory emotion recognition abilities relative to their peers with normal hearing. Modern CI processing strategies are designed to preserve acoustic cues requisite for word recognition rather than those cues required for accessing other signal information (e.g., talker gender or emotional state). While word recognition is undoubtedly important for communication, the inaccessibility of this additional signal information in speech may lead to negative social experiences and outcomes for individuals with hearing loss. This study aimed to evaluate whether the emphasis on word recognition preservation in CI processing has unintended consequences on the perception of other talker information, such as emotional state. DESIGN: Twenty-four young adult listeners with normal hearing listened to sentences and either reported a target word in each sentence (word recognition task) or selected the emotion of the talker (emotion recognition task) from a list of options (Angry, Calm, Happy, and Sad). Sentences were blocked by task type (emotion recognition versus word recognition) and processing condition (unprocessed versus 8-channel noise vocoder) and presented randomly within the block at three signal-to-noise ratios (SNRs) in a background of speech-shaped noise. Confusion matrices showed the number of errors in emotion recognition by listeners. RESULTS: Listeners demonstrated better emotion recognition performance than word recognition performance at the same SNR. Unprocessed speech resulted in higher recognition rates than vocoded stimuli. Recognition performance (for both words and emotions) decreased with worsening SNR. Vocoding speech resulted in a greater negative impact on emotion recognition than it did for word recognition. 
CONCLUSIONS: These data confirm prior work that suggests that in background noise, emotional prosodic information in speech is easier to recognize than word information, even after simulated CI processing. However, emotion recognition may be more negatively impacted by background noise and CI processing than word recognition. Future work could explore CI processing strategies that better encode prosodic information and investigate this effect in individuals with CIs as opposed to vocoded simulation. This study emphasized the need for clinicians to consider not only word recognition but also other aspects of speech that are critical to successful social communication.
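The 8-channel noise vocoder named in the design is a standard way to simulate cochlear-implant processing: each band's slowly varying amplitude envelope is kept, while the fine structure is replaced with band-limited noise. A minimal sketch, with band edges, FFT-mask filtering, and envelope smoothing chosen for illustration rather than taken from the study:

```python
import numpy as np

def noise_vocode(signal, fs, n_channels=8, f_lo=100.0, f_hi=7000.0):
    """Keep each band's amplitude envelope; replace its fine structure
    with band-limited noise (a crude cochlear-implant simulation)."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    noise = np.random.default_rng(0).standard_normal(n)
    sig_f, noi_f = np.fft.rfft(signal), np.fft.rfft(noise)
    win = int(fs * 0.01)                      # ~10 ms envelope smoother
    kernel = np.ones(win) / win
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(sig_f * band_mask, n)
        envelope = np.convolve(np.abs(band), kernel, mode="same")
        carrier = np.fft.irfft(noi_f * band_mask, n)  # band-limited noise
        out += envelope * carrier
    return out

fs = 16000
t = np.arange(fs) / fs
# A speech-like test signal: a 440 Hz tone with a slow amplitude modulation
speechlike = np.sin(2 * np.pi * 440 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
vocoded = noise_vocode(speechlike, fs)
```

Real vocoder implementations typically use filter banks (e.g., Butterworth) and envelope extraction by rectification plus low-pass filtering; this FFT-based version only illustrates the envelope-plus-noise-carrier idea.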


Assuntos
Implante Coclear , Implantes Cocleares , Percepção da Fala , Emoções , Humanos , Fala , Adulto Jovem
4.
J Acoust Soc Am ; 151(5): 3234, 2022 05.
Article in English | MEDLINE | ID: mdl-35649947

ABSTRACT

Due to global shifts at educational institutions from in-person courses to online formats caused by the COVID-19 pandemic, the current study aimed to assess whether currently available virtual audiology education tools are helpful for acquiring necessary audiology skills and knowledge, from the perspective of both educators and students. A remote survey was therefore developed and distributed to faculty and students in undergraduate communication sciences and disorders programs and graduate audiology programs. Although participation was somewhat limited, the trends observed in the survey results suggested that the majority of both educators and students found the subset of virtual tools easy to use, that these tools improved teaching methods and learning outcomes, and that these tools would likely be used again.


Assuntos
Audiologia , COVID-19 , Audiologia/educação , COVID-19/epidemiologia , Docentes , Humanos , Pandemias , Estudantes
5.
J Acoust Soc Am ; 151(5): 3031, 2022 05.
Article in English | MEDLINE | ID: mdl-35649917

ABSTRACT

One's ability to express confidence is critical to achieving one's goals in a social context, such as commanding respect from others, establishing higher social status, and persuading others. How individuals perceive confidence may be shaped by the socio-indexical cues produced by the speaker. In the current production/perception study, we asked four speakers (two cisgender women and two cisgender men) to answer trivia questions under three speaking contexts: natural, overconfident, and underconfident (i.e., lacking confidence). An evaluation of the speakers' acoustics indicated that the speakers significantly varied their acoustic cues as a function of speaking context and that the women and men had significantly different acoustic cues. The speakers' answers to the trivia questions in the three contexts (natural, overconfident, underconfident) were then presented to listeners (N = 26) in a social judgment task using a computer mouse-tracking paradigm. Listeners were sensitive to the speakers' acoustic modulations of confidence and differentially interpreted these cues based on the perceived gender of the speaker, thereby impacting listeners' cognition and social decision making. We consider, then, how listeners' social judgments about confidence were impacted by gender stereotypes about women and men arising from social, heuristic-based processes.


Subjects
Speech Perception, Voice, Acoustics, Cues, Female, Humans, Judgment
6.
J Acoust Soc Am ; 145(6): 3410, 2019 06.
Article in English | MEDLINE | ID: mdl-31255138

ABSTRACT

In the current study, an interactive approach is used to explore possible contributors to the misattributions listeners make about female talkers' expression of confidence. To do this, the expression and identification of confidence were evaluated through talker-specific factors (e.g., talker knowledge and affective acoustic modulation) and listener-specific factors (e.g., the interaction between talker acoustic cues and listener knowledge). Talker and listener contexts were manipulated by imposing a social constraint on talkers and withholding information from listeners. Results indicated that listeners were sensitive to acoustic information produced by the female talkers in this study. However, when world knowledge and acoustics competed, listeners' judgments of talker confidence were less accurate. In fact, acoustic cues to female talker confidence were used more accurately by listeners as a cue to perceived confidence when relevant world knowledge was missing. By targeting speech dynamics between female talkers and both female and male listeners, the current study provides a better understanding of how confidence is realized acoustically and, perhaps more importantly, how those cues may be interpreted or misinterpreted by listeners.


Subjects
Auditory Perception/physiology, Comprehension/physiology, Cues, Speech Perception/physiology, Acoustic Stimulation/methods, Acoustics, Adult, Female, Humans, Male
7.
Front Psychol ; 14: 1125164, 2023.
Article in English | MEDLINE | ID: mdl-38155698

ABSTRACT

Introduction: Socio-indexical cues to gender and vocal affect often interact and sometimes lead listeners to make differential judgements of affective intent based on the gender of the speaker. Previous research suggests that rising intonation is a common cue that both women and men produce to communicate lack of confidence, but listeners are more sensitive to this cue when it is produced by women. Some speech perception theories assume that listeners will track conditional statistics of speech and language cues (e.g., frequency of the socio-indexical cues to gender and affect) in their listening and communication environments during speech perception. It is currently less clear whether these conditional statistics impact listener ratings when context varies (e.g., number of talkers). Methods: To test this, we presented listeners with vocal utterances from one female and one male-pitched voice (single talker condition) or many female/male-pitched voices (4 female voices; 4 female voices pitch-shifted to a male range) to examine how they impacted perceptions of talker confidence. Results: Results indicated that when one voice was evaluated, listeners defaulted to the gender stereotype that the female voice using rising intonation (a cue to lack of confidence) was less confident than the male-pitched voice (using the same cue). However, in the multi-talker condition, this effect disappeared and listeners rated the confidence of the female and male-pitched voices equally. Discussion: Findings support dual process theories of information processing, such that listeners may rely on heuristics when speech perception is devoid of context, but when there are no differentiating qualities across talkers (regardless of gender), listeners may be ideal adapters who focus on only the relevant cues.

8.
Am J Audiol ; 31(3S): 1052-1058, 2022 Sep 21.
Article in English | MEDLINE | ID: mdl-35985309

ABSTRACT

PURPOSE: With the rapid development of new technologies and resources, many avenues exist to adapt and grow as a profession. Embracing change can lead to growth, evolution, and new opportunities. Audiologists have the potential to harness many of these technological advancements to improve patient health care. Adoption and incorporation of these new technologies will likely benefit educational experiences, research methods, clinical practice, and clinical outcomes. METHOD: This commentary highlights some historical perspectives and accepted practices while illustrating opportunities to embrace new ideas and technologies. We also provide examples of how such adoption may yield positive outcomes. Specifically, we address embracing technology in audiology education, how artificial intelligence may influence patient performance in realistic listening scenarios, the convergence between hearing aids and consumer electronics, and the emergence of audiology telehealth services and their inclusion in clinical practice. Models of change are also discussed and related to audiology. CONCLUSION: This commentary aims to be a call to action for the entire profession of audiology to consider conscientiously the adoption of useful, evidence-based technological advancements in education, research, and clinical practice.


Assuntos
Audiologia , Auxiliares de Audição , Inteligência Artificial , Audiologistas , Audiologia/métodos , Escolaridade , Humanos
9.
Brain Sci ; 12(2)2022 Feb 02.
Article in English | MEDLINE | ID: mdl-35203973

ABSTRACT

A previous investigation demonstrated differences between younger adult normal-hearing listeners and older adult hearing-impaired listeners in the perceived emotion of clear and conversational speech. Specifically, clear speech sounded angry more often than conversational speech for both groups, but the effect was smaller for the older listeners. These listener groups differed by two confounding factors, age (younger vs. older adults) and hearing status (normal vs. impaired). The objective of the present study was to evaluate the contributions of aging and hearing loss to the reduced perception of anger in older adults with hearing loss. We investigated perceived anger in clear and conversational speech in younger adults with and without a simulated age-related hearing loss, and in older adults with normal hearing. Younger adults with simulated hearing loss performed similarly to normal-hearing peers, while normal-hearing older adults performed similarly to hearing-impaired peers, suggesting that aging was the primary contributor to the decreased anger perception seen in previous work. These findings confirm reduced anger perception for older adults compared to younger adults, though the significant speaking style effect-regardless of age and hearing status-highlights the need to identify methods of producing clear speech that is emotionally neutral or positive.

10.
J Speech Lang Hear Res ; 64(5): 1758-1772, 2021 05 11.
Article in English | MEDLINE | ID: mdl-33830784

ABSTRACT

Purpose Word recognition in quiet and in background noise has been thoroughly investigated in previous research to establish segmental speech recognition performance as a function of stimulus characteristics (e.g., audibility). Similar methods to investigate recognition performance for suprasegmental information (e.g., acoustic cues used to make judgments of talker age, sex, or emotional state) have not been performed. In this work, we directly compared emotion and word recognition performance in different levels of background noise to identify psychoacoustic properties of emotion recognition (globally and for specific emotion categories) relative to word recognition. Method Twenty young adult listeners with normal hearing listened to sentences and either reported a target word in each sentence or selected the emotion of the talker from a list of options (angry, calm, happy, and sad) at four signal-to-noise ratios in a background of white noise. Psychometric functions were fit to the recognition data and used to estimate thresholds (midway points on the function) and slopes for word and emotion recognition. Results Thresholds for emotion recognition were approximately 10 dB better than word recognition thresholds, and slopes for emotion recognition were half of those measured for word recognition. Low-arousal emotions had poorer thresholds and shallower slopes than high-arousal emotions, suggesting greater confusion when distinguishing low-arousal emotional speech content. Conclusions Communication of a talker's emotional state continues to be perceptible to listeners in competitive listening environments, even after words are rendered inaudible. The arousal of emotional speech affects listeners' ability to discriminate between emotion categories.
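The threshold-and-slope analysis described in the Method amounts to fitting a psychometric function to proportion-correct data and reading off its midpoint and steepness. The sketch below uses a logistic function and a coarse grid search on hypothetical data that merely echo the abstract's pattern (emotion threshold roughly 10 dB better, slope roughly half); the study's actual stimuli, fitting procedure, and values are not reproduced here:

```python
import numpy as np

def logistic(snr, threshold, slope):
    """Proportion correct vs. SNR (dB); 'threshold' is the midway point
    on the function and 'slope' its steepness there."""
    return 1.0 / (1.0 + np.exp(-slope * (snr - threshold)))

def fit_psychometric(snrs, p_correct):
    """Coarse least-squares grid search for threshold and slope; a
    stand-in for a proper maximum-likelihood psychometric fit."""
    best_t, best_s, best_err = 0.0, 0.0, np.inf
    for t in np.arange(-20.0, 10.0, 0.1):
        for s in np.arange(0.05, 2.0, 0.05):
            err = np.sum((logistic(snrs, t, s) - p_correct) ** 2)
            if err < best_err:
                best_t, best_s, best_err = t, s, err
    return best_t, best_s

# Hypothetical recognition scores at four SNRs, generated so the
# emotion function sits ~10 dB below the word function at half the slope.
snrs = np.array([-15.0, -10.0, -5.0, 0.0])
word_scores = logistic(snrs, -2.0, 0.6)
emotion_scores = logistic(snrs, -12.0, 0.3)
t_word, s_word = fit_psychometric(snrs, word_scores)
t_emotion, s_emotion = fit_psychometric(snrs, emotion_scores)
```

A lower (more negative) threshold means recognition survives at poorer SNRs, and a shallower slope means performance degrades more gradually as noise increases.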


Subjects
Speech Perception, Emotions, Humans, Language, Noise, Psychoacoustics, Young Adult
11.
J Am Acad Audiol ; 32(1): 10-15, 2021 01.
Article in English | MEDLINE | ID: mdl-33321538

ABSTRACT

BACKGROUND: Older adults with hearing loss often report difficulty understanding British-accented speech, such as in television or movies, after having understood such speech in the past. A few studies have examined the intelligibility of various United States regional and non-U.S. varieties of English for American listeners, but only for young adults with normal hearing. PURPOSE: This preliminary study sought to determine whether British-accented sentences were less intelligible than American-accented sentences for American younger and older adults with normal hearing and for older adults with hearing loss. RESEARCH DESIGN: A mixed-effects design, with talker accent and listening condition as within-subjects factors and listener group as a between-subjects factor. STUDY SAMPLE: Three listener groups consisting of 16 young adults with normal hearing, 15 older adults with essentially normal hearing, and 22 older adults with sloping sensorineural hearing loss. DATA COLLECTION AND ANALYSIS: Sentences produced by one General American English speaker and one British English speaker were presented to listeners at 70 dB sound pressure level in quiet and in babble. Signal-to-noise ratios for the latter varied among the listener groups. Responses were typed into a textbox and saved on each trial. Effects of accent, listening condition, and listener group were assessed using linear mixed-effects models. RESULTS: American- and British-accented sentences were equally intelligible in quiet, but intelligibility in noise was lower for British-accented sentences than American-accented sentences. These intelligibility differences were similar for all three groups. CONCLUSION: British-accented sentences were less intelligible than those produced by an American talker, but only in noise.


Subjects
Hearing Loss, Speech Perception, Aged, Humans, Language, Noise, Phonetics, United States, Young Adult
12.
J Speech Lang Hear Res ; 63(4): 1083-1092, 2020 04 27.
Article in English | MEDLINE | ID: mdl-32259460

ABSTRACT

Purpose This preliminary investigation compared effects of time compression on intelligibility for male versus female talkers. We hypothesized that time compression would have a greater effect for female talkers. Method Sentence materials from four talkers (two males) were time compressed, and original-speed and time-compressed speech materials were presented in a background of 12-talker babble to young adult listeners with normal hearing. Each talker/processing condition was heard by eight listeners (total N = 64). Generalized linear mixed-effects models were used to determine the effects of and interaction between processing condition and talker sex on keyword intelligibility. Additional post hoc analyses examined whether processing condition effects were related to talker vowel space and word frequency. Results For original-speed sentences, female and male talkers were essentially equally intelligible. Time compression reduced intelligibility for all talkers, but the effect was significantly greater for the female talkers. Supplementary analyses revealed that the effect of time compression depended on both talker vowel space and word frequency: The detrimental effect decreased significantly as word frequency and vowel space increased. Word frequency effects were also greater overall for talkers with larger vowel spaces. Conclusions While the small talker sample limits conclusions about the effects of talker sex, the secondary analyses suggest that intelligibility of talkers with larger vowel spaces is less susceptible to the negative effect of time compression, especially for high-frequency words.


Subjects
Speech Intelligibility, Speech Perception, Female, Humans, Language, Male, Young Adult
13.
J Speech Lang Hear Res ; 62(11): 4015-4029, 2019 11 22.
Article in English | MEDLINE | ID: mdl-31652413

ABSTRACT

Purpose Emotion classification for auditory stimuli typically employs 1 of 2 approaches (discrete categories or emotional dimensions). This work presents a new emotional speech set, compares these 2 classification methods for emotional speech stimuli, and emphasizes the need to consider the entire communication model (i.e., the talker, message, and listener) when studying auditory emotion portrayal and perception. Method Emotional speech from male and female talkers was evaluated using both categorical and dimensional rating methods. Ten young adult listeners (ages 19-28 years) evaluated stimuli recorded in 4 emotional speaking styles (Angry, Calm, Happy, and Sad). Talker and listener factors were examined for potential influences on emotional ratings using categorical and dimensional rating methods. Listeners rated stimuli by selecting an emotion category, rating the activation and pleasantness, and indicating goodness of category fit. Results Discrete ratings were generally consistent with dimensional ratings for speech, with accuracy for emotion recognition well above chance. As stimuli approached dimensional extremes of activation and pleasantness, listeners were more confident in their category selection, indicative of a hybrid approach to emotion classification. Female talkers were rated as more activated than male talkers, and female listeners gave higher ratings of activation compared to male listeners, confirming gender differences in emotion perception. Conclusion A hybrid model for auditory emotion classification is supported by the data. Talker and listener factors, such as gender, were found to impact the ratings of emotional speech and must be considered alongside stimulus factors in the design of future studies of emotion.
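The hybrid classification the Conclusion supports, in which discrete categories anchor extremes of the dimensional (activation x pleasantness) space, can be illustrated by mapping a dimensional rating to its nearest category centroid. The centroid coordinates below are invented for illustration and are not the study's values:

```python
import numpy as np

# Assumed category centroids in (activation, pleasantness) space on a
# hypothetical 1-9 rating scale; high-arousal emotions sit near the top.
CENTROIDS = {
    "Angry": (7.5, 2.0),
    "Calm": (2.5, 6.5),
    "Happy": (7.0, 7.5),
    "Sad": (2.0, 2.5),
}

def nearest_category(activation, pleasantness):
    """Map a dimensional rating to its closest discrete emotion category."""
    point = np.array([activation, pleasantness])
    return min(CENTROIDS,
               key=lambda c: np.linalg.norm(point - np.array(CENTROIDS[c])))

# Ratings near a dimensional extreme fall unambiguously into one category,
# mirroring the higher category confidence the abstract reports there.
print(nearest_category(8.0, 1.5))   # → Angry
```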


Subjects
Behavior, Emotions, Speech, Adult, Female, Humans, Male, Young Adult
14.
J Speech Lang Hear Res ; 61(1): 159-173, 2018 01 22.
Article in English | MEDLINE | ID: mdl-29270637

ABSTRACT

Purpose: The purpose of this study is to examine talker differences for subjectively rated speech clarity in clear versus conversational speech, to determine whether ratings differ for young adults with normal hearing (YNH listeners) and older adults with hearing impairment (OHI listeners), and to explore effects of certain talker characteristics (e.g., gender) on perceived clarity. Relationships among clarity ratings and other speech perceptual and acoustic measures were also explored. Method: Twenty-one YNH and 15 OHI listeners rated clear and conversational sentences produced by 41 talkers on a scale of 1 (lowest possible clarity) to 7 (highest possible clarity). Results: While clarity ratings varied significantly among talkers, listeners rated clear speech significantly clearer than conversational speech for all but 1 talker. OHI and YNH listeners gave similar ratings for conversational speech, but ratings for clear speech were significantly higher for OHI listeners. Talker gender effects differed for YNH and OHI listeners. Ratings of clear speech varied among subgroups of talkers with different amounts of experience talking to people with hearing loss. Conclusions: Perceived clarity varies widely among talkers, but nearly all produce clear speech that sounds significantly clearer than their conversational speech. Few differences were seen between OHI and YNH listeners except the effect of talker gender.


Subjects
Aging/psychology, Hearing Loss/psychology, Speech Perception, Adolescent, Adult, Aged, Aged, 80 and over, Auditory Threshold, Female, Humans, Male, Middle Aged, Psycholinguistics, Sex Factors, Speech, Young Adult
15.
J Speech Lang Hear Res ; 60(8): 2271-2280, 2017 08 16.
Article in English | MEDLINE | ID: mdl-28687824

ABSTRACT

Purpose: In this study, we investigated the emotion perceived by young listeners with normal hearing (YNH listeners) and older adults with hearing impairment (OHI listeners) when listening to speech produced conversationally or in a clear speaking style. Method: The first experiment included 18 YNH listeners, and the second included 10 additional YNH listeners along with 20 OHI listeners. Participants heard sentences spoken conversationally and clearly. Participants selected the emotion they heard in the talker's voice using a 6-alternative, forced-choice paradigm. Results: Clear speech was judged as sounding angry and disgusted more often and happy, fearful, sad, and neutral less often than conversational speech. Talkers whose clear speech was judged to be particularly clear were also judged as sounding angry more often and fearful less often than other talkers. OHI listeners reported hearing anger less often than YNH listeners; however, they still judged clear speech as angry more often than conversational speech. Conclusions: Speech spoken clearly may sound angry more often than speech spoken conversationally. Although perceived emotion varied between YNH and OHI listeners, judgments of anger were higher for clear speech than conversational speech for both listener groups. Supplemental Materials: https://doi.org/10.23641/asha.5170717.


Subjects
Emotions, Hearing Loss/psychology, Judgment, Speech Perception, Adolescent, Adult, Aged, Female, Humans, Linear Models, Male, Neuropsychological Tests, Psycholinguistics, Social Perception, Young Adult