Results 1 - 4 of 4
1.
Sci Rep; 13(1): 20271, 2023 Nov 20.
Article in English | MEDLINE | ID: mdl-37985887

ABSTRACT

During conversations, people coordinate simultaneous channels of verbal and nonverbal information to hear and be heard. But background noise at levels such as those found in cafes and restaurants can be a barrier to conversational success. Here, we used speech and motion tracking to reveal the reciprocal processes people use to communicate in noisy environments. Conversations between twenty-two pairs of typical-hearing adults were elicited under different conditions of background noise, while standing or sitting around a table. With the onset of background noise, pairs rapidly adjusted their interpersonal distance and speech level, with the degree of initial change dependent on noise level and talker configuration. Following this transient phase, pairs settled into a sustaining phase in which reciprocal speech and movement-based coordination processes synergistically maintained effective communication, again with the magnitude and stability of these coordination processes covarying with noise level and talker configuration. Finally, as communication breakdowns increased at high noise levels, pairs exhibited resetting behaviors to help restore communication, decreasing interpersonal distance and/or increasing speech levels in response to communication breakdowns. Approximately 78 dB SPL defined a threshold beyond which behavioral processes were no longer sufficient to maintain effective conversation, and communication breakdowns rapidly increased.


Subject(s)
Communication, Speech Perception, Adult, Humans, Hearing, Noise, Speech, Speech Perception/physiology
2.
J Acoust Soc Am; 149(4): 2896, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33940874

ABSTRACT

Everyday environments impose acoustical conditions on speech communication that require interlocutors to adapt their behavior to be able to hear and to be heard. Past research has focused mainly on the adaptation of speech level, while few studies have investigated how interlocutors adapt their conversational distance as a function of noise level. Similarly, no study has tested the interaction between distance and speech level adaptation in noise. In the present study, participant pairs held natural conversations while binaurally listening to identical noise recordings of different realistic environments (range of 53-92 dB sound pressure level) over acoustically transparent headphones. Conversations took place in standing or sitting (at a table) conditions. Interlocutor distances were tracked using wireless motion-capture equipment, which allowed subjects to move closer to or farther from each other. The results show that talkers adapt their voices mainly according to the noise conditions and much less according to distance. Distance adaptation was highest in the standing condition. Consequently, mainly in the loudest environments, listeners in the standing condition were able to improve the signal-to-noise ratio (SNR) at the receiver location compared with the sitting condition, so that the SNR became less negative. Analytical approximations are provided for the conversational distance as well as the receiver-related speech level and SNR.
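As a rough illustration of how a receiver-related SNR of this kind depends on noise level and interpersonal distance, the sketch below combines a simple Lombard-style voice-level adjustment with free-field spreading loss (6 dB per doubling of distance). The slope, reference levels, and function names are illustrative assumptions, not the paper's fitted approximations.

```python
# Illustrative sketch only: a toy model of receiver-related speech level and SNR
# as a function of background noise level and interlocutor distance.
# The Lombard slope (~0.5 dB/dB) and the 60 dB SPL quiet voice level at 1 m are
# assumed round numbers, not values reported in the paper.
import math

def speech_level_at_1m(noise_db_spl: float,
                       quiet_level_db: float = 60.0,
                       lombard_slope: float = 0.5,
                       noise_onset_db: float = 45.0) -> float:
    """Talker's speech level at 1 m, raised in noise (Lombard effect)."""
    boost = max(0.0, noise_db_spl - noise_onset_db) * lombard_slope
    return quiet_level_db + boost

def receiver_snr(noise_db_spl: float, distance_m: float) -> float:
    """SNR at the listener, assuming free-field 6 dB loss per doubling of distance."""
    level_at_receiver = speech_level_at_1m(noise_db_spl) - 20.0 * math.log10(distance_m)
    return level_at_receiver - noise_db_spl

if __name__ == "__main__":
    for noise in (53, 70, 80, 92):      # noise range used in the study (dB SPL)
        for d in (0.5, 1.0, 1.6):       # sitting-like vs standing-like distances
            print(f"noise {noise} dB SPL, distance {d} m: "
                  f"SNR {receiver_snr(noise, d):+.1f} dB")
```

With these assumed values, the SNR turns negative only in the loudest scenes, and halving the conversational distance recovers about 6 dB.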


Subject(s)
Speech Perception, Communication, Humans, Noise, Perceptual Masking, Signal-to-Noise Ratio
3.
Trends Hear; 23: 2331216519881346, 2019.
Article in English | MEDLINE | ID: mdl-31808369

ABSTRACT

The concept of complex acoustic environments has appeared, in different variations, in several unrelated research areas within acoustics. Based on a review of the usage and evolution of this concept in the literature, a relevant framework was developed, which includes nine broad characteristics that are thought to drive the complexity of acoustic scenes. The framework was then used to identify the characteristics most relevant to stimuli of realistic, everyday acoustic scenes: multiple sources, source diversity, reverberation, and the listener's task. The effect of these characteristics on perceived scene complexity was then evaluated in an exploratory study that reproduced the stimuli with a three-dimensional loudspeaker array inside an anechoic chamber. Sixty-five subjects listened to the scenes and rated 29 attributes for each one, including complexity, both with and without target speech in the scenes. The data were analyzed using three-way principal component analysis with a (2 × 3 × 2) Tucker3 model in the dimensions of scales (or ratings), scenes, and subjects, explaining 42% of the variation in the data. "Comfort" and "variability" were the dominant scale components, which together span the perceived complexity. Interaction effects were observed; for example, the additional task of attending to target speech shifted the complexity rating closer to the comfort scale. Also, speech contained in the background scenes introduced a second subject component, which suggests that some subjects are more distracted than others by background speech when listening to target speech. The results are interpreted in light of the proposed framework.
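A three-way decomposition of this kind can be sketched with the tensorly library; the snippet below fits a (2 × 3 × 2) Tucker3 model to a random placeholder scales × scenes × subjects array. The tensor shape and data are assumptions for illustration only, not the study's ratings.

```python
# Minimal sketch (assumed setup, not the study's data): a Tucker3 decomposition of a
# three-way ratings array (scales x scenes x subjects) with ranks (2, 3, 2),
# using the tensorly library on random placeholder data.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

rng = np.random.default_rng(0)
# 29 rating scales, 8 scenes (placeholder count), 65 subjects
ratings = tl.tensor(rng.normal(size=(29, 8, 65)))

core, factors = tucker(ratings, rank=[2, 3, 2])   # (2 x 3 x 2) Tucker3 model

# Fraction of variation captured by the low-rank model
# (the paper reports 42% for the real ratings).
reconstruction = tl.tucker_to_tensor((core, factors))
explained = 1.0 - tl.norm(ratings - reconstruction) ** 2 / tl.norm(ratings) ** 2
print(f"explained variation: {explained:.1%}")

# factors[0]: scale loadings (e.g. "comfort" and "variability" components)
# factors[2]: subject loadings (a second component can separate listeners
#             who are more distracted by background speech)
```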


Subject(s)
Acoustics, Environment, Speech Perception, Acoustic Stimulation, Auditory Perception, Female, Humans, Male, Theoretical Models
4.
J Acoust Soc Am; 145(1): 349, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30710956

ABSTRACT

Estimating the basic acoustic parameters of conversational speech in noisy real-world conditions has been an elusive task in hearing research. Nevertheless, these data are essential ingredients for speech intelligibility tests and fitting rules for hearing aids. Previous surveys did not provide clear methodology for their acoustic measurements and setups, were opaque about their samples, or did not control for the distance between talker and listener, even though people are known to adapt their distance in noisy conversations. In the present study, conversations were elicited between pairs of people by asking them to play a collaborative game that required them to communicate. While performing this task, the subjects listened to binaural recordings of different everyday scenes, which were presented to them at their original sound pressure level (SPL) via highly open headphones. Their voices were recorded separately using calibrated headset microphones. The subjects were seated inside an anechoic chamber at distances of 1 and 0.5 m. Precise estimates of realistic speech levels and signal-to-noise ratios (SNRs) were obtained for the different acoustic scenes, at broadband and third-octave levels. It is shown that with acoustic background noise above approximately 69 dB SPL at 1 m distance, or 75 dB SPL at 0.5 m, the average SNR can become negative. It is shown through interpolation of the two conditions that if the conversation partners had been allowed to optimize their positions by moving closer to each other, then negative SNRs should only be observed above 75 dB SPL. The implications of the results for speech tests and hearing aid fitting rules are discussed.
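To make the reported crossover points concrete, the sketch below linearly interpolates (in log distance) between the two measured conditions to estimate the noise level at which the average SNR would turn negative at an intermediate conversational distance. The interpolation scheme is an assumption for illustration; the paper derives its estimates from the measured speech levels.

```python
# Illustrative sketch: interpolate the noise level at which the average SNR
# crosses zero for an intermediate conversational distance, given the two
# reported crossover points (~69 dB SPL at 1.0 m and ~75 dB SPL at 0.5 m).
# Linear interpolation in log2(distance) is an assumption, not the paper's method.
import math

CROSSOVERS = {1.0: 69.0, 0.5: 75.0}   # distance (m) -> noise level (dB SPL) where SNR = 0

def zero_snr_noise_level(distance_m: float) -> float:
    """Noise level above which the average SNR is expected to turn negative."""
    d1, d2 = 1.0, 0.5
    x1, x2 = math.log2(d1), math.log2(d2)
    w = (math.log2(distance_m) - x1) / (x2 - x1)
    return CROSSOVERS[d1] + w * (CROSSOVERS[d2] - CROSSOVERS[d1])

print(zero_snr_noise_level(0.7))   # roughly 72 dB SPL for a 0.7 m conversation
```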


Subject(s)
Speech Acoustics, Speech Perception, Adolescent, Adult, Ear/physiology, Female, Head Movements, Humans, Male, Middle Aged, Noise, Signal-to-Noise Ratio