Results 1 - 5 of 5
1.
Front Psychol ; 13: 1074320, 2022.
Article in English | MEDLINE | ID: mdl-36726519

ABSTRACT

Introduction: Previous research has shown that podcasts are most frequently consumed using mobile listening devices across a wide variety of environmental, situational, and social contexts. To date, no studies have investigated how an individual's environmental context might influence their attentional engagement in podcast listening experiences. Improving understanding of the contexts in which episodes of listening take place, and how they might affect listener engagement, could be highly valuable to researchers and producers working in the fields of object-based and personalized media. Methods: An online questionnaire on listening habits and behaviors was distributed to a sample of 264 podcast listeners. An exploratory factor analysis was run to identify factors of environmental context that influence attentional engagement in podcast listening experiences. Five aspects of podcast listening engagement were also defined and measured across the sample. Results: The exploratory factor analysis revealed five factors of environmental context labeled as: outdoors, indoors & at home, evenings, soundscape & at work, and exercise. The aspects of podcast listening engagement provided a comprehensive quantitative account of contemporary podcast listening experiences. Discussion: The results presented support the hypothesis that elements of a listener's environmental context can influence their attentional engagement in podcast listening experiences. The soundscape & at work factor suggests that some listeners actively choose to consume podcasts to mask disturbing stimuli in their surrounding soundscape. Further analysis suggested that the proposed factors of environmental context were positively correlated with the measured aspects of podcast listening engagement. The results are highly pertinent to the fields of podcast studies, mobile listening experiences, and personalized media, and provide a basis for researchers seeking to explore how other forms of listening context might influence attentional engagement.
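The factor-extraction step described in the Methods can be illustrated with a minimal principal-axis-style sketch in Python. The simulated response matrix, loadings, and noise level below are invented stand-ins, not the study's data or its exact procedure:

```python
import numpy as np

# Simulated questionnaire data: 264 respondents x 15 context items
# (item wording and loadings are invented for illustration).
rng = np.random.default_rng(0)
n_resp, n_items, n_factors = 264, 15, 5

true_loadings = rng.normal(size=(n_factors, n_items))
factor_scores = rng.normal(size=(n_resp, n_factors))
responses = factor_scores @ true_loadings \
    + rng.normal(scale=0.5, size=(n_resp, n_items))

# Principal-component extraction from the item correlation matrix:
# the top eigenvectors, scaled by sqrt(eigenvalue), give candidate
# factor loadings; items loading strongly together would then be
# interpreted as one context factor (e.g. "outdoors").
corr = np.corrcoef(responses, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
top = order[:n_factors]
loadings = eigvecs[:, top] * np.sqrt(eigvals[top])

print(loadings.shape)  # (15, 5)
```

In practice a dedicated EFA routine with rotation would follow this extraction step; the sketch only shows where the five-factor structure comes from.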

2.
J Acoust Soc Am ; 145(4): 2770, 2019 Apr.
Article in English | MEDLINE | ID: mdl-31046323

ABSTRACT

The spatial high-frequency extrapolation method extrapolates low-frequency band-limited spatial room impulse responses (SRIRs) to higher frequencies based on a frame-by-frame time/frequency analysis that determines directional reflected components within the SRIR. Such extrapolation can be used to extend finite-difference time domain (FDTD) wave propagation simulations, limited to only relatively low frequencies, to the full audio band. For this bandwidth extrapolation, a boundary absorption weighting function is proposed based on a parametric approximation of the energy decay relief of the SRIR used as the input to the algorithm. Results using examples of both measured and FDTD simulated impulse responses demonstrate that this approach can be applied successfully to a range of acoustic spaces. Objective measures show a close approximation to reverberation time and acceptable early decay time values. Results are verified through accompanying auralizations that demonstrate the plausibility of this approach when compared to the original reference case.
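The energy-decay analysis on which the proposed absorption weighting rests can be illustrated with a classic Schroeder backward-integration sketch. The synthetic impulse response and the T20 fit range below are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def energy_decay_curve(ir):
    """Schroeder backward integration: energy remaining at each
    sample, normalized and expressed in dB."""
    energy = np.cumsum(ir[::-1] ** 2)[::-1]
    return 10 * np.log10(energy / energy[0])

def rt60_from_edc(edc, fs, lo=-5.0, hi=-25.0):
    """Estimate RT60 by fitting a line to the EDC between lo and hi
    dB (a T20 measurement extrapolated to 60 dB of decay)."""
    idx = np.where((edc <= lo) & (edc >= hi))[0]
    t = idx / fs
    slope, _ = np.polyfit(t, edc[idx], 1)  # dB per second
    return -60.0 / slope

# Exponentially decaying noise standing in for a measured RIR:
# the envelope reaches -60 dB at t = rt_true.
fs, rt_true = 48000, 0.6
t = np.arange(int(fs * rt_true * 2)) / fs
rng = np.random.default_rng(1)
ir = rng.normal(size=t.size) * 10 ** (-3 * t / rt_true)

edc = energy_decay_curve(ir)
print(rt60_from_edc(edc, fs))  # close to 0.6
```

The paper's energy decay relief extends this idea to a time/frequency representation, so the parametric approximation is fitted per frequency band rather than broadband as here.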

3.
J Voice ; 32(2): 130-142, 2018 Mar.
Article in English | MEDLINE | ID: mdl-28647430

ABSTRACT

OBJECTIVE: Soprano singers face a number of specific challenges when singing vowels at high frequencies, due to the wide spacing of harmonics in the voice source. The varied and complex techniques used to overcome these are still not fully understood. Magnetic resonance imaging (MRI) has become increasingly popular in recent years for singing voice analysis. This study proposes a new protocol using three-dimensional MRI to investigate the articulatory parameters relevant to resonance tuning, a technique whereby singers alter their vocal tract to shift its resonances nearer to a voice source harmonic, increasing the amplitude of the sound produced. METHODS: The protocol was tested with a single soprano opera singer. Drawing on previous MRI studies, articulatory measurements from three-dimensional MRI images were compared to vocal tract resonances measured directly using broadband noise excitation. The suitability of the protocol was assessed using statistical analysis. RESULTS: No clear linear relationships were apparent between articulatory characteristics and vocal tract resonances. The results were highly vowel dependent, showing different patterns of resonance tuning and interactions between variables. This potentially indicates a complex interaction between the vocal tract and sung vowels in soprano voices, meriting further investigation. CONCLUSIONS: The effective interpretation of MRI data is essential for a deeper understanding of soprano voice production and, in particular, the phenomenon of resonance tuning. This paper presents a new protocol that contributes toward this aim, and the results suggest that a more vowel-specific approach is necessary in the wider investigation of resonance tuning in female voices.
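The statistical assessment described in the Methods amounts to testing for linear relationships between paired measurements. A minimal sketch on hypothetical data (both columns are invented, not the study's measurements):

```python
import numpy as np

# Hypothetical paired measurements: one articulatory parameter
# (lip opening in mm, as might be taken from 3-D MRI images) against
# the first vocal tract resonance R1 in Hz (as might be measured with
# broadband noise excitation). All values are invented.
lip_opening = np.array([8.0, 10.5, 12.0, 14.5, 9.0, 13.0, 11.0, 15.5])
r1 = np.array([650.0, 700.0, 610.0, 780.0, 820.0, 640.0, 730.0, 690.0])

slope, intercept = np.polyfit(lip_opening, r1, 1)
r_squared = np.corrcoef(lip_opening, r1)[0, 1] ** 2

# A low r-squared would mirror the study's finding of no clear linear
# relationship between single articulatory measures and resonances.
print(round(r_squared, 3))
```

The study's conclusion that results are highly vowel dependent suggests running such fits separately per vowel rather than pooling.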


Subjects
Image Interpretation, Computer-Assisted , Imaging, Three-Dimensional , Larynx/diagnostic imaging , Magnetic Resonance Imaging/methods , Mouth/diagnostic imaging , Occupations , Phonation , Singing , Voice Quality , Acoustics , Female , Humans , Larynx/anatomy & histology , Larynx/physiology , Linear Models , Middle Aged , Mouth/anatomy & histology , Mouth/physiology , Posture , Predictive Value of Tests , Vibration
4.
J Voice ; 32(1): 126.e1-126.e10, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28554824

ABSTRACT

INTRODUCTION: At the upper end of the soprano range, singers adjust their vocal tract to bring one or more of its resonances (Rn) toward a source harmonic, increasing the amplitude of the sound; this process is known as resonance tuning. This study investigated the perception of R1 and R2 tuning, key strategies observed in classically trained soprano voices, which were expected to be preferred by listeners. Furthermore, different vowels were compared, whereas previous investigations have usually focused on a single vowel. METHODS: Listeners compared three synthetic vowel sounds, at four fundamental frequencies (f0), to which four tuning strategies were applied: (A) no tuning, (B) R1 tuned to f0, (C) R2 tuned to 2f0, and (D) both R1 and R2 tuned. Participants compared preference and naturalness for these strategies and were asked to identify each vowel. RESULTS: The preference and naturalness results were similar for /ɑ/, with no clear pattern observed for vowel identification. The results for /u/ showed no clear difference for preference, and only slight separation for naturalness, with poor vowel identification. The results for /i/ were striking, with strategies including R2 tuning both preferred and considered more natural than those without. However, strategies without R2 tuning were correctly identified more often. CONCLUSIONS: The results indicate that perception of different tuning strategies depends on the vowel and perceptual quality investigated, and the relationship between the formants and f0. In some cases, formant tuning was beneficial at lower f0s than expected, based on previous resonance tuning studies.
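Why strategy B (R1 tuned to f0) raises the output level can be sketched with a toy source-filter calculation. The resonance values and the Lorentzian peak shape below are illustrative assumptions, not the study's synthesis method:

```python
import numpy as np

def vowel_spectrum(f0, resonances, bandwidth=80.0, n_harmonics=20):
    """Amplitude gain of each source harmonic after shaping by
    simple resonance peaks (Lorentzian magnitude responses,
    summed in power)."""
    harmonics = f0 * np.arange(1, n_harmonics + 1)
    gain = np.zeros_like(harmonics)
    for r in resonances:
        gain += 1.0 / (1.0 + ((harmonics - r) / bandwidth) ** 2)
    return harmonics, gain

# Hypothetical /i/-like resonances; values are invented.
f0 = 700.0                 # a high soprano fundamental
untuned = [300.0, 2300.0]  # R1, R2 at rest (strategy A)
tuned = [f0, 2300.0]       # strategy B: R1 tuned to f0

_, g_untuned = vowel_spectrum(f0, untuned)
_, g_tuned = vowel_spectrum(f0, tuned)

# Moving R1 onto the fundamental boosts the first harmonic's level.
print(g_tuned[0] > g_untuned[0])  # True
```

With R1 left at 300 Hz, no harmonic of a 700 Hz fundamental falls near the resonance, so its energy is largely wasted; tuning it to f0 recovers that gain, which is the premise the listening test evaluates.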


Subjects
Auditory Perception , Singing , Voice , Adult , Aged , Female , Humans , Male , Middle Aged , Young Adult
5.
J Voice ; 23(1): 11-20, 2009 Jan.
Article in English | MEDLINE | ID: mdl-17981014

ABSTRACT

Physical modeling using digital waveguide mesh (DWM) models is an audio synthesis method that has been shown to produce an acoustic output in music synthesis applications that is often described as being "organic," "warm," or "intimate." This paper describes work that takes its inspiration from physical modeling music synthesis and applies it to speech synthesis through a physical modeling mesh model of the human oral tract. Oral tract shapes are found using a computational technique based on the principles of biological evolution. Essential to successful speech synthesis using this method are accurate measurements of the cross-sectional area of the human oral tract, and these are usually derived from magnetic resonance imaging (MRI). However, such images are nonideal, because of the lengthy exposure time (relative to the time of articulation of speech sounds) required, the local ambient acoustic noise associated with the MRI machine itself, and the required supine position for the subject. An alternative method is described where a bio-inspired computing technique that simulates the process of evolution is used to evolve oral tract shapes. This technique is able to produce appropriate oral tract shapes for open vowels using acoustic and excitation data from two adult males and two adult females, but the shapes it produces for close vowels are less appropriate. This technique has none of the drawbacks associated with MRI, because all it requires from the subject is an acoustic and electrolaryngograph (or electroglottograph) recording. Appropriate oral tract shapes do enable the model to produce excellent quality synthetic speech for vowel sounds, and sounds that involve dynamic oral tract shape changes, such as diphthongs, can also be synthesized using an impedance mapped technique. Efforts to improve performance by reducing mesh quantization for close vowels had little effect, and further work is required.
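The core DWM update can be sketched in its rectilinear, finite-difference-equivalent form: each junction's next pressure is half the sum of its four neighbors minus its own previous value. The mesh size, impulse excitation, and rigid boundaries below are simplifications; the paper's model shapes the mesh to measured (or evolved) oral tract areas and adds boundary losses:

```python
import numpy as np

# Minimal 2-D rectilinear digital waveguide mesh, FDTD-equivalent
# update. Dimensions and excitation point are illustrative only.
nx, ny, steps = 40, 8, 100
p_prev = np.zeros((nx, ny))
p_curr = np.zeros((nx, ny))
p_curr[1, ny // 2] = 1.0  # impulse near the "glottis" end

for _ in range(steps):
    p_next = np.zeros_like(p_curr)
    # Interior junctions: next pressure = half the neighbor sum
    # minus the previous pressure. Boundaries held at zero pressure
    # for brevity (a real model would use reflection coefficients).
    p_next[1:-1, 1:-1] = 0.5 * (
        p_curr[:-2, 1:-1] + p_curr[2:, 1:-1]
        + p_curr[1:-1, :-2] + p_curr[1:-1, 2:]
    ) - p_prev[1:-1, 1:-1]
    p_prev, p_curr = p_curr, p_next

print(p_curr.shape)  # (40, 8)
```

Recording the pressure at a junction near the "lips" end over time would give the synthetic output signal; the evolved oral tract shapes determine which junctions exist and how the boundaries reflect.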


Subjects
Larynx/anatomy & histology , Models, Biological , Speech Acoustics , Biological Evolution , Computer Simulation , Female , Humans , Male