1.
Article En | MEDLINE | ID: mdl-37022015

Most systems for simulating sound propagation in a virtual environment for interactive applications use ray- or path-based models of sound. With these models, the "early" (low-order) specular reflection paths play a key role in defining the "sound" of the environment. However, the wave nature of sound, and the fact that smooth objects are approximated by triangle meshes, pose challenges for creating realistic approximations of the reflection results. Existing methods which produce accurate results are too slow to be used in most interactive applications with dynamic scenes. This paper presents a reflection-modeling method called spatially sampled near-reflective diffraction (SSNRD), based on an existing approximate diffraction model, Volumetric Diffraction and Transmission (VDaT). The SSNRD model addresses the challenges mentioned above, produces results accurate to within 1-2 dB on average compared to edge diffraction, and is fast enough to generate thousands of paths in a few milliseconds in large scenes. The method encompasses scene geometry processing, path trajectory generation, spatial sampling for diffraction modeling, and a small deep neural network (DNN) to produce the final response of each path. All steps of the method are GPU-accelerated, and NVIDIA RTX real-time ray tracing hardware is used for spatial computing tasks beyond just traditional ray tracing.
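The "early" specular reflection paths mentioned above are conventionally found with the image-source method: mirror the source across the reflecting surface, then intersect the image-to-listener segment with that surface. The sketch below illustrates this standard technique for a single first-order reflection off a plane; it is a generic illustration, not code from the SSNRD paper.

```python
import numpy as np

def image_source_reflection(src, lis, plane_pt, plane_n):
    """First-order specular reflection via the image-source method.

    Mirror the source across the reflecting plane, then intersect the
    image->listener segment with the plane to find the reflection point.
    Returns (reflection_point, total_path_length), or None if the
    segment does not cross the plane between its endpoints.
    """
    n = plane_n / np.linalg.norm(plane_n)
    # Mirror the source across the plane.
    d = np.dot(src - plane_pt, n)
    image = src - 2.0 * d * n
    # Intersect the segment image -> listener with the plane.
    seg = lis - image
    denom = np.dot(seg, n)
    if abs(denom) < 1e-12:
        return None  # segment parallel to the plane
    t = np.dot(plane_pt - image, n) / denom
    if not (0.0 <= t <= 1.0):
        return None  # crossing lies outside the segment
    refl_pt = image + t * seg
    # By mirror symmetry, the reflected path length is |listener - image|.
    length = np.linalg.norm(lis - image)
    return refl_pt, length

# Example: a floor at z = 0, with source and listener 1 m above it.
src = np.array([0.0, 0.0, 1.0])
lis = np.array([4.0, 0.0, 1.0])
refl, L = image_source_reflection(src, lis, np.zeros(3), np.array([0.0, 0.0, 1.0]))
# refl is the bounce point on the floor; L is the full reflected path length.
```

In an interactive renderer, each such candidate path would then be validated with occlusion ray tests, which is where the GPU ray-tracing hardware mentioned in the abstract comes in.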

2.
J Acoust Soc Am ; 148(4): 1922, 2020 Oct.
Article En | MEDLINE | ID: mdl-33138484

Convincing simulation of diffraction around obstacles is critical in modeling sound propagation in virtual environments. Due to the computational complexity of large-scale wavefield simulations, ray-based models of diffraction are used in real-time interactive multimedia applications. Among popular diffraction models, the Biot-Tolstoy-Medwin (BTM) edge diffraction model is the most accurate, but it suffers from high computational complexity and hence is difficult to apply in real time. This paper introduces an alternative ray-based approach to approximating diffraction, called Volumetric Diffraction and Transmission (VDaT). VDaT is a volumetric diffraction model, meaning it performs spatial sampling of paths along which sound can traverse the scene around obstacles. VDaT uses the spatial sampling results to estimate the BTM edge-diffraction amplitude response and path length, with a much lower computational cost than computing BTM directly. On average, VDaT matches BTM results within 1-3 dB over a wide range of size scales and frequencies in basic cases, and VDaT can handle small objects and gaps better than comparable state-of-the-art real-time diffraction implementations. A GPU-parallelized implementation of VDaT is shown to be capable of simulating diffraction on thousands of direct and specular reflection path segments in small-to-medium-size scenes, within strict real-time constraints and without any precomputed scene information.
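The core idea of volumetric sampling can be illustrated with a toy 2-D version: sample offset paths in a "tube" around the direct source-listener segment and measure what fraction clears an obstacle. This is only a schematic stand-in for VDaT's actual multi-scale sampling and BTM-response estimation, with all geometry invented for the example.

```python
import numpy as np

def tube_visibility(src, lis, edge_y, screen_x, radius, n=256, seed=0):
    """Toy volumetric sampling around a direct path (2-D).

    A thin screen occupies the line x == screen_x for y <= edge_y.
    We sample parallel y-offsets of the src->lis segment within
    +/- radius and count the paths whose crossing point lies above
    the screen edge. The clear fraction is a crude proxy for the
    kind of occlusion statistic a volumetric model aggregates
    across size scales to estimate a diffraction response.
    """
    rng = np.random.default_rng(seed)
    offsets = rng.uniform(-radius, radius, n)
    # Parameter t where each offset segment crosses the screen's x-plane.
    t = (screen_x - src[0]) / (lis[0] - src[0])
    y_cross = src[1] + t * (lis[1] - src[1]) + offsets
    return np.mean(y_cross > edge_y)

src = np.array([0.0, 1.0])
lis = np.array([4.0, 1.0])
# Screen at x = 2 whose top edge sits exactly at path height:
# roughly half the sampled tube is blocked.
frac = tube_visibility(src, lis, edge_y=1.0, screen_x=2.0, radius=0.5)
```

In VDaT proper, such sampling is repeated at multiple radii (size scales), and the per-scale results drive a frequency-dependent approximation of the BTM edge-diffraction amplitude and path length.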

3.
IEEE Access ; 7: 162083-162101, 2019.
Article En | MEDLINE | ID: mdl-32547893

Hearing loss is one of the most common conditions affecting older adults worldwide. Frequent complaints from the users of modern hearing aids include poor speech intelligibility in noisy environments and high cost, among other issues. However, the signal processing and audiological research needed to address these problems have long been hampered by proprietary development systems, underpowered embedded processors, and the difficulty of performing tests in real-world acoustical environments. To facilitate existing research in hearing healthcare and enable new investigations beyond what is currently possible, we have developed a modern, open-source hearing research platform, Open Speech Platform (OSP). This paper presents the system design of the complete OSP wearable platform, from hardware through firmware and software to user applications. The platform provides a complete suite of basic and advanced hearing aid features which can be adapted by researchers. It serves web apps directly from a hotspot on the wearable hardware, enabling users and researchers to control the system in real time. In addition, it can simultaneously acquire high-quality electroencephalography (EEG) or other electrophysiological signals closely synchronized to the audio. All of these features are provided in a wearable form factor with enough battery life for hours of operation in the field.

4.
Article En | MEDLINE | ID: mdl-31379421

We have previously reported a real-time, open-source speech-processing platform (OSP) for hearing aid (HA) research. In this contribution, we describe a wearable version of this platform to facilitate audiological studies in the lab and in the field. The system is based on smartphone chipsets to leverage power efficiency in terms of FLOPS/watt and economies of scale. We present the system architecture and discuss salient design elements in support of HA research. The ear-level assemblies support up to 4 microphones on each ear, with 96 kHz, 24-bit codecs. The wearable unit runs OSP Release 2018c on top of 64-bit Debian Linux for binaural HA processing with an overall latency of 5.6 ms. The wearable unit also hosts an embedded web server (EWS) to monitor and control the HA state in real time. We describe three example web apps and the typical audiological studies they enable. Finally, we describe a baseline speech enhancement module included with Release 2018c, and describe extensions to the algorithms as future work.
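The reported figures can be related with simple arithmetic: at a 96 kHz sample rate, a 5.6 ms end-to-end latency corresponds to a budget of a few hundred samples, and the four 24-bit microphone channels per ear imply a modest but steady data rate. The block size and internal buffering of OSP are not stated in the abstract, so the numbers below are derived only from the quoted figures.

```python
# Arithmetic on the figures quoted in the abstract (illustrative only;
# the actual OSP block size and buffering scheme are not given here).
fs = 96_000            # codec sample rate, Hz
latency_ms = 5.6       # reported end-to-end binaural latency
mics_per_ear = 4
bytes_per_sample = 3   # 24-bit samples

# Total round-trip latency budget in samples at the codec rate.
latency_samples = fs * latency_ms / 1000      # 537.6 samples

# Raw capture data rate for one ear's microphone array.
bytes_per_sec_per_ear = fs * bytes_per_sample * mics_per_ear  # 1,152,000 B/s

print(latency_samples, bytes_per_sec_per_ear)
```

Staying within roughly 500 samples of total budget at 96 kHz is what makes small processing block sizes (and hence tight per-block deadlines) necessary for a binaural HA pipeline.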
