Sounds Real: Using Hardware Accelerated Real-time Ray-Tracing for Augmenting Location Dependent Audio Samples

Alexander Madzar, Linfeng Li, Hymalai Bello, Bo Zhou, Paul Lukowicz

In: International Conference on Mobile Ubiquitous Computing, Systems, Services and Technologies (UBICOMM-2021), October 3-7, 2021, Barcelona, Spain. ISBN 978-1-61208-886-0. IARIA, 10/2021.


We present a data augmentation technique for generating location-variant audio samples using ray-traced audio in virtual recreations of real-world environments. Hardware Audio-Based Location-Aware Systems can locate audio sources relative to mobile devices, a technique relevant to location-based services and person tracking in ubiquitous environments. However, such systems are limited by the difficulty of collecting enough data to train their machine learning models reliably. To overcome this problem, we constructed a virtual environment using the audio ray-tracing solution NVIDIA VRWorks Audio in Unreal Engine 4 to simulate a real-world setting. The environmental sounds from the real-world scenario were imported into the virtual environment. With appropriate calibration between the virtual and real data sets, this strategy can augment the training data for Hardware Audio-Based Location-Aware System machine learning models. Our results show that the audio ray-tracing framework can simulate real-world sound in the virtual environment to a certain extent.
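The paper itself does not publish code, but the augmentation strategy it describes (calibrate simulated, ray-traced samples against real recordings, then merge both into one training set) can be sketched roughly as follows. All function names are hypothetical, and RMS gain matching stands in for whatever calibration the authors actually used:

```python
import numpy as np


def rms(signal):
    """Root-mean-square level of an audio signal."""
    return np.sqrt(np.mean(np.square(signal)))


def calibrate_gain(sim, real_reference):
    """Scale a ray-traced (simulated) sample so its RMS level
    matches a real-world reference recording. This is only one
    plausible calibration step; the paper does not specify one."""
    return sim * (rms(real_reference) / rms(sim))


def augment_dataset(real_samples, sim_samples):
    """Combine real recordings with level-calibrated simulated
    samples to form an enlarged training set."""
    reference = real_samples[0]
    calibrated = [calibrate_gain(s, reference) for s in sim_samples]
    return real_samples + calibrated
```

A model for a location-aware system would then be trained on the output of `augment_dataset` instead of the (scarce) real recordings alone.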


Further Links

ubicomm_2021_1_30_10018.pdf (pdf, 6 MB)

German Research Center for Artificial Intelligence (DFKI)