Soundtracing for Realtime Speech Adjustment to Environmental Conditions in 3D Simulations

Bartosz Ziółko, Tomasz Pȩdzimąż, Szymon Pałka


We present a 3D realtime audio engine which utilizes frustum tracing to create realistic audio auralization, modifying speech in architectural walkthroughs. All audio effects are computed from both the geometrical scene properties (e.g. walls, furniture) and the acoustical ones (e.g. materials, air attenuation). The sound changes dynamically as the point of perception and the sound sources move. The engine can be configured to use as little as 10 percent of the available processing power. Our demonstration will be based on listening to radio samples in rooms with similar shapes but different acoustical properties. The described system is a component of a virtual reality trainer for firefighters using Oculus Rift. It allows users to conduct dialogues with victims and to locate them by sound cues.
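To illustrate the kind of computation such an engine performs per sound path, here is a minimal sketch (not the authors' implementation) of combining spherical spreading, air attenuation, and per-reflection material absorption into a received level. The function name, the default air-absorption constant, and the simplified linear air model are assumptions for illustration only.

```python
import math

def received_level_db(source_db, distance_m,
                      air_absorption_db_per_m=0.005,
                      reflection_absorptions=()):
    """Attenuate a source level along one propagation path.

    distance_m              -- total path length from source to listener
    air_absorption_db_per_m -- simplified frequency-independent air loss
    reflection_absorptions  -- absorption coefficients (0..1) of each
                               surface the path reflects off
    """
    # Spherical spreading: -20*log10(d) relative to a 1 m reference.
    level = source_db - 20.0 * math.log10(max(distance_m, 1.0))
    # Air attenuation grows with path length (simplified linear model).
    level -= air_absorption_db_per_m * distance_m
    # Each reflection keeps a (1 - alpha) fraction of the energy.
    for alpha in reflection_absorptions:
        level += 10.0 * math.log10(1.0 - alpha)
    return level

# An 80 dB source heard 10 m away through one absorptive reflection:
direct = received_level_db(80.0, 10.0)
reflected = received_level_db(80.0, 10.0, reflection_absorptions=(0.9,))
```

A full frustum-tracing engine evaluates many such paths against the scene geometry and material table every frame; the per-path arithmetic above is the acoustic core that makes two rooms with the same shape but different materials sound different.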


Cite as: Ziółko, B., Pȩdzimąż, T., Pałka, S. (2017) Soundtracing for Realtime Speech Adjustment to Environmental Conditions in 3D Simulations. Proc. Interspeech 2017, 4026-4027.


@inproceedings{Ziółko2017,
  author={Bartosz Ziółko and Tomasz Pȩdzimąż and Szymon Pałka},
  title={Soundtracing for Realtime Speech Adjustment to Environmental Conditions in 3D Simulations},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={4026--4027}
}