International Journal of Advanced Computer Science and Applications(IJACSA), Volume 12 Issue 10, 2021.
Abstract: Binaural rendering is a technology that generates realistic sound for a user wearing stereo headphones, making it essential for headphone-based virtual reality (VR) services. However, conventional binaural rendering cannot reflect the user's free movement in VR: when the sound does not match the visual scene as the user moves through the VR space, the quality of the VR experience degrades. To reduce this mismatch, a complex plane based stereo realistic sound generation method was proposed that allows the user's free movement in VR, which changes the distance and azimuth between the user and each speaker. To calculate the distance and azimuth resulting from the user's position change, the 5.1 multichannel speaker playback system and the user are placed on the complex plane. The distance and azimuth between the user and each speaker can then be computed simply as the distance and angle between two points on the complex plane. The 5.1 multichannel audio signals are scaled by the five estimated distances according to the inverse square law, and the scaled signals are mapped to a newly generated virtual 5.1 multichannel speaker layout using the five measured azimuths together with the azimuth of the head movement. Finally, a stereo realistic sound reflecting the user's position change and head movement is obtained through binaural rendering using the scaled and mapped 5.1 multichannel audio signals and the HRTF coefficients. Experimental results show that the proposed method generates realistic audio reflecting the user's position and azimuth changes in VR with an error rate of less than about 5%.
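The core geometric step described in the abstract can be sketched in a few lines: placing the user and speakers on the complex plane lets the distance and azimuth fall out as the modulus and argument of a complex difference. The sketch below is illustrative only and not the paper's implementation; the speaker azimuths follow the standard ITU-R BS.775 5.1 layout (LFE omitted), the 1 m radius is an assumed value, and `inverse_square_gain` is a hypothetical helper applying the inverse-square scaling the abstract mentions.

```python
import cmath
import math

# Assumed 5.1 speaker azimuths (degrees), ITU-R BS.775 layout, LFE omitted:
# C at 0, L/R at +/-30, Ls/Rs at +/-110, on a circle of assumed radius 1.0 m.
SPEAKER_AZIMUTHS_DEG = [0.0, 30.0, -30.0, 110.0, -110.0]
SPEAKER_RADIUS_M = 1.0

def speaker_positions():
    """Place the five speakers on the complex plane; the nominal
    listening position is the origin (0 + 0j)."""
    return [cmath.rect(SPEAKER_RADIUS_M, math.radians(a))
            for a in SPEAKER_AZIMUTHS_DEG]

def distance_and_azimuth(user_pos, speaker_pos):
    """Distance and azimuth (degrees) from the user to one speaker,
    computed as the modulus and argument of the complex difference."""
    diff = speaker_pos - user_pos
    return abs(diff), math.degrees(cmath.phase(diff))

def inverse_square_gain(distance, ref_distance=1.0):
    """Hypothetical gain applying the abstract's inverse square law,
    relative to an assumed 1 m reference distance."""
    return (ref_distance / distance) ** 2

# Example: user moves away from the origin; recompute each channel's
# distance, azimuth, and inverse-square gain.
user = 0.2 + 0.3j
for pos in speaker_positions():
    d, az = distance_and_azimuth(user, pos)
    g = inverse_square_gain(d)
```

The remapping of the scaled signals onto the virtual speaker layout and the final HRTF-based binaural rendering would build on these per-channel distances and azimuths.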
Kwangki Kim, “Complex Plane based Realistic Sound Generation for Free Movement in Virtual Reality,” International Journal of Advanced Computer Science and Applications (IJACSA), 12(10), 2021. http://dx.doi.org/10.14569/IJACSA.2021.0121043
@article{Kim2021,
title = {Complex Plane based Realistic Sound Generation for Free Movement in Virtual Reality},
journal = {International Journal of Advanced Computer Science and Applications},
doi = {10.14569/IJACSA.2021.0121043},
url = {http://dx.doi.org/10.14569/IJACSA.2021.0121043},
year = {2021},
publisher = {The Science and Information Organization},
volume = {12},
number = {10},
author = {Kwangki Kim}
}
Copyright Statement: This is an open access article licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.