Feature: A possibility to set any mesh as the global audio listener
Hey all,
This is my first contribution, so let me start by saying how amazing BabylonJS is, and thanks to all working on this project.
Regarding this feature: currently, the only possible listener for spatial sounds in the scene is the active camera. This is fine in all cases where the active camera is actually “you”.
The problem:
In my game, the camera is always positioned at Vector3(0, 105, 105) and follows the player around. If the player encounters, say, a door that has a sound attached to it with a spatial distance of 60, the camera will not be able to hear that sound (simply because it is too far away). Right now I am working around this by placing every sound at the same offset from its mesh as the camera has from the player (so the sound position is mesh.position + Vector3(0, 105, 105)). This only works while the camera is static and does not rotate with the player object.
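For illustration, a minimal sketch of that workaround; the mesh variable, sound name, and file path are made up:

```ts
// Hypothetical example of the current workaround: instead of letting the sound
// sit on the door mesh, offset it by the camera's fixed offset from the player
// so the camera-bound listener can actually hear it.
const cameraOffset = new BABYLON.Vector3(0, 105, 105);

const doorSound = new BABYLON.Sound("doorCreak", "sounds/door.mp3", scene, null, {
    spatialSound: true,
    maxDistance: 60,
});
doorSound.setPosition(doorMesh.position.add(cameraOffset));
```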
The solution: Maybe I am naive, but after reverse engineering the audio engine, I realised the fix should be very simple.
In the file https://github.com/BabylonJS/Babylon.js/blob/d7827d78296faf574984a370dbe25a0a7b93d63f/src/Audio/audioSceneComponent.ts, on line 387, we have the _afterRender function that determines the camera's global position and sets the listener position.
```ts
private _afterRender() {
    const scene = this.scene;
    if (!this._audioEnabled || !scene._mainSoundTrack || !scene.soundTracks || (scene._mainSoundTrack.soundCollection.length === 0 && scene.soundTracks.length === 1)) {
        return;
    }

    var listeningCamera: Nullable<Camera>;
    var audioEngine = Engine.audioEngine;

    if (scene.activeCameras.length > 0) {
        listeningCamera = scene.activeCameras[0];
    } else {
        listeningCamera = scene.activeCamera;
    }

    if (listeningCamera && audioEngine.audioContext) {
        audioEngine.audioContext.listener.setPosition(listeningCamera.globalPosition.x, listeningCamera.globalPosition.y, listeningCamera.globalPosition.z);
        // ...
```
My proposal: add a new method setListener that sets an internal property this.customListener. Then simply check whether customListener is set and use its absolutePosition; if it is not set, fall back to the camera as before. (A sketch of the setter itself follows the modified _afterRender below.)
```ts
private _afterRender() {
    const scene = this.scene;
    if (!this._audioEnabled || !scene._mainSoundTrack || !scene.soundTracks || (scene._mainSoundTrack.soundCollection.length === 0 && scene.soundTracks.length === 1)) {
        return;
    }

    var listener: Nullable<Camera | Mesh | AbstractMesh>;
    var listenerPositionKey: 'globalPosition' | 'absolutePosition' = 'globalPosition';
    var audioEngine = Engine.audioEngine;

    // Prefer the custom listener when one is set and still usable.
    if (this.customListener && !this.customListener.isDisposed() && this.customListener.isEnabled()) {
        listener = this.customListener;
        listenerPositionKey = 'absolutePosition';
    } else if (scene.activeCameras.length > 0) {
        listener = scene.activeCameras[0];
    } else {
        listener = scene.activeCamera;
    }

    if (listener && audioEngine.audioContext) {
        audioEngine.audioContext.listener.setPosition(listener[listenerPositionKey].x, listener[listenerPositionKey].y, listener[listenerPositionKey].z);
        // ...
```
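As mentioned above, here is a rough sketch of what the setter on AudioSceneComponent could look like; the exact signature, and whether a forwarding helper should also be exposed on Scene, are open questions:

```ts
// Inside AudioSceneComponent (sketch only):
// the mesh that should act as the audio listener; null falls back to the camera.
private customListener: Nullable<AbstractMesh> = null;

/**
 * Sets the mesh to use as the global audio listener.
 * Pass null to revert to the active camera.
 */
public setListener(listener: Nullable<AbstractMesh>): void {
    this.customListener = listener;
}
```

A user would then call something like setListener(playerMesh) (directly, or via a hypothetical helper forwarded on Scene), and the player mesh itself would become the listener while the camera stays at its fixed offset.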
I am very happy to implement this if you decide to move forward. I do not have a VR headset, so I will not be able to test the related code paths for cameras in VR; some help would be needed there, unfortunately.
Thanks!
Issue Analytics
- Created 4 years ago
- Comments: 6 (6 by maintainers)
Top GitHub Comments
correct 😃 this way we have an easy implementation while being really flexible for the user
@deltakosh You mean that the user would define it as
and then we would call this function in the _afterRender function?
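For reference, if the callback route is chosen, the user-facing side might look roughly like the following; the property name audioListenerPositionProvider and the playerMesh variable are assumptions for illustration only, and the final naming would be settled in the PR:

```ts
// Hypothetical: the user supplies a function returning the desired listener
// position, and _afterRender calls it instead of reading the active camera.
scene.audioListenerPositionProvider = (): BABYLON.Vector3 => {
    // Follow the player mesh rather than the camera.
    return playerMesh.absolutePosition;
};
```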