WebXRCamera's projection matrix is incorrect
WebXRCamera has a set of “rig cameras” that represent the views/eyes of the XR device. The world and projection matrices of those views are copied over to the rig cameras, but no projection matrix is assigned to the WebXRCamera itself. Since the WebXRCamera is the active scene camera, and many APIs use the active scene camera by default, the APIs that depend on the projection matrix don’t work correctly. For example, all the picking-related APIs (scene.pick, scene.createPickingRay, etc.) produce unexpected results. If you explicitly pass a rig camera to those APIs, they work as expected, since the correct projection matrix is then used, but I think the behavior should also be correct with the default camera when the active camera is the WebXRCamera.
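To make the workaround concrete, here is a minimal sketch, assuming an XR session created with scene.createDefaultXRExperienceAsync() and stored in a hypothetical xr variable (playground-style, with BABYLON as a global):

```ts
// Sketch only: `xr` is assumed to be the WebXRDefaultExperience returned by
// scene.createDefaultXRExperienceAsync().
const xrCamera = xr.baseExperience.camera; // the WebXRCamera (active scene camera)

// Buggy while in XR: passing null falls back to the active WebXRCamera,
// whose projection matrix is never assigned, so the ray is wrong.
const badRay = scene.createPickingRay(0, 0, BABYLON.Matrix.Identity(), null);

// Works: explicitly pass the first rig camera (left eye), whose projection
// matrix comes straight from the XR device.
const eye = xrCamera.rigCameras[0];
const goodRay = scene.createPickingRay(0, 0, BABYLON.Matrix.Identity(), eye);
const pickInfo = scene.pick(0, 0, undefined, false, eye);
```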
I created a Playground example where tapping on the screen (on a mobile device) uses scene.createPickingRay at screen coordinate (0, 0) and places a box along the ray. It should show up in the upper-left corner of the display. This works correctly when the rig camera is explicitly passed to createPickingRay, but does not if the default (WebXRCamera) is used; a condensed sketch of the repro follows the links below.
With WebXRCamera: https://playground.babylonjs.com/#AC8XPN#25
With WebXRCamera.rigCameras[0]: https://playground.babylonjs.com/#AC8XPN#28
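For reference, a condensed sketch of what the repro does, assuming the same xr variable as above (the actual Playground code may differ):

```ts
// On tap, cast a ray at screen coordinate (0, 0) and drop a box along it.
scene.onPointerObservable.add((pointerInfo) => {
  if (pointerInfo.type !== BABYLON.PointerEventTypes.POINTERDOWN) {
    return;
  }

  // Passing rigCameras[0] behaves correctly; passing null (i.e. the active
  // WebXRCamera) reproduces the bug.
  const eye = xr.baseExperience.camera.rigCameras[0];
  const ray = scene.createPickingRay(0, 0, BABYLON.Matrix.Identity(), eye);

  // The box should appear in the upper-left corner of the display.
  const box = BABYLON.MeshBuilder.CreateBox("box", { size: 0.1 }, scene);
  box.position = ray.origin.add(ray.direction.scale(2));
});
```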
Top GitHub Comments
The projection matrix we are using is the one provided by the XR host itself.
As the main camera (the parent of both rig cameras) has no projection matrix defined, we calculate one on our own. I would assume that, due to incorrectly set parameters (fov?), our calculation of the projection matrix is wrong.
The simplest solution (that should work out of the box) is to set the main camera’s projection matrix to be the first eye’s projection matrix. This won’t work in a split-screen emulation, but should work in an immersive session. I will submit a PR and wait for your feedback.
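As a rough user-land approximation of that proposal (not necessarily what the actual PR does), one could mirror the first eye’s projection matrix onto the parent camera each frame via the public Camera.freezeProjectionMatrix API:

```ts
// Sketch only: `xrCamera` is assumed to be the active WebXRCamera.
// Before every render, copy the first rig camera's projection matrix onto
// the parent camera so that default-camera APIs see a valid matrix.
scene.onBeforeRenderObservable.add(() => {
  const eye = xrCamera.rigCameras[0];
  if (eye) {
    // freezeProjectionMatrix overrides the camera's computed projection
    // matrix with the provided one (here, the matrix the XR device
    // supplied for the first view).
    xrCamera.freezeProjectionMatrix(eye.getProjectionMatrix());
  }
});
```

As the comment above notes, using only the first eye’s matrix won’t be right for split-screen emulation, but it matches the immersive-session case.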
Closing this issue. Using the first camera’s projection matrix is the best solution. Apart from changing the way we calculate the projection matrix (which we won’t), I don’t see a different way of getting the information from the data we do have.