Question about Intrinsic Matrix and Camera Settings
❓ Questions and Help
Hi, I have several questions about the intrinsic/camera matrix of Habitat. Feel free to let me know if this question fits better in Habitat-Sim. Thanks!
- Can I know whether the intrinsic matrix computed in the demo is accurate or just an approximation?
# hfov is the horizontal field of view in radians
K = np.array([
    [1 / np.tan(hfov / 2.), 0., 0., 0.],
    [0., 1 / np.tan(hfov / 2.), 0., 0.],
    [0., 0., 1., 0.],
    [0., 0., 0., 1.],
])
- If `K` is an approximation, could you please tell me why the intrinsic matrix is computed in this way?
- If `K` is accurate, I deduce that Habitat uses a pinhole camera model with a screen of size `[2 x height / width, 2]`, whose distance to the optical center is the focal length `f`. The first element of the size is for the vertical axis, while the second is for the horizontal axis. Is this correct?
- My hypothesis also comes from these two images taken from the same position and orientation. The first is taken with `WIDTH` and `HEIGHT` of 640 and 360, while the latter is taken with a square size of 256. I find that their horizontal views are the same, or really similar; the difference comes from the vertical content.
- However, if the preceding discussions are correct, I am confused about the following code snippet. Can I know why the comment states that the world coordinates are approximations? If the world coordinate origin coincides with the optical center, I think the computed 3D coordinates are accurate, right? (A self-contained version of this unprojection is sketched after the snippet below.)
# Now get an approximation for the true world coordinates -- see if they make sense
# [-1, 1] for x and [1, -1] for y as array indexing is y-down while world is y-up
xs, ys = np.meshgrid(np.linspace(-1, 1, W), np.linspace(1, -1, W))
depth = depths[0].reshape(1, W, W)
xs = xs.reshape(1, W, W)
ys = ys.reshape(1, W, W)
# Unproject
# negate depth as the camera looks along -Z
xys = np.vstack((xs * depth, ys * depth, -depth, np.ones(depth.shape)))
xys = xys.reshape(4, -1)
xy_c0 = np.matmul(np.linalg.inv(K), xys)
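For what it's worth, here is a self-contained version of that unprojection (a sketch only, not the demo code): the image size, the field of view, and the constant dummy depth are made up for this example, whereas in the demo `depths` comes from the simulator's depth sensor and `hfov` from the sensor settings.

import numpy as np

# Assumptions for this sketch: a square W x W depth image, hfov in radians,
# and a constant dummy depth of 2 m for every pixel.
W = 256
hfov = np.radians(90.0)

# Same K as in the demo above: together with the perspective divide by -z,
# it maps camera-space points onto a [-1, 1]^2 screen at distance 1 / tan(hfov / 2).
K = np.array([
    [1 / np.tan(hfov / 2.), 0., 0., 0.],
    [0., 1 / np.tan(hfov / 2.), 0., 0.],
    [0., 0., 1., 0.],
    [0., 0., 0., 1.],
])

depth = np.full((1, W, W), 2.0)

# NDC grid: x runs left-to-right in [-1, 1], y runs top-to-bottom in [1, -1]
# because array rows grow downward while the world y-axis points up.
xs, ys = np.meshgrid(np.linspace(-1, 1, W), np.linspace(1, -1, W))
xs = xs.reshape(1, W, W)
ys = ys.reshape(1, W, W)

# Unproject: scale the NDC coordinates by depth and negate z, since the camera
# looks along -Z. The result is in the camera frame; world coordinates would
# additionally require the agent's camera-to-world transform (not shown here).
xys = np.vstack((xs * depth, ys * depth, -depth, np.ones(depth.shape)))
xys = xys.reshape(4, -1)
points_cam = np.matmul(np.linalg.inv(K), xys)
print(points_cam.shape)  # (4, W * W)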
Top GitHub Comments
In OpenGL there is still an intrinsic matrix; it is just part of the perspective projection matrix. The intrinsic matrix `K` above is a valid intrinsic matrix, it just assumes the camera has a screen whose pixels are specified by continuous values in [-1, 1]^2. This is common in graphics, as you don't want to write things that are locked to a specific screen size, since that will vary. That is really the only difference between the two.

Thanks a lot for your explanation. Once you mentioned the projection matrix, I understood the demo. I think my confusion comes from the terminology "intrinsic parameters", which led me to think of the intrinsic/camera matrix from computer vision, whereas you are following the OpenGL procedure.
Actually, I think there are some differences in how the computer-vision literature and OpenGL handle image generation. Given points/objects in the camera/eye coordinate system, the two process them differently: the computer-vision intrinsic matrix maps them directly to pixel coordinates, while the OpenGL projection matrix first maps them to normalized device coordinates in [-1, 1]^2, which the viewport transform then maps to pixels. A rough sketch of the contrast follows below.
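For illustration, here is a minimal sketch of the two conventions for a single point (the point `p_cam`, the image size, and the pixel-space intrinsics `fx`, `fy`, `cx`, `cy` are made up for this example and are not taken from Habitat):

import numpy as np

# Made-up example values: a square image of W pixels and a 90-degree horizontal FoV.
W = 256
hfov = np.radians(90.0)
f = 1.0 / np.tan(hfov / 2.0)          # focal length for the [-1, 1]^2 screen
p_cam = np.array([0.5, -0.2, -3.0])   # a point in the camera frame (camera looks along -Z)

# OpenGL-style: project onto normalized device coordinates in [-1, 1]^2.
x_ndc = f * p_cam[0] / -p_cam[2]
y_ndc = f * p_cam[1] / -p_cam[2]

# Computer-vision style: a pixel-space intrinsic with fx = fy = f * W / 2 and the
# principal point at the image center; v is flipped because image rows grow downward.
fx = fy = f * W / 2.0
cx = cy = W / 2.0
u = fx * p_cam[0] / -p_cam[2] + cx
v = -fy * p_cam[1] / -p_cam[2] + cy

# The two only differ by the affine NDC -> pixel mapping.
assert np.isclose(u, (x_ndc + 1.0) * W / 2.0)
assert np.isclose(v, (1.0 - y_ndc) * W / 2.0)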
I think it may be better to change the terminology a little bit to make it clearer. I am happy to push a PR for it if you think it is necessary.
BTW, I think this blog demonstrates the relationship between the intrinsic matrix and projection matrix: https://strawlab.org/2011/11/05/augmented-reality-with-OpenGL/