
Question about Intrinsic Matrix and Camera Settings


❓ Questions and Help

Hi, I have several questions about the intrinsic/camera matrix of Habitat. Feel free to let me know if this question fits Habitat-Sim better. Thanks!

  1. Can I know whether the intrinsic matrix computed in the demo is accurate or just an approximation?
import numpy as np

# hfov is the horizontal field of view in radians
K = np.array([
    [1 / np.tan(hfov / 2.), 0., 0., 0.],
    [0., 1 / np.tan(hfov / 2.), 0., 0.],
    [0., 0., 1., 0.],
    [0., 0., 0., 1.]
])
  2. If K is an approximation, could you please tell me why the intrinsic matrix is computed this way?

  3. If K is accurate, I deduce that Habitat uses a pinhole camera model with a screen of size [2 × height / width, 2], whose distance to the optical center is the focal length f. The first element of the size is for the vertical axis and the second for the horizontal axis. Is this correct?

  4. My hypothesis also comes from two images taken from the same position and orientation: the first with a WIDTH and HEIGHT of 640 and 360, the latter with a square size of 256. I find that their horizontal views are the same, or at least very similar; the difference lies in the vertical content. [two rendered images omitted: a 640×360 render and a 256×256 render]

  5. However, if the preceding discussion is correct, I am confused by the following code snippet. Why does the comment state that the world coordinates are approximations? If the world coordinate origin coincides with the optical center, I think the computed 3D coordinates should be exact, right? (See the round-trip check after the snippet.)

# depths, K, and W come from earlier in the demo; W is the (square) image resolution
# Now get an approximation for the true world coordinates -- see if they make sense
# [-1, 1] for x and [1, -1] for y as array indexing is y-down while world is y-up
xs, ys = np.meshgrid(np.linspace(-1, 1, W), np.linspace(1, -1, W))
depth = depths[0].reshape(1, W, W)
xs = xs.reshape(1, W, W)
ys = ys.reshape(1, W, W)

# Unproject
# negate depth as the camera looks along -Z
xys = np.vstack((xs * depth, ys * depth, -depth, np.ones(depth.shape)))
xys = xys.reshape(4, -1)
xy_c0 = np.matmul(np.linalg.inv(K), xys)
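
For reference, here is a minimal round-trip check (my own sketch, not part of the demo, assuming a 60-degree hfov): it projects a camera-space point through K, rebuilds the homogeneous vector the demo feeds into inv(K), and confirms the point is recovered exactly under an ideal pinhole model.

import numpy as np

hfov = np.pi / 3.0                         # assumed 60-degree horizontal FOV
f = 1.0 / np.tan(hfov / 2.0)               # NDC focal length, as in the demo's K
K = np.array([
    [f, 0., 0., 0.],
    [0., f, 0., 0.],
    [0., 0., 1., 0.],
    [0., 0., 0., 1.]
])

p_cam = np.array([0.3, -0.2, -2.0, 1.0])   # camera-space point; camera looks along -Z
depth = -p_cam[2]                          # positive depth, as stored in a depth map
x_ndc = f * p_cam[0] / depth               # perspective projection onto [-1, 1]^2
y_ndc = f * p_cam[1] / depth

# Rebuild the demo's unprojection input: [x * depth, y * depth, -depth, 1]
xys = np.array([x_ndc * depth, y_ndc * depth, -depth, 1.0])
p_rec = np.linalg.inv(K) @ xys
assert np.allclose(p_rec, p_cam)           # recovered exactly under the pinhole model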

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 5 (2 by maintainers)

Top GitHub Comments

1 reaction
erikwijmans commented, Sep 6, 2020

In OpenGL there is still an intrinsic matrix; it is just part of the perspective projection matrix. The intrinsic matrix K above is a valid intrinsic matrix, it just assumes the camera has a screen whose pixels are specified by continuous values in [-1, 1]^2. This is common in graphics because you don't want to write code that is locked to a specific screen size, as that will vary. That is really the only difference between the two.
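
To make this relationship concrete, here is a small sketch (my own; ndc_to_pixel_intrinsics is a hypothetical helper, not a Habitat API) that converts the NDC-style K above into a conventional pixel-space intrinsic matrix. It assumes the same NDC focal length for x and y, as in the K above, and ignores the y-flip between world (y-up) and image (y-down) coordinates.

import numpy as np

# Hypothetical helper (not part of Habitat): convert the demo's NDC-space
# focal length into a conventional pixel-space 3x3 intrinsic matrix.
def ndc_to_pixel_intrinsics(hfov, width, height):
    f_ndc = 1.0 / np.tan(hfov / 2.0)      # focal length on the [-1, 1] screen
    fx = f_ndc * width / 2.0              # scale NDC x-units to pixels
    fy = f_ndc * height / 2.0             # scale NDC y-units to pixels
    cx, cy = width / 2.0, height / 2.0    # principal point at the image center
    return np.array([
        [fx, 0.0, cx],
        [0.0, fy, cy],
        [0.0, 0.0, 1.0],
    ])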

0 reactions
Xiaoming-Zhao commented, Oct 19, 2020

Thanks a lot for your explanation. Once you mentioned the projection matrix, I understood the demo. I think my confusion came from the term "intrinsic parameters", which led me to think of the intrinsic/camera matrix, whereas you are following the OpenGL procedure.

Actually, I think there are some differences in how the computer vision literature and OpenGL handle image generation. Given points/objects in the camera/eye coordinate system, they proceed differently:

  • For most computer vision textbooks, there is a single camera/intrinsic/calibration matrix, which directly maps points in the camera coordinate system to the canvas/screen.
  • For OpenGL, there is no explicit intrinsic matrix. Instead, there are two steps (sketched below):
    • A projection matrix maps points into Normalized Device Coordinates (NDC)
    • A viewport transformation is then applied to get pixel coordinates on the canvas/screen
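
For concreteness, a minimal sketch of the second step (my own illustration, assuming a viewport anchored at the origin; OpenGL performs this internally based on the glViewport settings):

# Sketch of the viewport transformation (step two above), assuming a viewport
# anchored at (0, 0); OpenGL applies this internally based on glViewport.
def viewport_transform(x_ndc, y_ndc, width, height):
    # NDC coordinates lie in [-1, 1]; map them to pixel coordinates.
    x_pix = (x_ndc + 1.0) * width / 2.0
    y_pix = (y_ndc + 1.0) * height / 2.0
    return x_pix, y_pix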

I think it may be better to adjust the terminology a little to make this clearer. I am happy to open a PR for it if you think it is necessary.

BTW, I think this blog post demonstrates the relationship between the intrinsic matrix and the projection matrix: https://strawlab.org/2011/11/05/augmented-reality-with-OpenGL/
