Understanding Spherical Projection
Hello,
I’ve been trying to understand the spherical projection process. I generated projection masks for a few scans from sequence 00 (scans 0-2):



I used the following projection parameters:
proj_H = 64, proj_W = 2048, proj_fov_up = 3.0, proj_fov_down = -25.0
I can’t find an explanation for the empty rows that appear in the image. I also see them in the example image in the repo’s README. But when I follow a similar projection procedure (adjusting the projection parameters, of course) for other LiDARs, like the Ouster 128, I don’t see this effect:

What’s the reason behind those empty lines in the projected image of the Velodyne 64? Is there a better way to do the projection that gets rid of them?
Thank you in advance.
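For context, a spherical projection with the parameters above can be sketched as follows. This is a minimal sketch of the usual yaw/pitch range-image formulation, not the repo’s exact implementation; the function name and details are mine:

```python
import numpy as np

def spherical_projection(points, proj_H=64, proj_W=2048,
                         fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an (N, 3) point cloud onto a proj_H x proj_W range image."""
    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)
    fov = abs(fov_up) + abs(fov_down)        # total vertical field of view

    depth = np.linalg.norm(points, axis=1)   # range of each point
    x, y, z = points[:, 0], points[:, 1], points[:, 2]

    yaw = -np.arctan2(y, x)                  # horizontal angle in [-pi, pi]
    pitch = np.arcsin(z / depth)             # vertical angle

    # normalize both angles to [0, 1]
    proj_x = 0.5 * (yaw / np.pi + 1.0)
    proj_y = 1.0 - (pitch + abs(fov_down)) / fov

    # scale to pixel coordinates and clamp to valid indices
    u = np.clip(np.floor(proj_x * proj_W), 0, proj_W - 1).astype(np.int64)
    v = np.clip(np.floor(proj_y * proj_H), 0, proj_H - 1).astype(np.int64)

    # unfilled pixels stay at -1; later points overwrite earlier ones
    range_image = np.full((proj_H, proj_W), -1.0, dtype=np.float32)
    range_image[v, u] = depth
    return range_image
```

Note that a row is only filled if some point’s pitch falls into that row’s angular slice, which is exactly where empty rows can come from.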
Issue Analytics
- State:
- Created 2 years ago
- Reactions: 1
- Comments: 6 (3 by maintainers)

Regarding the first thing:
KITTI provides only point clouds, and in the case of the odometry data these point clouds are modified to account for the motion of the vehicle. The rotation of the LiDAR takes some time, so if the vehicle moves, the beams are in reality not all fired from the same spatial location. This ego-motion is compensated for and the points are transformed accordingly. That’s why you can see a spiral-like pattern in the LiDAR point cloud when you view it from above.
And that’s right: a rotating LiDAR already produces data in exactly this format, so you can use it directly. Again, the projection is only needed when all we have is a point cloud, as in the KITTI odometry dataset.
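As an aside on the empty rows themselves: one mechanism that can produce persistent gaps is that a sensor’s beams need not be uniformly spaced in pitch, while the projection bins pitch into uniform rows. The sketch below uses made-up two-block beam angles (illustrative numbers only, not the real HDL-64E calibration) to show how uniform binning of non-uniform beams leaves some rows empty while others collect two beams:

```python
import numpy as np

proj_H = 64
fov_up, fov_down = 3.0, -25.0
fov = fov_up - fov_down  # 28 degrees total

# Hypothetical two-block beam layout: the upper block packs its beams
# more tightly than the lower block (illustrative values only).
upper_block = np.linspace(3.0, -8.0, 32)    # ~0.35 deg spacing
lower_block = np.linspace(-8.5, -25.0, 32)  # ~0.53 deg spacing
pitches = np.concatenate([upper_block, lower_block])

# Uniform row binning, as in the spherical projection: each row covers
# 28/64 = 0.4375 deg, regardless of where the beams actually point.
rows = np.floor((1.0 - (pitches - fov_down) / fov) * proj_H)
rows = np.clip(rows, 0, proj_H - 1).astype(int)

occupied = np.unique(rows)
print(f"{len(occupied)} of {proj_H} rows occupied; "
      f"{proj_H - len(occupied)} rows stay empty")
```

With these numbers, the tightly spaced upper beams collide into shared rows while the widely spaced lower beams skip rows entirely, so some rows never receive a point for any scan.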
Thank you. I understand better now.