
Detections in own point clouds



What is the required orientation of the extracted point clouds? I tried running the PointNet on a different extracted frustum of points, but the results were strange.

I tried to visualize the point clouds in the pickled files, and it appears that the frustum direction is oriented along the z-axis. From this I can infer that the points are in camera coordinates, rather than lidar coordinates. Can you confirm this? Does the network only work like this?

I am using a batch size of 1, with 1024 points, and do something like this:

            class_vector = np.zeros((1, 3))
            class_vector[0, 1] = 1

            point_cloud = np.zeros((1, 1024, 4), dtype=np.float32)
            indices = np.arange(0, len(frustum))

            if len(frustum) > 1024:
                # Enough points: subsample without replacement.
                choice = np.random.choice(indices, size=1024, replace=False)
            else:
                # Too few points: oversample with replacement.
                choice = np.random.choice(indices, size=1024, replace=True)

            point_cloud[0] = frustum[choice]

I randomly sample 1024 points from the extracted frustum (are there better ways?) and feed this along with the other two placeholders for the class ID and the training phase. I give the point cloud in camera coordinates. In the example here, I set the class to pedestrian:

            feed_dict = {
                self.pointclouds_pl: point_cloud,
                self.one_hot_vec_pl: class_vector,
                self.is_training_pl: False,
            }

            # Run the graph (assuming a session stored on self; the first two
            # end_points keys follow the Frustum PointNets naming).
            batch_logits, batch_centers, \
            batch_heading_scores, batch_heading_residuals, \
            batch_size_scores, batch_size_residuals = \
                self.sess.run(
                    [self.end_points['mask_logits'], self.end_points['center'],
                     self.end_points['heading_scores'], self.end_points['heading_residuals'],
                     self.end_points['size_scores'], self.end_points['size_residuals']],
                    feed_dict=feed_dict)

            batch_seg_prob = softmax(batch_logits)[:, :, 1]  # BxN
            batch_seg_mask = np.argmax(batch_logits, 2)  # BxN
            mask_mean_prob = np.sum(batch_seg_prob * batch_seg_mask, 1)  # B,
            mask_mean_prob = mask_mean_prob / np.sum(batch_seg_mask, 1)  # B,
            heading_prob = np.max(softmax(batch_heading_scores), 1)  # B,
            size_prob = np.max(softmax(batch_size_scores), 1)  # B,
            batch_scores = np.log(mask_mean_prob) + np.log(heading_prob) + np.log(size_prob)

            filtered_frustums.append(point_cloud[batch_seg_mask == 1].astype(np.float32))
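The `softmax` helper used above is not shown in the snippet; a minimal, numerically stable NumPy version operating on the last axis (matching the BxNx2 segmentation logits) could look like this:

```python
import numpy as np

def softmax(x, axis=-1):
    """Softmax along `axis`, subtracting the max first for numerical stability."""
    shifted = x - np.max(x, axis=axis, keepdims=True)
    exp = np.exp(shifted)
    return exp / np.sum(exp, axis=axis, keepdims=True)

# Example: per-point foreground probability from BxNx2 segmentation logits.
logits = np.array([[[0.0, 2.0], [1.0, 1.0]]])  # B=1, N=2, 2 classes
probs = softmax(logits)[:, :, 1]               # shape (1, 2)
```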

However, when I tried to visualize the points belonging to the object, it doesn’t look right…

Is there some further normalization that I need to apply to the point cloud? I looked into it, but I can’t really tell. Am I misinterpreting what batch_seg_mask is?
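One normalization I suspect might matter: the Frustum PointNets pipeline rotates each frustum about the camera y-axis so its center ray lines up with +z ("center view"), and a network trained on rotated frustums will behave oddly on raw camera-frame frustums. A sketch of that rotation (the function name is mine; how you obtain the frustum angle from the 2D box center ray depends on your setup):

```python
import numpy as np

def rotate_frustum_to_center_view(points, frustum_angle):
    """Rotate Nx4 camera-frame points (x right, y down, z forward)
    about the y-axis so that a ray at `frustum_angle` from the +z
    axis maps onto +z. y and reflectance columns are untouched."""
    c, s = np.cos(frustum_angle), np.sin(frustum_angle)
    out = points.copy()
    x, z = points[:, 0], points[:, 2]
    out[:, 0] = c * x - s * z
    out[:, 2] = s * x + c * z
    return out
```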

Anyhow, great work!

Issue Analytics

  • State: open
  • Created: 5 years ago
  • Comments: 7 (2 by maintainers)

Top GitHub Comments

charlesq34 commented, May 31, 2018

Hi @fferroni

Yes, we are working on the KITTI rectified camera coordinate system. If you want to test on another dataset, you need to verify that the axis directions are consistent with KITTI and make sure the camera height is similar to KITTI.

btw, the links provided seem not to work anymore.
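To make the coordinate convention concrete: mapping raw velodyne points into the KITTI rectified camera frame uses the calibration matrices from the KITTI calib files. A hedged sketch (matrix names follow the KITTI devkit; parsing the calib file is left to your setup):

```python
import numpy as np

def velo_to_rect(pts_velo, Tr_velo_to_cam, R0_rect):
    """Map Nx3 lidar points into the KITTI rectified camera frame:
    x_rect = R0_rect @ (Tr_velo_to_cam @ x_velo_homogeneous).
    Tr_velo_to_cam is 3x4, R0_rect is 3x3."""
    n = pts_velo.shape[0]
    hom = np.hstack([pts_velo, np.ones((n, 1))])  # Nx4 homogeneous coords
    cam = hom @ Tr_velo_to_cam.T                  # Nx3, unrectified camera frame
    return cam @ R0_rect.T                        # Nx3, rectified camera frame
```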

gujiaqivadin commented, Nov 5, 2019

> Hi @charlesq34, thanks for the fast reply. I fixed the camera height and used the same axes, and now it works pretty well 😉
>
> Can you comment on sampling strategies for the frustum points?
>
>   1. In cases where there are fewer points than the nb_points parameter of the network, is it better to pad with zeros or to oversample the actual points?
>   2. In cases where there are substantially more points than the nb_points parameter, can you comment on optimal sampling strategies? I am sampling randomly, but I wonder how you got the performance on the KITTI benchmark?
>
> BR, Francesco

Hello, thanks for your idea of sampling. I also found that random sampling may not be the best sampling strategy for point clouds with more points than KITTI. Can you recommend a sampling strategy for dense point clouds? I am working on how to make it better.
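One common alternative to uniform random sampling for dense frustums is farthest point sampling, which preserves coverage of the frustum's spatial extent. A minimal NumPy sketch (the function name is mine; O(N·k), which is fine at frustum sizes):

```python
import numpy as np

def farthest_point_sample(points, k, seed=0):
    """Greedily pick k rows of an Nx4 array (xyz + reflectance), each time
    taking the point farthest from the set chosen so far."""
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    chosen = np.empty(k, dtype=np.int64)
    chosen[0] = rng.integers(n)
    # Squared distance from every point to its nearest chosen point.
    dist = np.sum((points[:, :3] - points[chosen[0], :3]) ** 2, axis=1)
    for i in range(1, k):
        chosen[i] = np.argmax(dist)
        new_d = np.sum((points[:, :3] - points[chosen[i], :3]) ** 2, axis=1)
        dist = np.minimum(dist, new_d)
    return points[chosen]
```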
