Problem with depth-image
Hello, I am trying to train DenseFusion on my own dataset, but I seem to have a problem with the depth image and the point cloud that is generated from it by:
```python
cam_scale = 1.0
pt2 = depth_masked / cam_scale
pt0 = (ymap_masked - self.cam_cx) * pt2 / self.cam_fx
pt1 = (xmap_masked - self.cam_cy) * pt2 / self.cam_fy
cloud = np.concatenate((pt0, pt1, pt2), axis=1)
cloud = cloud / 1000.0
```
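For reference, here is a self-contained sketch of the same back-projection with the unit handling made explicit. The function name and variable names are my own; the cx/fx pairing with the column indices follows the snippet above (in DenseFusion's `dataset.py`, `ymap` holds column indices and `xmap` holds row indices, which is why the pairing looks swapped):

```python
import numpy as np

def backproject(depth_mm, mask, cam_cx, cam_cy, cam_fx, cam_fy):
    """Back-project masked depth pixels (in mm) to a point cloud in meters.

    A sketch of the standard pinhole model, not the repo's exact code.
    """
    vs, us = np.nonzero(mask)                # row (v) and column (u) indices
    z = depth_mm[vs, us].astype(np.float64)  # depth in mm (cam_scale = 1.0)
    x = (us - cam_cx) * z / cam_fx
    y = (vs - cam_cy) * z / cam_fy
    cloud = np.stack((x, y, z), axis=1)
    return cloud / 1000.0                    # mm -> m, matching LineMOD model units
```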
My depth images are 16-bit, in millimeters (synthesized with NDDS). Here is a projection of the point cloud into the color image: [image not captured in this excerpt]. As you can see, the object shape seems to be correct, but the pose is wrong. I checked the corresponding depth image; there, the pose is correct.
I also have a question about the values in the depth image array. In the code, the depth image is read as an array with `depth = np.array(Image.open(self.list_depth[index]))`. If I print the values of my own depth image, they start at 1100, which should be the depth in mm. But if I print the same values for a depth image from LineMOD, the array starts with 0, which would mean a depth of 0 mm. Can you tell me why?
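A quick way to see what a depth PNG actually contains is to inspect its dtype and value range; the sketch below uses a placeholder path. Note that in real sensor data such as LineMOD, a value of 0 conventionally means "no depth reading" at that pixel rather than a distance of 0 mm, which is why a LineMOD array can start with zeros while a clean NDDS render starts at the object distance:

```python
import numpy as np
from PIL import Image

depth = np.array(Image.open("depth.png"))  # placeholder path

print(depth.dtype)                         # uint16 for a 16-bit mm depth map
valid = depth[depth > 0]                   # drop pixels with no reading
print(valid.min(), valid.max())            # should be a plausible mm range
```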
Top GitHub Comments
I didn't use a specific .mat file. In my `dataset.py` I just read the information out of the .json file: [code block not captured in this excerpt]. With this code you get the pose in the correct units to train DenseFusion with the LineMOD code.
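Since the commenter's code block was lost, here is a minimal sketch of what reading a pose from an NDDS frame annotation could look like. The key names (`objects`, `location`, `quaternion_xyzw`) and the centimeter convention for `location` follow the NDDS/FAT annotation layout as I understand it; verify them against your own export before relying on this:

```python
import json
import numpy as np

def load_pose(json_path, obj_index=0):
    """Read one object's pose from an NDDS frame annotation (a sketch).

    Assumes `location` is in cm and `quaternion_xyzw` is in camera
    coordinates; adjust if your exporter differs.
    """
    with open(json_path) as f:
        ann = json.load(f)
    obj = ann["objects"][obj_index]
    t = np.array(obj["location"]) / 100.0  # cm -> m, to match meshes in meters
    qx, qy, qz, qw = obj["quaternion_xyzw"]
    # Standard unit-quaternion -> rotation-matrix conversion
    R = np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ])
    return R, t
```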
If the depth map is in meters, why are they dividing by 1000? @huckl3b3rry87 @akeaveny @KatharinaSchmidt My meshes are in meters, so I removed the `/ 1000.0`; should I bring it back?
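The `/ 1000.0` exists only to convert millimeter depth (as stored in LineMOD's 16-bit PNGs) into meters so the cloud matches meshes that are in meters; if your depth map is already in meters, dividing again is wrong. Below is a hedged sanity check; the 10.0 threshold is an arbitrary heuristic assuming indoor-scale scenes, not part of DenseFusion:

```python
import numpy as np

def depth_to_meters(depth):
    """Heuristic: indoor depth in meters rarely exceeds ~10, while the
    same scene in millimeters is in the hundreds or thousands."""
    valid = depth[depth > 0].astype(np.float64)
    if valid.size and valid.max() > 10.0:  # values look like millimeters
        return depth / 1000.0
    return depth                           # already in meters
```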