
VTK/GLEW Error when using both habitat-lab and Mayavi


❓ Question

I wrote a script that uses RGB & depth images to reconstruct a 3D map and displays it with Mayavi. However, I ran into an extremely weird bug: after creating a habitat environment, I can no longer display the Mayavi 3D image or capture its screenshot.

To Reproduce

  1. Here is the reconstruction.py script:
import quaternion
import numpy as np
import random
import warnings

from PIL import Image
from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
from matplotlib.axes._axes import _log as matplotlib_axes_logger

import habitat  # needed for habitat.get_config() below
from Env.habitat_dataset.NavRL_env import NavRLEnv
from habitat.config import Config

matplotlib_axes_logger.setLevel('ERROR')
warnings.filterwarnings('ignore')


def example_config():
    task_config = habitat.get_config(
        config_paths='/home/skylark/datasets/habitat/configs/tasks/objectnav_mp3d.yaml')
    task_config.defrost()
    task_config.DATASET.DATA_PATH = '/home/skylark/datasets/habitat/data/datasets/objectnav/mp3d/v1/val/val.json.gz'
    task_config.DATASET.DATA_PATH_DIR = '/home/skylark/datasets/habitat/data/datasets/objectnav/mp3d/v1/val/content'
    task_config.DATASET.SCENES_DIR = '/home/skylark/datasets/habitat/data/scene_datasets'
    task_config.SIMULATOR.AGENT_0.SENSORS = ['RGB_SENSOR', 'DEPTH_SENSOR', 'SEMANTIC_SENSOR']
    task_config.SIMULATOR.TURN_ANGLE = 30
    task_config.freeze()

    env_config = {
        'TASK_CONFIG': task_config,
        'RL': {
            'SUCCESS_REWARD': 2.5,
            'SLACK_REWARD': -1e-3,
            'REWARD_MEASURE': "distance_to_goal",
            'SUCCESS_MEASURE': "spl",
            'EPISODE_LENGTH': 300
        }
    }
    return env_config


class HabitatEnv:
    def __init__(self, config):
        self.config = Config(config)
        self.config.freeze()
        self.scene_path = '/home/skylark/datasets/habitat/data/scene_datasets/mp3d'

        self.eng = NavRLEnv(config=self.config)
        self.eng.reset()



def from_depth_to_xyz(depth, cam2world, hfov=90):
    """[summary]

===>input coordinate (2D)
         
          width(e.g. 640)
         o--------------> y  (second axis)
         |
 height  |
(e.g.480)↓
         x (first axis)

===>output coordinate (3D) (openGL coordinates convention)
    the robot is facing [-z] direction

x <-------o
         /|
        / |
       ↙  |
      z   |
          ↓ y


    Args:
    1. required
        depth (np.ndarray): depth image of shape [480, 640], i.e. [height, width]
        cam2world (np.ndarray): [4, 4] camera-to-world extrinsic matrix

    2. intrinsics
        hfov (int, optional): horizontal field of view in degrees. Defaults to 90.

    Returns:
        np.ndarray: world coordinates of shape [3, height, width]
    """

    height, width = depth.shape

    ## https://github.com/facebookresearch/habitat-lab/issues/474
    ## https://doc.magnum.graphics/magnum/classMagnum_1_1Math_1_1Matrix4.html#a6475b3ef155c9142b890c8133504ae9b
    K = np.array([
        [1 / np.tan(np.radians(hfov / 2.)), 0., 0., 0.],
        [0., (float(width) / height) * (1.0 / np.tan(np.radians(hfov / 2.))), 0., 0.],
        [0., 0., 1, 0],
        [0., 0., 0, 1]])

    ## calc width, height

    # Now get an approximation for the true world coordinates -- see if they make sense
    # [-1, 1] for x and [1, -1] for y as array indexing is y-down while world is y-up
    xs, ys = np.meshgrid(np.linspace(-1, 1, width), np.linspace(1, -1, height))
    depth = depth.reshape(1, height, width)
    xs = xs.reshape(1, height, width)
    ys = ys.reshape(1, height, width)

    # Unproject
    # negate depth as the camera looks along -Z
    xyzs = np.vstack((xs * depth, ys * depth, -depth, np.ones(depth.shape)))
    xyzs = xyzs.reshape(4, -1)
    xyzs_world = np.matmul(np.linalg.inv(K), xyzs)
    xyzs_world = cam2world.dot(xyzs_world)  # (4, height * width)
    xyz_world_coord = xyzs_world[0:3].reshape(3, height, width)  # (3, height, width)
    return xyz_world_coord
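
# Usage note (shapes illustrative): for a [240, 320] depth image and a 4x4
# cam2world matrix, from_depth_to_xyz(depth, cam2world) returns a (3, 240, 320)
# array of world coordinates; callers flatten it to (3, N) points via .reshape(3, -1).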


########### =================== test =======================  #################
import cv2
from mayavi import mlab

class Recontruction_test():
    def __init__(self):
        ## other related
        self.xyz_points = np.empty([3, 0], dtype=float)  # np.float is deprecated
        self.xyz_colors = np.empty([3, 0], dtype=np.uint8)

    def plot_3d(self, rgb, depth, index, cam2world=None, pos_rot=None, ax=None):
        # if ax is None:
        # ax = plt.subplot2grid((1, 1), (0, 0), colspan=1)
        # Remove depth noise outside the sensor's valid range TODO
        depth[depth < 0.15] = 0
        depth[depth >= 4.8] = 0
        if pos_rot is not None:  # (x, z, -y)
            robot_pos, robot_rot = pos_rot
        if cam2world is None:
            # from_pos_rot_to_camera_extrinsic is defined elsewhere in the
            # author's codebase (not shown in this snippet)
            cam2world = from_pos_rot_to_camera_extrinsic(robot_pos, robot_rot)
        # Generate world coordinates for the current depth scan from the
        # intrinsics/extrinsics and the current frame's depth image
        xyzs_world = from_depth_to_xyz(depth, cam2world).reshape(3, -1)

        # Accumulate all points seen so far (3, N)
        self.xyz_points = np.concatenate([self.xyz_points, xyzs_world], axis=1)
        # RGB values corresponding to the accumulated points (3, N)
        self.xyz_colors = np.concatenate([self.xyz_colors, rgb.transpose((2, 0, 1)).reshape(3, -1)], axis=1)
        # Plot a 2D scatter / Voronoi diagram
        # self.plot_2d(rgb, depth, index, cam2world=cam2world)
        rgba = np.concatenate([self.xyz_colors.transpose((1, 0)), 255 * np.ones([self.xyz_colors.shape[1], 1])],
                              axis=1).astype(np.uint8)
        x, y, z = self.xyz_points
        # ax1 = plt.subplot(111, projection='3d')  # create a 3D plot
        # #  plot the data points with distinguishable colors
        # ax1.scatter(x, y, z, c='lightblue')  # plot the data points
        # ax1.set_zlabel('Z')  # axis labels
        # ax1.set_ylabel('Y')
        # ax1.set_xlabel('X')
        # plt.show()
        pts = mlab.pipeline.scalar_scatter(x, y, z)
        pts.add_attribute(rgba, 'colors')  # assign the colors to each point
        pts.data.point_data.set_active_scalars('colors')
        g = mlab.pipeline.glyph(pts)
        # g.glyph.glyph.scale_factor = 0.05 # set scaling for all the points
        g.glyph.glyph.scale_factor = 0.1  # set scaling for all the points
        g.glyph.scale_mode = 'data_scaling_off'  # make all the points same size

        # s = np.ones_like(x) * 0.5
        # import pylab
        # mlab.points3d(x, y, z, s, colormap="copper", scale_factor=.25)
        f = mlab.gcf()
        f.scene._lift()  # force an off-screen render so the screenshot is valid
        arr = mlab.screenshot(figure=g, mode='rgb', antialiased=True)
        img_RGB = Image.fromarray(arr).convert('RGB')
        # ax.imshow(img_RGB)
        # ax.axis('off')
        # ax.set_title('Reconstruction')
        # plt.pause(1)
        # ax.clear()
        # mlab.show()
        return img_RGB

if __name__ == "__main__":
    # For test the display of reconstruction and its screenshot
    import pickle
    from skimage.transform import resize

    env_config = example_config()
    env = HabitatEnv(env_config)

    recontruction = Recontruction_test()

    camera2world0 = np.eye(4)
    # fig = plt.figure(figsize=[10, 8])
    ax = plt.subplot2grid((1, 1), (0, 0), colspan=1)

    for i in range(21, 33):
        rgb = pickle.load(open("/home/skylark/PycharmRemote/ReGReT/Utils/new_data/rgb{}.pkl".format(i), 'rb'))
        depth = pickle.load(open("/home/skylark/PycharmRemote/ReGReT/Utils/new_data/depth{}.pkl".format(i), 'rb'))

        rgb = resize(rgb, (240, 320, 3))
        rgb = np.asarray(rgb) * 255
        depth = resize(depth, (240, 320))

        angle = i * 2 * np.pi / 12.
        # rotate_m is defined elsewhere in the author's codebase (not shown);
        # it presumably builds a 4x4 rotation matrix for the given angle
        camera2world = rotate_m(angle).dot(camera2world0)
        # recontruction.plot_2d(rgb, depth, i, cam2world=camera2world, pos_rot=[[0, -0.08, 0], None], visible_flag=True,
        #                       ax=ax)
        # plt.show()
        img_RGB = recontruction.plot_3d(rgb, depth, i, cam2world=camera2world, pos_rot=[[0, -0.08, 0], None], ax=ax)
        ax.imshow(img_RGB)
        ax.axis('off')
        ax.set_title('Reconstruction')
        plt.pause(0.1)

  2. Here is the NavRL_env.py script:
from typing import Optional, Type

import habitat
from habitat import Config, Dataset
from habitat_baselines.common.baseline_registry import baseline_registry

# for top-down map vis
from habitat.utils.visualizations import maps

def get_env_class(env_name: str) -> Type[habitat.RLEnv]:
    r"""Return environment class based on name.

    Args:
        env_name: name of the environment.

    Returns:
        Type[habitat.RLEnv]: env class.
    """
    return baseline_registry.get_env(env_name)


@baseline_registry.register_env(name="NavRLEnv")
class NavRLEnv(habitat.RLEnv):
    def __init__(self, config: Config, dataset: Optional[Dataset] = None):
        self._rl_config = config.RL
        self._core_env_config = config.TASK_CONFIG
        self._reward_measure_name = self._rl_config.REWARD_MEASURE
        self._success_measure_name = self._rl_config.SUCCESS_MEASURE

        self._previous_measure = None
        self._previous_action = None
        super().__init__(self._core_env_config, dataset)

    def reset(self):
        self._previous_action = None
        observations = super().reset()
        self._previous_measure = self._env.get_metrics()[
            self._reward_measure_name
        ]
        return observations

    def step(self, *args, **kwargs):
        self._previous_action = kwargs
        return super().step(*args, **kwargs)

    def get_reward_range(self):
        return (
            self._rl_config.SLACK_REWARD - 1.0,
            self._rl_config.SUCCESS_REWARD + 1.0,
        )

    def get_reward(self, observations) -> object:
        reward = self._rl_config.SLACK_REWARD

        current_measure = self._env.get_metrics()[self._reward_measure_name]

        reward += self._previous_measure - current_measure
        self._previous_measure = current_measure

        if self._episode_success():
            reward += self._rl_config.SUCCESS_REWARD

        return reward

    def _episode_success(self):
        return self._env.get_metrics()[self._success_measure_name]

    def get_done(self, observations):
        done = False
        if self._env.episode_over or self._episode_success():
            done = True
        return done

    def get_info(self, observations):
        return self.habitat_env.get_metrics()

    # for map vis
    def get_topdown_map(self, square_map_resolution=5000):
        top_down_map = maps.get_topdown_map(
            self._env.sim, 
            map_resolution=(square_map_resolution,square_map_resolution))
        ## 0: unoccupied
        ## 1: occupied
        ## 2: border
        return top_down_map
  3. Run reconstruction.py; the RGB & depth images used for the test in the main function can be downloaded from here.

Error Log

2020-12-19 17:12:03.419 (  35.411s) [         5EBF740]vtkOpenGLRenderWindow.c:569    ERR| vtkXOpenGLRenderWindow (0x555c3b2b3140): GLEW could not be initialized: Unknown error
2020-12-19 17:12:03.419 (  35.412s) [         5EBF740]     vtkOpenGLState.cxx:1380  WARN| Hardware does not support the number of textures defined.
2020-12-19 17:12:03.419 (  35.412s) [         5EBF740]     vtkOpenGLState.cxx:1380  WARN| Hardware does not support the number of textures defined.
2020-12-19 17:12:04.330 (  36.323s) [         5EBF740]     vtkOpenGLState.cxx:1380  WARN| Hardware does not support the number of textures defined.
2020-12-19 17:12:07.734 (  39.727s) [         5EBF740]     vtkOpenGLState.cxx:1380  WARN| Hardware does not support the number of textures defined.
2020-12-19 17:12:12.712 (  44.705s) [         5EBF740]     vtkOpenGLState.cxx:1380  WARN| Hardware does not support the number of textures defined.
2020-12-19 17:12:19.359 (  51.352s) [         5EBF740]     vtkOpenGLState.cxx:1380  WARN| Hardware does not support the number of textures defined.
2020-12-19 17:12:27.739 (  59.732s) [         5EBF740]     vtkOpenGLState.cxx:1380  WARN| Hardware does not support the number of textures defined.
2020-12-19 17:12:37.325 (  69.317s) [         5EBF740]     vtkOpenGLState.cxx:1380  WARN| Hardware does not support the number of textures defined.
2020-12-19 17:12:48.577 (  80.569s) [         5EBF740]     vtkOpenGLState.cxx:1380  WARN| Hardware does not support the number of textures defined.
2020-12-19 17:13:01.507 (  93.499s) [         5EBF740]     vtkOpenGLState.cxx:1380  WARN| Hardware does not support the number of textures defined.
2020-12-19 17:13:16.279 ( 108.271s) [         5EBF740]     vtkOpenGLState.cxx:1380  WARN| Hardware does not support the number of textures defined.
2020-12-19 17:13:32.934 ( 124.926s) [         5EBF740]     vtkOpenGLState.cxx:1380  WARN| Hardware does not support the number of textures defined.
2020-12-19 17:13:51.069 ( 143.062s) [         5EBF740]     vtkOpenGLState.cxx:1380  WARN| Hardware does not support the number of textures defined.
2020-12-19 17:14:11.383 ( 163.376s) [         5EBF740]     vtkOpenGLState.cxx:1380  WARN| Hardware does not support the number of textures defined.
I1219 17:14:13.038286 19251 PhysicsManager.cpp:33] Deconstructing PhysicsManager
I1219 17:14:13.038314 19251 SemanticScene.h:41] Deconstructing SemanticScene
I1219 17:14:13.041451 19251 SceneManager.h:25] Deconstructing SceneManager
I1219 17:14:13.041463 19251 SceneGraph.h:26] Deconstructing SceneGraph
I1219 17:14:13.042546 19251 Sensor.h:81] Deconstructing Sensor
I1219 17:14:13.042873 19251 Sensor.h:81] Deconstructing Sensor
I1219 17:14:13.043201 19251 Sensor.h:81] Deconstructing Sensor
I1219 17:14:13.043211 19251 SceneGraph.h:26] Deconstructing SceneGraph
I1219 17:14:13.055938 19251 Renderer.cpp:34] Deconstructing Renderer
I1219 17:14:13.055974 19251 WindowlessContext.h:17] Deconstructing WindowlessContext
I1219 17:14:14.646723 19251 Simulator.cpp:46] Deconstructing Simulator

Process finished with exit code 0

Here is the wrong display:

[screenshot of the broken Mayavi output]

Expected behavior

If I delete this line in the main function of reconstruction.py:

env = HabitatEnv(env_config)

then I can get the Mayavi image and its screenshot correctly (which suggests that creating the habitat environment's OpenGL context is what breaks VTK/GLEW):

[screenshot of the correct Mayavi reconstruction]

Thanks a lot for your help!!! I’m getting desperate!


Top GitHub Comments

erikwijmans commented, Dec 19, 2020

We’d need to add something to habitat to allow it to piggyback off Mayavi’s context to keep both in the same process (assuming Mayavi is written correctly, of course). You can work around this by using habitat.VectorEnv (https://github.com/facebookresearch/habitat-lab/blob/master/habitat/core/vector_env.py#L78) with a single env, as that will run the habitat instance in a separate process.
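
For reference, a minimal sketch of that workaround, assuming the NavRLEnv class and example_config() from the scripts above; the exact VectorEnv constructor arguments can vary between habitat-lab versions, so treat this as illustrative rather than definitive:

import habitat
from habitat.config import Config

def make_env_fn(config, rank):
    # Runs inside a worker process, so habitat-sim's OpenGL context never
    # touches the main process where Mayavi/VTK creates its own context.
    return NavRLEnv(config=config)

cfg = Config(example_config())
cfg.freeze()

# A single worker env is enough to move the simulator out of process.
envs = habitat.VectorEnv(
    make_env_fn=make_env_fn,
    env_fn_args=[(cfg, 0)],
)
observations = envs.reset()  # a list with one entry: the worker env's observations
# ... step the env and render with Mayavi in the main process as before ...
envs.close()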

Skylark0924 commented, Jan 16, 2021

Sorry for the late reply. I ended up using a plot3d function instead of Mayavi. The solution you provided also looks valuable for accessing properties of one of the Env instances inside a VectorEnv worker. Thanks a lot, and let’s close this issue!
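
That matplotlib-style fallback is essentially the commented-out code inside plot_3d above; a self-contained sketch (the original poster's final version isn't shown, so the function name here is illustrative):

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the '3d' projection on older matplotlib)

def show_point_cloud(xyz_points, xyz_colors):
    # xyz_points: (3, N) world coordinates; xyz_colors: (3, N) uint8 RGB values
    x, y, z = xyz_points
    ax = plt.subplot(111, projection='3d')
    ax.scatter(x, y, z, c=xyz_colors.T / 255.0, s=0.5)
    ax.set_xlabel('X')
    ax.set_ylabel('Y')
    ax.set_zlabel('Z')
    plt.show()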


