How to use pre-trained PyTorch models?
❓ Questions and Help
Following the tutorials, I downloaded the pre-trained PyTorch models. But where should I unzip them, and how do I specify the path in code?
First, I unzipped them to `/home/u/Desktop/habitat-lab/habitat_baselines/habitat_baselines_v1(1)`.
Second, in `/home/u/Desktop/habitat-lab/habitat_baselines/agents/ppo_agents.py` I changed `deterministic=False` to `deterministic="/home/u/Desktop/habitat-lab/habitat_baselines/habitat_baselines_v1(1)/rgbd.pth"`.
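(Note: in the baselines code, `deterministic` is a boolean that only controls whether the policy takes the argmax action instead of sampling, so assigning a `.pth` path to it does not load any weights. A minimal sketch of pointing the baseline agent at the downloaded checkpoint instead, assuming the `get_default_config`/`MODEL_PATH` interface in `ppo_agents.py`:)

```python
from habitat_baselines.agents.ppo_agents import PPOAgent, get_default_config

# Sketch only: field names assumed from habitat_baselines/agents/ppo_agents.py.
agent_config = get_default_config()
agent_config.INPUT_TYPE = "rgbd"  # matches the rgbd.pth checkpoint
agent_config.MODEL_PATH = (
    "/home/u/Desktop/habitat-lab/habitat_baselines/"
    "habitat_baselines_v1(1)/rgbd.pth"
)
agent = PPOAgent(agent_config)  # loads the weights internally via torch.load
```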
Then I ran the command:

```
(habitat) root@c:~/Desktop/habitat-lab# python -u habitat_baselines/run.py --exp-config habitat_baselines/config/pointnav/ppo_pointnav_example.yaml --run-type eval
```
I get the error:
```
WARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode.
2020-10-26 15:53:53,103 config:
BASE_TASK_CONFIG_PATH: configs/tasks/pointnav.yaml
CHECKPOINT_FOLDER: data/new_checkpoints
EVAL:
  SPLIT: val
  USE_CKPT_CONFIG: True
EVAL_CKPT_PATH_DIR: data/new_checkpoints
[... full config dump and numpy/tensorflow FutureWarnings trimmed ...]
Traceback (most recent call last):
  File "habitat_baselines/run.py", line 79, in <module>
    main()
  File "habitat_baselines/run.py", line 40, in main
    run_exp(**vars(args))
  File "habitat_baselines/run.py", line 75, in run_exp
    execute_exp(config, run_type)
  File "habitat_baselines/run.py", line 60, in execute_exp
    trainer.eval()
  File "/home/u/Desktop/habitat-lab/habitat_baselines/common/base_trainer.py", line 103, in eval
    self.config.EVAL_CKPT_PATH_DIR, prev_ckpt_ind
  File "/home/u/Desktop/habitat-lab/habitat_baselines/utils/common.py", line 123, in poll_checkpoint_folder
    f"invalid checkpoint folder " f"path {checkpoint_folder}"
AssertionError: invalid checkpoint folder path data/new_checkpoints
```
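(The assertion comes from `poll_checkpoint_folder`, which requires the `EVAL_CKPT_PATH_DIR` directory, `data/new_checkpoints` by default, to exist. A minimal sketch of satisfying it with the downloaded weights, reusing the paths from the question above:)

```python
import os
import shutil

# Paths taken from the question above.
ckpt_src = (
    "/home/u/Desktop/habitat-lab/habitat_baselines/"
    "habitat_baselines_v1(1)/rgbd.pth"
)
ckpt_dir = "data/new_checkpoints"  # EVAL_CKPT_PATH_DIR from the config dump

os.makedirs(ckpt_dir, exist_ok=True)
shutil.copy(ckpt_src, ckpt_dir)  # eval will then poll this checkpoint
```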
- Created: 3 years ago
- Comments: 33
Top GitHub Comments
Hi @KenaHemnani, I am not a developer but I have some experience with Habitat. I ran into your error too. The issue is that newer checkpoint files also contain the config they were trained with, while older ones (including the PPO checkpoints) do not. As a workaround, you can fall back to the config specified by your YAML. The code in the PPO trainer that throws your error is:
```python
if self.config.EVAL.USE_CKPT_CONFIG:
    config = self._setup_eval_config(ckpt_dict["config"])
else:
    config = self.config.clone()
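```

You can check whether a given checkpoint carries its config by loading it directly (a quick sketch; the path is hypothetical):

```python
import torch

# Old PPO checkpoints typically contain only weights; newer ones also
# store the training config under a "config" key.
ckpt_dict = torch.load("data/new_checkpoints/rgbd.pth", map_location="cpu")
print(ckpt_dict.keys())
```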
It tries to read the config dict from the checkpoint but doesn't find it. Instead, you can do this:

```python
try:
    config = trainer._setup_eval_config(ckpt_dict["config"])
except KeyError:
    # Old checkpoints have no "config" entry; fall back to the YAML config.
    config = trainer.config.clone()
```
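Or force evaluation to ignore the checkpoint config entirely. A sketch, assuming the standard `get_config` helper and the `EVAL.USE_CKPT_CONFIG` flag visible in the log above:

```python
from habitat_baselines.config.default import get_config

# With USE_CKPT_CONFIG off, eval always uses the YAML config,
# i.e. the "clone the config by default" option mentioned below.
config = get_config(
    "habitat_baselines/config/pointnav/ppo_pointnav_example.yaml",
    ["EVAL.USE_CKPT_CONFIG", False],
)
```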
or just clone the config by default, as in the sketch above. Edit, to preempt further issues: Hi @Sunnyzhr, I am not a developer but I have some experience with Habitat. I am not sure whether you have already solved your issues, but I will try to go through them: