
Exploration to PointGoal Transfer


Hi @devendrachaplot, thank you for open-sourcing your work! I enjoyed reading your paper “Learning To Explore Using Active Neural SLAM”. The experiment setup and usage instructions are concise and easy to follow. I was able to run the code without any issues (except the import errors fixed in https://github.com/devendrachaplot/Neural-SLAM/issues/1). The environment exploration visualizations generated by the pre-trained models look amazing!

I’m struggling with the Exploration to PointGoal transfer. I understand the idea that the GlobalPolicy should be replaced by a fixed output of the PointGoal coordinates, but I am failing to implement this idea in code.

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 5 (2 by maintainers)

Top GitHub Comments

1 reaction
devendrachaplot commented, Aug 26, 2020

The hacky/easy way of getting rid of this error is to just wait until max_episode_length is reached in the step function:

You can add the following at L198 in env/habitat/exploration_env.py:

if self._previous_action == 0:
    # The agent has already taken the stop action (0): keep returning a null
    # observation until max_episode_length is reached, then mark the episode done.
    if self.timestep >= args.max_episode_length:
        done = True
    else:
        done = False
    null_state = np.zeros((3, self.args.frame_height, self.args.frame_width))
    return null_state, 0., done, self.info

The better way, which will require a lot more code changes, is to set done=True when self._previous_action == 0 in the step function, then check in main.py (at L371) which thread's episode has finished, and reset the map for that thread when it finishes. You will also need to sync this with the local steps (because the map gets shifted between global steps). This should not change the results; it will only help in reducing the runtime.
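For reference, here is a rough sketch of that per-thread reset logic; the names num_scenes, done, and init_map_and_pose_for_env are assumptions about how main.py could be structured, not the repository's actual code:

# Hedged sketch only: assumes `done` holds the per-thread done flags returned by
# the vectorized environments, and that a hypothetical helper
# init_map_and_pose_for_env(e) re-initializes the full/local map and pose of a
# single thread.
for e in range(num_scenes):
    if done[e]:
        # Reset only this thread's map and pose so the other threads keep
        # rolling out their episodes; keep the reset aligned with the local
        # step counter, since the local map window shifts at global steps.
        init_map_and_pose_for_env(e)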

Some more notes:

  • If you want to run the PointGoal task with GPS+Compass (without pose noise), you need to run main.py with the --noisy_actions 0, --noisy_odometry 0, and --use_pose_estimation 0 arguments.
  • The pose estimator is trained with our pose noise models, which are based on real-world data; it is not expected to work with the pose noise models currently implemented in Habitat.
  • To get the best results during evaluation, run with --map_size_cm 3000 and --global_downscaling 1 arguments. This will increase the runtime slightly.

1 reaction
devendrachaplot commented, Aug 22, 2020

Hi,

In order to transfer to the pointgoal task, you need to transform the relative pointgoal direction into map coordinates, pass these coordinates to the planner, and add a check for the stop action.

You will need to do the following:

  • Add the following to the reset function in env/habitat/exploration_env.py:
# Convert the relative pointgoal (distance in meters, angle in radians) into
# map cell offsets; the factor of 20 is 100 cm/m divided by the default 5 cm
# map resolution.
dist, angle = obs["pointgoal"]
x = int(dist * np.cos(angle) * 20.0)
y = int(dist * np.sin(angle) * 20.0)
# The agent starts at the center of the map, so offset the goal from there.
self.pg_loc = [self.map_size_cm // 2 // args.map_resolution + y,
               self.map_size_cm // 2 // args.map_resolution + x]
self.stop_next_action = 0
  • Delete L471 in exploration_env.py (goal = inputs['goal']) and replace it with:
# Use the pointgoal location (in local map coordinates) as the planner goal.
goal = [self.pg_loc[0] - gx1, self.pg_loc[1] - gy1]
# get_l2_distance returns a distance in map cells; multiplying by the 5 cm cell
# size gives cm, so flag the stop action once the agent is within 25 cm.
if pu.get_l2_distance(start[0], goal[0], start[1], goal[1]) * 5 < 25:
    self.stop_next_action = 1
  • Add a check to take the stop action in the step function after L208 in exploration_env.py:
if self.stop_next_action == 1:
    action = 0  # action 0 is the stop action

I might be missing a few things, let me know if the above does not work as expected.

Regarding the other questions: ‘st’ stands for spatial transformation. The ‘get_grid’ and ‘F.affine_grid’ functions are used to compute the sampling grid for the spatial transformation. You can check out the paper on Spatial Transformer Networks (https://arxiv.org/pdf/1506.02025.pdf) and the PyTorch tutorial (https://pytorch.org/tutorials/intermediate/spatial_transformer_tutorial.html) for more details.
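For a quick, self-contained illustration of that mechanism (this is not the repository's get_grid code; the tensor shapes and rotation angle are made up for the example), the following PyTorch snippet builds an affine grid for a rotation and resamples a map with it:

import torch
import torch.nn.functional as F

# Build a batch of 2x3 affine matrices for a rotation by `angle` radians.
angle = torch.tensor(0.3)  # arbitrary example angle
theta = torch.stack([
    torch.stack([torch.cos(angle), -torch.sin(angle), torch.tensor(0.)]),
    torch.stack([torch.sin(angle),  torch.cos(angle), torch.tensor(0.)]),
]).unsqueeze(0)            # shape (N=1, 2, 3)

maps = torch.rand(1, 2, 240, 240)  # dummy (N, C, H, W) map tensor
grid = F.affine_grid(theta, maps.size(), align_corners=False)  # sampling grid
rotated = F.grid_sample(maps, grid, align_corners=False)       # transformed map
print(rotated.shape)  # torch.Size([1, 2, 240, 240])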

The ‘get_new_pose_batch’ function adds the relative pose change to the previous pose to get the current pose. It computes the current pose for a batch of inputs for efficiency.
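A minimal sketch of that kind of pose composition, assuming each pose is (x, y, orientation in radians) and the relative change (dx, dy, do) is expressed in the previous pose's frame (the repository's exact conventions, e.g. degrees vs. radians, may differ):

import numpy as np

def compose_pose(pose, rel_pose_change):
    # pose: (x, y, o) in the global frame; rel_pose_change: (dx, dy, do) in the
    # frame of the previous pose.
    x, y, o = pose
    dx, dy, do = rel_pose_change
    # Rotate the local displacement into the global frame, then translate.
    x_new = x + dx * np.cos(o) - dy * np.sin(o)
    y_new = y + dx * np.sin(o) + dy * np.cos(o)
    # Accumulate the heading and wrap it back into (-pi, pi].
    o_new = np.arctan2(np.sin(o + do), np.cos(o + do))
    return x_new, y_new, o_new

# Example: moving 1 m forward while facing 90 degrees lands at roughly (0, 1).
print(compose_pose((0.0, 0.0, np.pi / 2), (1.0, 0.0, 0.0)))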

The ‘get_sim_location’ function converts the quaternion pose from the simulator into a single orientation of the agent in the top-down view.
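As a simplified illustration of that conversion (assuming a y-up coordinate system and a rotation that is purely about the vertical axis, which is a simplification of what the repository actually handles):

import numpy as np

def heading_from_quaternion(w, x, y, z):
    # For a rotation purely about the vertical y-axis (y-up convention), the
    # quaternion is (cos(theta/2), 0, sin(theta/2), 0), so the heading is:
    theta = 2.0 * np.arctan2(y, w)
    # Wrap to (-pi, pi] for a consistent top-down orientation.
    return np.arctan2(np.sin(theta), np.cos(theta))

# Example: a 90-degree turn about the vertical axis prints ~90.0.
print(np.degrees(heading_from_quaternion(np.cos(np.pi / 4), 0.0, np.sin(np.pi / 4), 0.0)))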

Hope this helps.

