
[Question] preload replay buffer

See original GitHub issue

Hi, I am using SAC to control the motion of a ship. My understanding is that I can use results from another controller, e.g. a PID controller, to help SAC learn faster.

The way I implemented this is as follows:

# lr, buffer_size, batch_size, Nsteps, RLsets, random_seed and NetLayers
# come from the rest of my setup (not shown here)
import torch
from stable_baselines3 import SAC
from stable_baselines3.sac import MlpPolicy

model = SAC(
    MlpPolicy,
    env,
    verbose=1,
    learning_rate=lr,
    buffer_size=buffer_size,
    batch_size=batch_size,
    gradient_steps=1,
    learning_starts=int(Nsteps * RLsets["exploration_episodes"]),
    tau=1.0e-2,
    gamma=0.99,
    seed=random_seed,
    use_sde=True,
    use_sde_at_warmup=True,
    policy_kwargs={"net_arch": NetLayers,
                   "activation_fn": torch.nn.ReLU,
                   "log_std_init": -1.0},
)

# get results of the PID controller and write them directly into the
# replay buffer's storage arrays
motions, actions = get_from_simulation(sim_PID)
rewards = compute_reward(motions)
for step in range(len(motions)):
    model.replay_buffer.observations[step] = motions[step]
    model.replay_buffer.actions[step] = actions[step]
    model.replay_buffer.rewards[step] = rewards[step]

model.learn(total_timesteps=int(time_steps), log_interval=10, callback=callback)

I was wondering if this is the correct way, or if there is a better way to achieve this?
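
For reference, a possibly cleaner route (not from the original issue) is to go through the buffer's add() method, which also fills in next observations and done flags and updates the buffer's internal pos/full counters, none of which the direct array writes above take care of. A minimal sketch, assuming a recent stable-baselines3 release (the exact add() signature differs between versions) and a hypothetical pid_transitions list of (obs, next_obs, action, reward, done) tuples built from the PID rollout:

import numpy as np

# pid_transitions is hypothetical, e.g. built from the PID rollout as
# list(zip(motions[:-1], motions[1:], actions, rewards, dones))
for obs, next_obs, action, reward, done in pid_transitions:
    model.replay_buffer.add(
        np.array(obs),       # observation at step t
        np.array(next_obs),  # observation at step t + 1
        np.array(action),
        np.array(reward),
        np.array(done),
        infos=[{}],          # newer SB3 versions also expect an infos list
    )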

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments: 6 (1 by maintainers)

Top GitHub Comments

1 reaction
araffin commented, Jan 5, 2021

To complement the answer, you may also want to take a look at the “Offline RL” literature, i.e. learning from a fixed dataset (which may come from an expert controller).

For instance, in the AWAC paper, they show that pre-training with behavior cloning (i.e. training on expert trajectories first and then continuing training) does not necessarily help.

A good overview of the challenges of using off-policy algorithms like SAC with such fixed data can be found in the BCQ paper.

To get started: https://github.com/takuseno/d3rlpy/ (there is an SB3 wrapper in that repo)
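
For completeness, a minimal sketch of that offline-RL route (not part of the original comment), assuming d3rlpy's pre-2.0 API (class and method names differ across d3rlpy versions) and hypothetical NumPy arrays collected from the PID rollout:

import numpy as np
import d3rlpy

# Hypothetical arrays from the PID rollout:
#   observations: (N, obs_dim), actions: (N, act_dim),
#   rewards: (N,), terminals: (N,) with 1.0 at episode ends
dataset = d3rlpy.dataset.MDPDataset(observations, actions, rewards, terminals)

# train an offline algorithm such as CQL on the fixed dataset
cql = d3rlpy.algos.CQL()
cql.fit(dataset, n_epochs=10)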

1 reaction
RafieeAshkan commented, Jan 5, 2021

Many thanks for the very helpful comments.

Read more comments on GitHub >
