[Question] preload replay buffer
Hi, I am using SAC for controlling the motion of a ship. My understanding is that I can use results from another controller, e.g. a PID controller, to help SAC learn faster.
The way I implemented this is as follows:
model = SAC(
    MlpPolicy,
    env,
    verbose=1,
    learning_rate=lr,
    buffer_size=buffer_size,
    batch_size=batch_size,
    gradient_steps=1,
    learning_starts=int(Nsteps * RLsets["exploration_episodes"]),
    tau=1.0e-2,
    gamma=0.99,
    seed=random_seed,
    use_sde=True,
    use_sde_at_warmup=True,
    policy_kwargs={
        "net_arch": NetLayers,
        "activation_fn": torch.nn.ReLU,
        "log_std_init": -1.0,
    },
)
# get results of the PID controller
motions, actions = get_from_simulation(sim_PID)
rewards = compute_reward(motions)
for step in range(len(motions)):
    model.replay_buffer.observations[step] = motions[step]
    model.replay_buffer.actions[step] = actions[step]
    model.replay_buffer.rewards[step] = rewards[step]

model.learn(total_timesteps=int(time_steps), log_interval=10, callback=callback)
I was wondering if this is the correct way or there is a better way to achieve this?
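One issue with writing directly into the buffer's arrays is that each transition also needs a next observation and a done flag, and the buffer's internal write position is never advanced, so the agent will overwrite the preloaded data. In recent stable-baselines3 versions the buffer exposes an `add()` method (roughly `model.replay_buffer.add(obs, next_obs, action, reward, done, infos)`) that handles this bookkeeping. The toy buffer below is a sketch of that pattern, assuming list-like trajectory data; `TinyReplayBuffer`, `preload`, and all values are illustrative, not part of the SB3 API.

```python
class TinyReplayBuffer:
    """Toy circular buffer storing (obs, next_obs, action, reward, done)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.storage = [None] * capacity
        self.pos = 0        # next write index
        self.full = False   # True once the buffer has wrapped around

    def add(self, obs, next_obs, action, reward, done):
        # Store one full transition and advance the write position,
        # mirroring what SB3's ReplayBuffer.add() maintains internally.
        self.storage[self.pos] = (obs, next_obs, action, reward, done)
        self.pos = (self.pos + 1) % self.capacity
        if self.pos == 0:
            self.full = True

    def size(self):
        return self.capacity if self.full else self.pos


def preload(buffer, observations, actions, rewards):
    # A trajectory of N observations yields N - 1 transitions:
    # (obs[t], obs[t+1], action[t], reward[t], done=False).
    for t in range(len(observations) - 1):
        buffer.add(observations[t], observations[t + 1],
                   actions[t], rewards[t], done=False)


buf = TinyReplayBuffer(capacity=100)
preload(buf,
        observations=[0.0, 0.1, 0.2, 0.3],
        actions=[1, 1, -1],
        rewards=[0.5, 0.6, 0.7])
print(buf.size())  # -> 3
```

With the real SB3 buffer you would call `model.replay_buffer.add(...)` per transition in the same loop shape, instead of assigning into its arrays.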
Issue Analytics
- State:
- Created 3 years ago
- Comments:6 (1 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
To complement the answer, you may also take a look at the "Offline RL" literature, i.e. learning from a fixed dataset (which may come from an expert controller).
For instance, in the AWAC paper, they show that pre-training with behavior cloning (i.e. using expert trajectories and then continuing training) does not necessarily help.
A good overview of the challenges of running off-policy algorithms like SAC on such fixed data can be found in the BCQ paper.
To get started: https://github.com/takuseno/d3rlpy/ (there is a SB3 wrapper in that repo)
Many thanks for the very helpful comments.