
[Question] Justifying advantage normalization for PPO


Question

For PPO, I understand that advantage normalization (for each batch of experiences) is more or less standard practice. I’ve seen other implementations do it, too. However, I find it a little unjustified, and here’s why.

If we are using GAE, then each advantage is a weighted sum of a whole bunch of TD deltas: delta = r + gamma * V(s') - V(s). Suppose most of these deltas are positive (which is not an unreasonable assumption, especially when training is going well, i.e., when the action taken is increasingly better than the “average action”). Then advantages for earlier transitions would be higher than those for later transitions, simply because towards the end of an episode there are fewer TD deltas to sum.
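For concreteness, here is a minimal NumPy sketch of GAE over a single uninterrupted rollout (my own illustration, not the SB3 code; the function name and arguments are made up). With all-positive deltas and no bootstrap value at the end, the advantages shrink towards the end of the rollout simply because fewer deltas get summed:

```python
import numpy as np

def gae(rewards, values, last_value, gamma=0.99, lam=0.95):
    """Plain GAE for one uninterrupted rollout.

    values[t] = V(s_t) for t = 0..T-1; last_value stands in for V(s_T)
    (set it to 0.0 to model "no bootstrap at the final step").
    """
    T = len(rewards)
    values = np.append(values, last_value)
    # TD deltas: delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
    deltas = rewards + gamma * values[1:] - values[:-1]
    advantages = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        # A_t = sum_k (gamma * lam)^k * delta_{t+k}
        running = deltas[t] + gamma * lam * running
        advantages[t] = running
    return advantages

# Toy case: every delta is positive and there is no bootstrap at the end.
rewards = np.ones(10)
values = np.zeros(10)
print(gae(rewards, values, last_value=0.0))
# Monotonically decreasing: early steps sum more positive deltas than late steps.
```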

In this case, normalizing advantages (which involves subtracting the mean) would give early transitions positive advantages and later transitions negative advantages, which might hurt performance and doesn’t make sense intuitively. Also, the gist of policy-gradient algorithms is that we should encourage an action with a positive advantage whenever we can; arguments like “give the model something to encourage and something to discourage in every batch of updates” are not convincing enough.

Are there stronger justifications (e.g., papers) for why advantage normalization should be used by default in SB3? Has anyone investigated the practical differences?

A more sound alternative seems to be dividing by the max or std, without subtracting the mean.
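To spell out the two options being compared, here is a rough sketch (the function names are mine; the first variant is what per-batch advantage normalization usually looks like, the second is the scale-only alternative):

```python
import numpy as np

def normalize_full(adv, eps=1e-8):
    # Typical per-batch normalization: zero mean, unit standard deviation.
    return (adv - adv.mean()) / (adv.std() + eps)

def normalize_scale_only(adv, eps=1e-8):
    # Alternative discussed above: rescale without shifting, so signs are preserved.
    return adv / (adv.std() + eps)

adv = np.array([3.0, 2.5, 2.0, 1.5, 1.0])  # all positive, like the toy GAE output above
print(normalize_full(adv))        # early steps stay positive, late steps are pushed negative
print(normalize_scale_only(adv))  # every advantage stays positive, only the scale changes
```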

Thanks!

Context

I’ve checked this issue but it doesn’t resolve my confusion (it’s not even closed lol):

Checklist

  • I have read the documentation (required)
  • I have checked that there is no similar issue in the repo (required)

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Reactions: 1
  • Comments: 6 (2 by maintainers)

Top GitHub Comments

4 reactions
zhihanyang2022 commented, Jul 14, 2021

Sorry for the delay.

@araffin Yes, what I said indeed does not happen when you bootstrap correctly at the final step (I checked the code in stable-baselines3 again, which does exactly this).

But the problem persists when people don’t bootstrap at the final step (in continuous-control envs; in episodic envs, of course, no bootstrap is needed when the task ends gracefully). This happens when people use the one-sample return in place of the advantage. To my knowledge, this is how most people implement their first policy-gradient project (with, e.g., CartPole), but it still works.

In response to this, maybe a plot would help, but I think it’s quite self-evident. Let me know what you think!
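To make the comparison concrete without a plot, here is a rough sketch of the two return targets (illustrative only, not the stable-baselines3 code; function names are made up):

```python
import numpy as np

def returns_no_bootstrap(rewards, gamma=0.99):
    # "One-sample" reward-to-go: the last steps of a truncated rollout sum very few terms.
    G = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        G[t] = running
    return G

def returns_with_bootstrap(rewards, last_value, gamma=0.99):
    # Bootstrapped target: V(s_T) stands in for the missing tail of the episode.
    G = np.zeros(len(rewards))
    running = last_value
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        G[t] = running
    return G

rewards = np.ones(5)
print(returns_no_bootstrap(rewards))                     # last entry is just the final reward
print(returns_with_bootstrap(rewards, last_value=20.0))  # the value estimate fills in the tail
```

With bootstrapping, the late-step targets don’t collapse towards a single reward, which is why the dwindling-sum effect I described goes away.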

@Miffyli Regarding the empirical study you mentioned, I think it’s great. Here’s a more mathy justification for normalizing advantages (from CS285 lecture 6, slide “Critics as state-dependent baselines”, for those who are interested):

  • The slides show that subtracting an action-independent baseline from each advantage does not change the expectation of the policy gradient (and if the gradient is already biased due to bootstrapping, I think it won’t make it more biased); a short derivation follows after this list. This means that subtracting the mean of all advantages (which is not a function of the action) is okay to do. In general, doing this with one-sample returns can reduce variance and improve performance. But since the advantages may already be roughly zero-centered (you have already subtracted the estimated state value), de-meaning again may not help as much.
  • Then all that’s left is scaling down by the standard deviation, which doesn’t change the sign of the advantages from the previous step. This is easier to understand: keeping the magnitude of the advantages roughly constant throughout training may make training more robust to hyperparameters like the learning rate.
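For completeness, here is the standard one-line argument behind the first bullet (the generic baseline identity from the slides, nothing SB3-specific):

```latex
% For any baseline b(s) that does not depend on the action a:
\mathbb{E}_{a \sim \pi_\theta(\cdot \mid s)}\!\left[ \nabla_\theta \log \pi_\theta(a \mid s)\, b(s) \right]
  = b(s) \int \pi_\theta(a \mid s)\, \nabla_\theta \log \pi_\theta(a \mid s)\, da
  = b(s)\, \nabla_\theta \int \pi_\theta(a \mid s)\, da
  = b(s)\, \nabla_\theta 1
  = 0 .
```

Since this expectation is zero for any action-independent baseline, subtracting such a baseline shifts the advantages without changing the expected gradient, and dividing by the standard deviation afterwards only rescales it.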

Hope it helps!

2 reactions
zhihanyang2022 commented, Jun 23, 2021

I’m digging into this a bit, so let’s keep this issue open and I will post what I find for future reference.


