
[TRPO] Is assign_old_eq_new() called at the right place?


Hi, I wonder whether it is correct to sync the policy weights before running conjugate gradient. I printed some intermediate results and found that the KL divergence is zero and the importance weights are all 1.

        assign_old_eq_new() # set old parameter values to new parameter values
        with timed("computegrad"):
            *lossbefore, g = compute_lossandgrad(*args)
        lossbefore = allmean(np.array(lossbefore))
        g = allmean(g)
        if np.allclose(g, 0):
            logger.log("Got zero gradient. not updating")
        else:
            with timed("cg"):
                stepdir = cg(fisher_vector_product, g, cg_iters=cg_iters, verbose=rank==0)

********** Iteration 3 ************
sampling
done in 2.363 seconds
computegrad
2018-05-29 07:00:28.264123: I tensorflow/core/kernels/logging_ops.cc:79] pi.pd[[-0.0392541401 -0.0208184626 0.00705067441]…]
2018-05-29 07:00:28.264390: I tensorflow/core/kernels/logging_ops.cc:79] oldpi.pd[[-0.0392541401 -0.0208184626 0.00705067441]…]
2018-05-29 07:00:28.274326: I tensorflow/core/kernels/logging_ops.cc:79] ratio[1 1 1…]
2018-05-29 07:00:28.275686: I tensorflow/core/kernels/logging_ops.cc:79] ---kl---[0 0 0…]
done in 0.038 seconds

If this is correct, please enlighten me, thank you!
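For reference, here is a small numeric sanity check (not the baselines code; the means, stds, and actions below are made-up values) showing that when oldpi and pi share identical parameters, the importance ratio is exactly 1 and the per-dimension KL is exactly 0, matching the printed tensors above:

    import numpy as np

    # Diagonal-Gaussian log-density and KL, the same quantities the TRPO loss uses.
    def gaussian_logp(x, mean, std):
        return -0.5 * ((x - mean) / std) ** 2 - np.log(std) - 0.5 * np.log(2.0 * np.pi)

    def gaussian_kl(mean0, std0, mean1, std1):
        return np.log(std1 / std0) + (std0 ** 2 + (mean0 - mean1) ** 2) / (2.0 * std1 ** 2) - 0.5

    mean = np.array([-0.039, -0.021, 0.007])   # hypothetical action means
    std = np.array([0.4, 0.4, 0.4])            # hypothetical action stds
    actions = np.random.randn(3) * std + mean  # sample some actions from the policy

    # Identical old/new parameters: the log-prob difference is 0, so the ratio is exp(0) = 1.
    ratio = np.exp(gaussian_logp(actions, mean, std) - gaussian_logp(actions, mean, std))
    print(ratio)                                # [1. 1. 1.]  -> ratio[1 1 1...]
    print(gaussian_kl(mean, std, mean, std))    # [0. 0. 0.]  -> ---kl---[0 0 0...]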

Issue Analytics

  • State: closed
  • Created 5 years ago
  • Comments: 5

Top GitHub Comments

1 reaction
zwfcrazy commented, Jun 12, 2018

@pzhokhov thanks for the explanation. My thinking is that if you update old parameters <- new parameters before running conjugate gradient, the KL divergence used by the fisher vector product inside conjugate gradient is always zero. Thus, the KL divergence seems to have no effect on the conjugate gradient step. Then again, perhaps I have some misunderstanding of conjugate gradient or of the code; I will look into it further.
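A minimal numeric sketch of where the misunderstanding may lie (a 1-D Gaussian policy with a fixed std; all values are made up, and this is not the baselines code): right after assign_old_eq_new() the KL and its gradient are indeed zero, but the second derivative of the KL, which is the curvature that fisher_vector_product feeds into CG, is not.

    import numpy as np

    sigma = 0.5        # hypothetical fixed policy std
    theta_old = 0.3    # hypothetical old mean parameter

    def kl(theta_new):
        # KL( N(theta_old, sigma) || N(theta_new, sigma) ) for a 1-D Gaussian
        return (theta_new - theta_old) ** 2 / (2.0 * sigma ** 2)

    eps = 1e-4
    grad = (kl(theta_old + eps) - kl(theta_old - eps)) / (2.0 * eps)
    hess = (kl(theta_old + eps) - 2.0 * kl(theta_old) + kl(theta_old - eps)) / eps ** 2

    print(kl(theta_old))   # 0.0   -> the "kl = 0" seen in the logs
    print(grad)            # 0.0   -> gradient of KL also vanishes at old == new
    print(hess)            # ~4.0  -> 1 / sigma**2, the nonzero Fisher curvature CG uses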

0 reactions
pzhokhov commented, Jun 8, 2018

Deferring to @joschu for an expert explanation, but from my understanding the placement is correct, based on the following:

  1. we compute the step direction via conjugate gradient using the old parameters (thbefore)
  2. we try out different (geometrically decreasing) step sizes until we find new parameters (thnew) that satisfy the KL constraint
  3. we update old parameters <- new parameters

Note that because steps 1) - 3) happen in a loop, we are justified in doing a circular shift and moving 3) before 1), which is what is done in the code.

Closing this for now; please reopen if you feel further explanation is needed.
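The circular-shift argument can be sketched in a few lines; conjugate_gradient_step and line_search below are hypothetical stand-ins for cg(fisher_vector_product, g, ...) and the backtracking search in the real code, kept only to show the loop structure:

    import numpy as np

    def conjugate_gradient_step(theta_old):
        # stand-in for cg(fisher_vector_product, g): returns some fixed direction
        return np.ones_like(theta_old) * 0.1

    def line_search(theta_old, stepdir):
        # stand-in for the backtracking search that enforces the KL constraint
        return theta_old + stepdir

    theta_new = np.zeros(3)
    for it in range(3):
        theta_old = theta_new.copy()                  # 3) old <- new, rotated to the front
        stepdir = conjugate_gradient_step(theta_old)  # 1) CG around the (freshly synced) old params
        theta_new = line_search(theta_old, stepdir)   # 2) accept new params within the trust region
        print(it, theta_old, theta_new)

On the very first pass the sync is a no-op because the old and new parameters start out identical, so the rotated order produces exactly the same iterates as running 1) -> 2) -> 3).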
