[TRPO] Is assign_old_eq_new() called at the right place?
Hi, I wonder whether it is correct to sync the old policy weights to the new ones before running conjugate gradient. I printed some intermediate results and found that the KL divergence is zero and the importance weights are all 1.
```python
assign_old_eq_new()  # set old parameter values to new parameter values
with timed("computegrad"):
    *lossbefore, g = compute_lossandgrad(*args)
lossbefore = allmean(np.array(lossbefore))
g = allmean(g)
if np.allclose(g, 0):
    logger.log("Got zero gradient. not updating")
else:
    with timed("cg"):
        stepdir = cg(fisher_vector_product, g, cg_iters=cg_iters, verbose=rank==0)
```
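For reference, `cg` only ever touches the curvature through the matrix-vector product it is handed (`fisher_vector_product` above); it never evaluates the KL value itself. A minimal sketch of such a routine, with a small symmetric positive-definite matrix standing in for the Fisher matrix (this mirrors the shape of the call in the excerpt but is an illustrative assumption, not the repository's `cg`):

```python
import numpy as np

def cg(f_Ax, b, cg_iters=10, residual_tol=1e-10):
    # Solve A x = b given only the product v -> A v, which is
    # exactly how TRPO uses the Fisher-vector product.
    x = np.zeros_like(b)
    r = b.copy()          # residual b - A x (x starts at zero)
    p = r.copy()
    rdotr = r.dot(r)
    for _ in range(cg_iters):
        z = f_Ax(p)
        alpha = rdotr / p.dot(z)
        x += alpha * p
        r -= alpha * z
        new_rdotr = r.dot(r)
        if new_rdotr < residual_tol:
            break
        p = r + (new_rdotr / rdotr) * p
        rdotr = new_rdotr
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # stand-in for the Fisher matrix
b = np.array([1.0, 2.0])                # stand-in for the policy gradient g
x = cg(lambda v: A @ v, b)
print(np.allclose(A @ x, b))  # True
```

Because the solver sees only `f_Ax`, the question reduces to whether the Fisher-vector product is meaningful when old and new parameters coincide.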
```
********** Iteration 3 ************
sampling
done in 2.363 seconds
computegrad
2018-05-29 07:00:28.264123: I tensorflow/core/kernels/logging_ops.cc:79] pi.pd[[-0.0392541401 -0.0208184626 0.00705067441]…]
2018-05-29 07:00:28.264390: I tensorflow/core/kernels/logging_ops.cc:79] oldpi.pd[[-0.0392541401 -0.0208184626 0.00705067441]…]
2018-05-29 07:00:28.274326: I tensorflow/core/kernels/logging_ops.cc:79] ratio[1 1 1…]
2018-05-29 07:00:28.275686: I tensorflow/core/kernels/logging_ops.cc:79] --kl--[0 0 0…]
done in 0.038 seconds
```
If this is correct, please enlighten me, thank you!
Issue Analytics
- Created: 5 years ago
- Comments: 5
Top GitHub Comments
@pzhokhov thanks for the explanation. My thinking was: if you update old parameters <- new parameters before running conjugate gradient, then the KL divergence used by the Fisher-vector product inside conjugate gradient is always zero (and the importance ratio always 1), so the KL seems to have no effect on the conjugate gradient. Nevertheless, perhaps I have some misunderstanding of the conjugate gradient or the code; I will look into it further.
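The subtlety here is that the Fisher-vector product does not use the KL *value*; it uses the KL's second-order curvature at the current parameters, which is nonzero even when old == new and the KL itself is identically zero. This can be checked numerically with a 1-D Gaussian (an illustrative sketch, not repository code; `kl_gauss` and the reference values `(0, 1)` are assumptions):

```python
import numpy as np

def kl_gauss(mu, sigma, mu_old=0.0, sigma_old=1.0):
    # KL( N(mu_old, sigma_old^2) || N(mu, sigma^2) )
    return (np.log(sigma / sigma_old)
            + (sigma_old**2 + (mu_old - mu)**2) / (2 * sigma**2)
            - 0.5)

# At mu == mu_old, sigma == sigma_old the KL value is exactly zero...
print(kl_gauss(0.0, 1.0))  # 0.0

# ...but its curvature in mu (the Fisher information, 1 / sigma^2 = 1 here)
# is not, as a finite-difference second derivative shows:
h = 1e-4
curv = (kl_gauss(h, 1.0) - 2 * kl_gauss(0.0, 1.0) + kl_gauss(-h, 1.0)) / h**2
print(curv)  # close to 1.0
```

So a zero KL (and ratios of 1) at the sync point is expected and does not make the Fisher-vector product degenerate.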
Referring to @joschu for an expert explanation, but from my understanding the placement is correct, based on the following.
Note that because these steps 1)-3) happen in a loop, we are justified in doing a circular shift and moving 3) before 1), which is what is done in the code.
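The circular-shift argument can be made concrete with a toy trace of the loop (all names here are hypothetical stubs standing in for the real routines, not baselines code):

```python
# Record the order in which the three steps actually run.
events = []

def assign_old_eq_new(): events.append("sync old<-new")
def compute_grad_and_cg(): events.append("grad+cg")
def apply_update(): events.append("update")

# Order in the code: sync first, then gradient/conjugate gradient, then update.
for _ in range(2):
    assign_old_eq_new()      # step 3), shifted to the front of the loop
    compute_grad_and_cg()    # step 1)
    apply_update()           # step 2)

print(events)
# Reading the trace across iterations, the relative sequence
# grad+cg -> update -> sync is preserved; the shift only moves
# where the loop boundary falls, not the order of the steps.
```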
Closing this for now; please reopen if you feel further explanation is needed.