[question] SAC target Q-networks may be updated many times in each round?
This line of code: https://github.com/hill-a/stable-baselines/blob/d288b6d911094268b17abb61dbe9ac3ca05e1ce3/stable_baselines/sac/sac.py#L464

Since `step` iterates over `range(total_timesteps)` and `grad_step` is nested inside `range(self.gradient_steps)`, doesn't this mean that if `self.gradient_steps` is not 1, the target networks are updated repeatedly within a single round?
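To make the loop structure concrete, here is a minimal sketch of the schedule being questioned (simplified and renamed for illustration; `count_target_updates` and its signature are not the actual stable-baselines API, only the `step`/`grad_step` nesting and the `target_update_interval` check mirror the linked code):

```python
def count_target_updates(total_timesteps, train_freq, gradient_steps,
                         target_update_interval=1):
    """Count how often the target networks would be updated."""
    target_updates = 0
    for step in range(total_timesteps):
        if step % train_freq == 0:
            for grad_step in range(gradient_steps):
                # The condition checks `step`, not `grad_step`, so once it
                # passes, EVERY grad_step in this round updates the targets.
                if step % target_update_interval == 0:
                    target_updates += 1
    return target_updates

# With gradient_steps=4 the target nets are updated 4 times per
# qualifying environment step:
print(count_target_updates(total_timesteps=10, train_freq=1,
                           gradient_steps=4))  # → 40
```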
Issue Analytics
- State:
- Created 3 years ago
- Comments: 5
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Ah I see, yeah. In my opinion the target update should be dependent on the gradient updates, so it's probably preferable to do what TD3 does.
Thanks @hartikainen for your answer.
Yes, I have also experienced that. My question was more about the target network update. In DQN, it is independent of the gradient update. In TD3, there is one soft update per gradient step. For SAC, it is the same as TD3 for now, but the current code in soft Q-learning seems to be closer to DQN. (Currently it does not change anything, though, because `target_update_interval=1`.)
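The two schedules being contrasted can be sketched as follows (a hedged illustration, not the actual stable-baselines code; `polyak_update` and the function signatures are made up for this example):

```python
def polyak_update(params, target_params, tau):
    """Soft update: target <- tau * online + (1 - tau) * target."""
    return [tau * p + (1.0 - tau) * tp
            for p, tp in zip(params, target_params)]

# TD3/SAC style: exactly one soft update per gradient step.
def td3_style(params, target_params, gradient_steps, tau=0.005):
    for _ in range(gradient_steps):
        # ... one gradient step on the online networks would go here ...
        target_params = polyak_update(params, target_params, tau)
    return target_params

# DQN style: a hard copy every `target_update_interval` gradient steps,
# counted independently of how the environment steps are batched.
def dqn_style(params, target_params, n_updates, target_update_interval):
    if n_updates % target_update_interval == 0:
        target_params = list(params)  # hard copy of the online weights
    return target_params
```

The distinction in the comment above is that the TD3 schedule ties target updates to gradient steps, while the DQN schedule ties them to an update counter and an interval; with `target_update_interval=1` the two coincide.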