Weirdly bad results with the example scripts
Hi, I’ve been running some of the experiment scripts provided with the package and I am getting weirdly bad results: mostly there is just no visible improvement at all. For example, when running:
python experiments/dqn_2way-single-intersection.py
and plotting results using
python outputs/plot.py -f outputs/2way-single-intersection/dqn
I get the following results:
Are you getting the same results? Is this the expected behaviour for this experiment? Are the hyperparameters in the script simply bad, is the scenario just challenging, or is something in my setup not working correctly?
As a suggestion: maybe we should run all the included experiment scripts and store the results somewhere for reference. That would let users verify that everything works as intended before they start running their own experiments.
Issue Analytics
- Created: a year ago
- Comments: 7 (5 by maintainers)

Yes, they are generated during training. Results generally do improve, but it depends on how fast the agent learns a good policy. There is also still stochastic behavior, due to the vehicles’ behaviors and the flow defined in the route file.
The environment outputs a separate file for every run, so you can plot a given run (e.g. run 1) with:
python outputs/plot.py -f outputs/2way-single-intersection/a3c_conn0_run1.csv

By “episode” I mean a run on the simulator until the simulation ends.
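Since each run is written to its own CSV, you can also aggregate and smooth the runs yourself instead of using `outputs/plot.py`. A minimal sketch, assuming each CSV has a metric column such as `total_wait` (the actual column names depend on the environment version, so check the header of your own output files):

```python
import glob
import pandas as pd

def load_runs(pattern, column="total_wait", window=50):
    """Load every per-run CSV matching `pattern` and return a list of
    rolling-mean-smoothed series for `column`.

    `column` is a guess at the metric name; inspect your CSV headers.
    The rolling mean helps visualize the trend despite the stochastic
    vehicle behavior across runs.
    """
    series = []
    for path in sorted(glob.glob(pattern)):
        df = pd.read_csv(path)
        series.append(df[column].rolling(window, min_periods=1).mean())
    return series
```

You could then plot all the returned series on one figure to see whether the runs trend in the same direction, which also makes it easier to compare against any reference results.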