Slow sampling from expert dataset in IRL training loop
Hi.
After loading pre-recorded expert data of size 1e6, I have realized that np.random.choice
with replace=False
is extremely slow, to the point of being unusable (batch size 100).
I am wondering if it can be replaced with something faster.
Thanks for the great project.
# Do not allow duplication!!!
indices = np.random.choice(
    self._random_range, self._irl.batch_size, replace=False)
self._irl.train(
P.S. I am using the dataset from Berkeley’s D4RL project.
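For anyone hitting the same slowdown: one workaround is to switch from the legacy np.random.choice to the newer Generator API (available since NumPy 1.17), which picks a cheaper algorithm when the requested sample is much smaller than the population. A minimal sketch, using the dataset size and batch size from this issue (the variable names are mine, not from the codebase):

```python
import numpy as np

N = 1_000_000   # size of the expert dataset (as in the issue)
BATCH = 100     # batch size

# Legacy API: for replace=False this internally permutes all N indices,
# which is what makes per-step sampling so slow.
legacy = np.random.choice(N, BATCH, replace=False)

# Recommended API: a Generator object. Generator.choice switches to a
# cheaper sampling strategy when BATCH << N, so replace=False stays fast.
rng = np.random.default_rng(0)
fast = rng.choice(N, BATCH, replace=False)

assert len(set(fast.tolist())) == BATCH  # still no duplicates
```

Creating the Generator once and reusing it across training steps avoids repeated seeding overhead.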
Issue Analytics
- State:
- Created 2 years ago
- Comments:6 (3 by maintainers)
Top GitHub Comments
I did additional investigation (after the PR was merged).
Unlike the legacy free function (np.random.choice), the recommended generator-object method (np.random.Generator.choice) uses a heuristic algorithm: https://github.com/numpy/numpy/blob/410a89ef04a2d3c50dd2dba2ad403c872c3745ac/numpy/random/_generator.pyx#L795-L837
In my opinion, we don't need to open a new issue in the NumPy repository.
Determining the fastest method for us would need additional study. (However, I think the study is low priority as long as the current implementation is sufficient.)
If anyone has a problem with the current implementation, please feel free to tell us.
Ref: Non-repetitive random number in numpy | Stack Overflow
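The Stack Overflow thread referenced above revolves around the same idea: sampling k distinct integers without permuting the whole range. Floyd's algorithm does this in O(k). A pure-Python illustration (the helper name floyd_sample is mine, not from any codebase):

```python
import random

def floyd_sample(n, k, rng=random):
    """Return k distinct integers from range(n) using Floyd's algorithm.

    Runs in O(k) time and memory, never materializing or permuting
    the full range of n candidates.
    """
    chosen = set()
    for j in range(n - k, n):
        t = rng.randrange(j + 1)
        # If t was already picked, j itself cannot have been (it only
        # becomes eligible this iteration), so take j instead.
        chosen.add(j if t in chosen else t)
    return chosen

sample = floyd_sample(1_000_000, 100)
assert len(sample) == 100
```

This is essentially the kind of strategy Generator.choice falls back to when the batch is much smaller than the population, which is why it avoids the legacy slowdown.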
I found this via @ymd-h's blog article. Do you know if there's an official NumPy resource about this? I'd open an issue or PR if not.