
Slow sampling from expert dataset in IRL training loop


Hi. After adding pre-recorded expert data of size 1e6, I realized that np.random.choice with replace=False is extremely slow, to the point of being unusable (batch size 100).

I am wondering if it can be replaced with something faster.

Thanks for the great project.

  # Do not allow duplicate indices!
  indices = np.random.choice(
      self._random_range, self._irl.batch_size, replace=False)
  self._irl.train(

P.S. I am using the dataset from Berkeley’s D4RL project.

[Two screenshots from 2021-06-02 attached in the original issue]

Issue Analytics

  • State: closed
  • Created 2 years ago
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

ymd-h commented, Sep 4, 2021 (1 reaction)

I did additional investigation (after the PR was merged).

Unlike the legacy free function (np.random.choice), the recommended generator object method (np.random.Generator.choice) uses a heuristic to choose its sampling algorithm.

https://github.com/numpy/numpy/blob/410a89ef04a2d3c50dd2dba2ad403c872c3745ac/numpy/random/_generator.pyx#L795-L837
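As a minimal sketch of the difference (assuming a population of 1e6 and a batch size of 100, as in the original report; the variable names here are illustrative, not from the project):

```python
import numpy as np

population = 1_000_000  # size of the expert dataset
batch_size = 100

# Legacy free function: with replace=False it permutes the whole
# population internally, which is O(population) work per call.
legacy_indices = np.random.choice(population, batch_size, replace=False)

# Recommended Generator method: chooses its algorithm heuristically
# based on the batch_size / population ratio, so drawing a small
# batch from a large population is much faster.
rng = np.random.default_rng()
gen_indices = rng.choice(population, batch_size, replace=False)

# Both return unique indices in [0, population).
assert len(set(gen_indices)) == batch_size
```

Both calls are drop-in compatible for this use case, so switching from `np.random.choice` to `rng.choice` does not change the sampled distribution, only the speed.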

In my opinion, we don’t need to open a new issue in the NumPy repository.

To determine the fastest method for our case, we would need additional study. (However, I think that study is low priority as long as the current implementation is sufficient.)

If anyone has problems with the current implementation, please feel free to tell us.
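A rough way to run that study is a micro-benchmark comparing the two calls (a sketch only, assuming the 1e6-population / 100-batch sizes from the issue; timings will vary by machine and NumPy version):

```python
import timeit
import numpy as np

population, batch_size = 1_000_000, 100
rng = np.random.default_rng()

# Time 10 draws with each API; replace=False is the slow path at issue.
t_legacy = timeit.timeit(
    lambda: np.random.choice(population, batch_size, replace=False),
    number=10)
t_generator = timeit.timeit(
    lambda: rng.choice(population, batch_size, replace=False),
    number=10)

print(f"legacy np.random.choice:      {t_legacy:.4f}s")
print(f"np.random.Generator.choice:   {t_generator:.4f}s")
```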

Ref: Non-repetitive random number in numpy | Stack Overflow

majiang commented, Aug 30, 2021 (0 reactions)

I found this via @ymd-h’s blog article. Do you know if there’s an official NumPy resource about this? I’d open an issue or PR if not.
