
How to create two identical benchmark instances?

See original GitHub issue

It seems like the MT10/ML10 benchmarks (and possibly others) initialize their goal positions internally every time a benchmark is constructed. It would be nice if there were a way to create two identical benchmark instances, for example by passing a seed to the benchmark as an argument.

Currently, I get:

>>> import pickle
>>> import metaworld
>>> mt10_1, mt10_2 = metaworld.MT10(), metaworld.MT10()
>>> pickle.loads(mt10_1.train_tasks[0].data)['rand_vec']
array([-0.05115513,  0.63916668,  0.02      ,  0.07276333,  0.86412829,
        0.12647354])
>>> pickle.loads(mt10_2.train_tasks[0].data)['rand_vec']
array([ 0.09087166,  0.69323066,  0.02      , -0.0617444 ,  0.81741752,
        0.14243739])

It would be useful if something like the following were possible:

import pickle

import numpy as np
import metaworld

# Proposed API: pass an explicit seed so both benchmarks sample identical tasks.
seed = 10
mt10_1, mt10_2 = metaworld.MT10(seed=seed), metaworld.MT10(seed=seed)

# Both instances should then hold the same goal positions ('rand_vec').
rand_vec_1 = pickle.loads(mt10_1.train_tasks[0].data)['rand_vec']
rand_vec_2 = pickle.loads(mt10_2.train_tasks[0].data)['rand_vec']
np.testing.assert_equal(rand_vec_1, rand_vec_2)

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 2
  • Comments: 5 (5 by maintainers)

Top GitHub Comments

1 reaction
ryanjulian commented, Sep 10, 2020

@hartikainen does pickling/unpickling and/or copy.deepcopy achieve this right now? It won't get you cross-experiment uniformity, but it is a step.

@avnishn is this FR tracked in an issue?
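
A rough sketch of the workaround Ryan suggests above (untested, and assuming MT10's sampled task list survives copy.deepcopy cleanly): duplicate one benchmark instead of constructing two, so both objects carry the exact same tasks within a single process.

import copy
import pickle

import numpy as np
import metaworld

# Construct the benchmark once, then deep-copy it; the copy keeps the
# already-sampled tasks (and their 'rand_vec' goals) instead of resampling.
mt10_1 = metaworld.MT10()
mt10_2 = copy.deepcopy(mt10_1)

rand_vec_1 = pickle.loads(mt10_1.train_tasks[0].data)['rand_vec']
rand_vec_2 = pickle.loads(mt10_2.train_tasks[0].data)['rand_vec']
np.testing.assert_equal(rand_vec_1, rand_vec_2)  # identical by construction

As the comment notes, this only gives identical instances inside one process; reproducing the same tasks across separate runs would still need something like the seed argument requested above.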

1 reaction
avnishn commented, Sep 10, 2020

Oh, sorry about that, I linked the wrong issue. I meant #147, my bad.

Read more comments on GitHub.

