How to create two identical benchmark instances?
See original GitHub issue

It seems like the MT10/ML10 benchmarks (and possibly others) initialize their goal positions internally every time a benchmark is instantiated. It would be nice if there were a way to create two identical benchmark instances, for example by passing a seed to the environment as an argument.
Currently, I get:
>>> import pickle
>>> import metaworld
>>> mt10_1, mt10_2 = metaworld.MT10(), metaworld.MT10()
>>> pickle.loads(mt10_1.train_tasks[0].data)['rand_vec']
array([-0.05115513,  0.63916668,  0.02      ,  0.07276333,  0.86412829,
        0.12647354])
>>> pickle.loads(mt10_2.train_tasks[0].data)['rand_vec']
array([ 0.09087166,  0.69323066,  0.02      , -0.0617444 ,  0.81741752,
        0.14243739])
It would be useful if something like the following were possible:
import pickle
import numpy as np
import metaworld
seed = 10
mt10_1, mt10_2 = metaworld.MT10(seed=seed), metaworld.MT10(seed=seed)
rand_vec_1 = pickle.loads(mt10_1.train_tasks[0].data)['rand_vec']
rand_vec_2 = pickle.loads(mt10_2.train_tasks[0].data)['rand_vec']
np.testing.assert_equal(rand_vec_1, rand_vec_2)
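In the meantime, a possible workaround (a sketch only, assuming the benchmark samples its goal positions from NumPy's global RNG, which is not a documented Meta-World guarantee) is to reset np.random to the same seed immediately before constructing each instance:

import pickle

import numpy as np
import metaworld

seed = 10

# Reseed the global NumPy RNG before each construction so that both
# instances draw the same goal positions (assumes MT10 uses np.random).
np.random.seed(seed)
mt10_1 = metaworld.MT10()
np.random.seed(seed)
mt10_2 = metaworld.MT10()

rand_vec_1 = pickle.loads(mt10_1.train_tasks[0].data)['rand_vec']
rand_vec_2 = pickle.loads(mt10_2.train_tasks[0].data)['rand_vec']
np.testing.assert_equal(rand_vec_1, rand_vec_2)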
Issue Analytics
- Created 3 years ago
- Reactions: 2
- Comments: 5 (5 by maintainers)
Top GitHub Comments
@hartikainen does pickling/unpickling and/or copy.deepcopy achieve this right now? It won’t get you cross-experiment uniformity, but it is a step.
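For illustration, a minimal sketch of that copy-based workaround, duplicating a single constructed benchmark so both objects carry the same sampled tasks (assuming the benchmark object is deep-copyable):

import copy
import pickle

import numpy as np
import metaworld

mt10_1 = metaworld.MT10()
# Copy the already-constructed benchmark instead of building a second one,
# so both objects reference identical task data.
mt10_2 = copy.deepcopy(mt10_1)

rand_vec_1 = pickle.loads(mt10_1.train_tasks[0].data)['rand_vec']
rand_vec_2 = pickle.loads(mt10_2.train_tasks[0].data)['rand_vec']
np.testing.assert_equal(rand_vec_1, rand_vec_2)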
@avnishn is this FR tracked in an issue?
Oh, sorry about that, I linked the wrong issue. I meant #147, my bad.