Unable to test the learned model because custom_goal_sampler is not loaded
Hi, when I run run_goal_conditioned_policy.py, an error occurs saying custom_goal_sampler is None. I checked that VAEWrappedEnv.__getstate__ ignores its _custom_goal_sampler, so the sampler is missing when the object is loaded from the .pkl file.
Is there any way to make goal sampling work while testing the trained model?
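To illustrate the behavior being described: a toy sketch (not the actual rlkit class) of a wrapper whose `__getstate__` deliberately drops a callable attribute, mirroring how `VAEWrappedEnv.__getstate__` skips `_custom_goal_sampler`, and a possible workaround of reattaching the sampler after unpickling. The class and sampler here are hypothetical stand-ins.

```python
import pickle

class EnvWrapper:
    """Toy stand-in for VAEWrappedEnv's pickling behavior."""
    def __init__(self, goal_sampler=None):
        self._custom_goal_sampler = goal_sampler

    def __getstate__(self):
        state = self.__dict__.copy()
        # Callables (e.g. closures over a replay buffer) often cannot
        # be pickled, so the sampler is excluded from the saved state.
        state['_custom_goal_sampler'] = None
        return state

def oracle_sampler(batch_size):
    # Hypothetical oracle sampler: returns a fixed goal per sample.
    return {'desired_goal': [[0.0, 0.0]] * batch_size}

env = EnvWrapper(goal_sampler=oracle_sampler)
restored = pickle.loads(pickle.dumps(env))
assert restored._custom_goal_sampler is None  # dropped during pickling

# Workaround: reattach a sampler after loading the .pkl.
restored._custom_goal_sampler = oracle_sampler
print(restored._custom_goal_sampler(2))
```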
Issue Analytics
- State: open
- Created: 4 years ago
- Comments: 5
Good question! Yeah, this part of the code is a bit “untyped”/undocumented. Based on VAEWrappedEnv.sample_goals, the custom sampler should return a dict_of_goals mapping each goal key to a batch of sampled goals. Yes, so in that experiment, during autonomous exploration we sample goals from the replay buffer, since the policy doesn’t have access to any oracle sampler. However, for testing you might want some “oracle goal sampler”, and you can implement that using the custom goal sampler.
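A minimal sketch of what such an oracle sampler might look like, assuming the signature takes a batch size and returns a dict of goal arrays; the key names (`desired_goal`, `latent_desired_goal`) and the 2-D goal space are assumptions for illustration, not confirmed rlkit names:

```python
import numpy as np

def oracle_goal_sampler(batch_size):
    # Hypothetical oracle: always propose the same fixed target,
    # rather than sampling goals from the replay buffer.
    goals = np.tile(np.array([0.5, 0.5]), (batch_size, 1))
    return {
        'desired_goal': goals,
        'latent_desired_goal': goals,  # placeholder latent goals
    }

# After loading the environment from the .pkl, the sampler could be
# reattached manually, e.g.:
# env._custom_goal_sampler = oracle_goal_sampler
```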