[question] [feature request] support for Dict and Tuple spaces
I want to train using two images from different cameras and a 1D array of data from a sensor. I'm passing these inputs as my env state. Obviously I need a CNN that can take those inputs, concatenate them, and train on them. My question is how to pass these inputs to such a custom CNN in policies.py. Also, I tried to pass two images, and apparently dummy_vec_env.py had trouble with that:
```
obs = env.reset()
  File "d:\resources\stable-baselines\stable_baselines\common\vec_env\dummy_vec_env.py", line 57, in reset
    self._save_obs(env_idx, obs)
  File "d:\resources\stable-baselines\stable_baselines\common\vec_env\dummy_vec_env.py", line 75, in _save_obs
    self.buf_obs[key][env_idx] = obs
ValueError: cannot copy sequence with size 2 to array axis with dimension 80
```
I appreciate any thoughts or examples.
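Since Dict and Tuple observation spaces are not supported here, one common workaround is to flatten every observation component into a single vector, expose that as one Box space, and split it back apart inside the policy. The helper names below (`flatten_obs`, `split_obs`) are hypothetical, not part of stable-baselines; this is a minimal NumPy sketch of the idea, assuming all parts can be cast to float32:

```python
import numpy as np

def flatten_obs(obs_parts):
    """Concatenate a list of arrays (e.g. two camera images and a
    sensor vector) into one flat float32 vector for a single Box space."""
    return np.concatenate(
        [np.asarray(p, dtype=np.float32).ravel() for p in obs_parts]
    )

def split_obs(flat, shapes):
    """Recover the original parts from the flat vector, given their shapes."""
    parts, i = [], 0
    for shape in shapes:
        n = int(np.prod(shape))
        parts.append(flat[i:i + n].reshape(shape))
        i += n
    return parts
```

An environment wrapper would call `flatten_obs` in `reset`/`step`, and a custom feature extractor would call `split_obs` before feeding each image into its own convolutional branch.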
Issue Analytics
- State:
- Created: 5 years ago
- Reactions: 9
- Comments: 37 (2 by maintainers)
Top GitHub Comments
Why aren't they supported? I would also like to pass an image + scalars as input to the policy; at the current stage this is not possible. I don't know whether it's more convenient to write code for this, or to just append a vector of scalars at the end of the image and separate it later.
@radusl
You can append the "direct features" (non-image features) on e.g. the last channel of the image, and pad them with zeros to match the other dimensions. Then you can use a `cnn_extractor` like the one returned by this function to process the actual image with convolutions and then append the direct features:
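The packing side of that trick can be sketched with plain NumPy. The function names (`pack_obs`, `unpack_obs`) are hypothetical and only illustrate the idea: the direct features are written into a zero-padded extra channel so the whole observation fits one image-shaped Box, and the extractor peels that channel back off before the convolutions:

```python
import numpy as np

def pack_obs(image, direct_features):
    """Pack 1D direct features into an extra zero-padded channel.

    image: (H, W, C) array; direct_features: 1D array with size <= H * W.
    Returns an (H, W, C + 1) array usable as a single Box observation.
    """
    h, w, _ = image.shape
    extra = np.zeros(h * w, dtype=image.dtype)
    extra[:direct_features.size] = direct_features
    return np.concatenate([image, extra.reshape(h, w, 1)], axis=-1)

def unpack_obs(obs, n_direct):
    """Split a packed observation back into (image, direct_features)."""
    image = obs[..., :-1]
    direct = obs[..., -1].reshape(-1)[:n_direct]
    return image, direct
```

Inside a custom `cnn_extractor`, the same slicing would be done with TensorFlow ops on the observation tensor: run the first `C` channels through the conv layers, flatten, and concatenate the recovered direct features before the final fully connected layers.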