[Feature discussion] Improving model save format
Continuation of the discussion in #312 related to improving the save file format.
Issues with the current way of storing/loading models
- Cloudpickle can have issues with different versions of Python (mentioned here).
- Cloudpickle itself mentions that it shouldn’t be used for long-term storage (something RL people might want).
- Since everything is stored in one serialized file, a single corrupted part prevents easily loading the rest. Example: my saved model had TensorFlow code that did not work on a different Python version, resulting in an error upon loading. I only needed the model parameters, but I could not load them because one of the class items broke deserialization.
- A minor point, but cloudpickle is Python-specific, so accessing anything inside these files requires going through Python + cloudpickle.
Suggestion
Use Python's native `zipfile` module to manually construct a zipped archive. Each entry in the zip file would be a separately serialized object, e.g. A2C would have one entry for "gamma", another for "n_steps" and so on. Finally, the zip file would include one more entry for the model parameters, serialized with numpy's `savez` or just with pickle as is done now.
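A minimal sketch of this layout, assuming placeholder entry names like `data.json` and `parameters.npz` (and a hypothetical `save_model` helper, not an actual API):

```python
import io
import json
import zipfile

import numpy as np


def save_model(path, class_params, model_params):
    """Write class parameters and model parameters as separate entries
    of a single zip archive."""
    with zipfile.ZipFile(path, "w") as archive:
        # Plain class parameters such as gamma or n_steps become one JSON entry.
        archive.writestr("data.json", json.dumps(class_params, indent=4))
        # Model parameters go into their own entry via numpy's savez.
        buffer = io.BytesIO()
        np.savez(buffer, **model_params)
        archive.writestr("parameters.npz", buffer.getvalue())


save_model(
    "a2c_model.zip",
    class_params={"gamma": 0.99, "n_steps": 5},
    model_params={"pi_w": np.zeros((4, 2)), "pi_b": np.zeros(2)},
)
```

A nice side effect is that the file stays a standard zip, so individual entries can be inspected with any archive tool.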
Since many learning algorithms store policies, obs/action spaces, functions and other custom Python objects, it might make sense to store these using pickle. However, if these become non-deserializable (e.g. on a new Python version), we can still load the other items. This works especially well with `load_parameters`, which could directly access the parameters in the zip file rather than trying to deserialize the whole file.
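For instance, a `load_parameters` along these lines would read only the parameter entry, so a broken pickle elsewhere in the archive could not get in the way (entry name carried over from the sketch above):

```python
import io
import zipfile

import numpy as np


def load_parameters(path):
    # Read only the numpy entry; pickled entries elsewhere in the
    # archive are never touched, so they cannot break this call.
    with zipfile.ZipFile(path, "r") as archive:
        data = archive.read("parameters.npz")
    with np.load(io.BytesIO(data)) as params:
        return {name: params[name] for name in params.files}
```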
An alternative would be to use JSON to serialize class parameters, but this may not play well with custom functions and the like. JSON would offer very human-readable and long-lasting files, though. I also took a look at jsonpickle, but I am not sure relying on yet another library is a better idea than sticking with pickle for serializing things.
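To illustrate the function problem, plain `json` happily handles the simple class parameters but rejects function objects outright:

```python
import json

# Simple class parameters serialize fine...
print(json.dumps({"gamma": 0.99, "n_steps": 5}))

# ...but functions, e.g. a learning-rate schedule, do not:
try:
    json.dumps({"lr_schedule": lambda step: 2.5e-4})
except TypeError as exc:
    print(exc)  # "Object of type function is not JSON serializable"
```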
Top GitHub Comments
@AdamGleave
I agree on both points. Hence I would just cloudpickle/pickle the non-JSON-able objects and leave it at that.
One open point is the storing of model parameters: should this be a Numpy `savez` object or something more universal? I figure the `savez` (and `save`) format is long-lasting and easy enough for people to use the values somewhere else.

I did quick experiments with storing objects with this new format, and here is the JSON (class parameters) part of the saved model. Objects that are serialized with cloudpickle (`:serialized:`) also include the first-level members of the object, so a human reader can get some idea of what the serialization contains. These are not used in any way when reading the file (only `:serialized:` is read and deserialized).

Example JSON file of class parameters
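Since that file is not reproduced here, a hypothetical sketch of what one such entry could look like (all keys and values besides `:serialized:` are invented for illustration):

```python
# One non-JSON-able item in the class-parameter JSON: ":serialized:"
# holds the encoded bytes, the remaining keys are human-readable
# first-level members that are ignored when loading.
entry = {
    "policy": {
        ":serialized:": "gASVKwAAAAAAAACM...",  # truncated byte-string
        "recurrent": "False",
        "layers": "[64, 64]",
    }
}
```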
If we encode the bytes of non-JSON-able objects into a string and save it in the JSON, then other languages can still read the JSON file just fine, but they cannot really do much with these byte-strings (they do not have any meaning outside Python/Cloudpickle/Pickle).
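A minimal sketch of that encoding step, assuming base64 for the byte-to-string conversion (the helper name and exact key layout are my own, not a settled format):

```python
import base64

import cloudpickle


def to_json_entry(obj):
    """Encode a non-JSON-able object as a JSON-friendly dict."""
    entry = {
        ":serialized:": base64.b64encode(cloudpickle.dumps(obj)).decode("ascii")
    }
    # First-level members are stored purely as a human-readable hint;
    # only ":serialized:" is ever read back.
    if hasattr(obj, "__dict__"):
        entry.update({name: repr(value) for name, value in vars(obj).items()})
    return entry
```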
Did a quick run of what can be JSON-able and what needs pickling. As expected, the spaces and the policy are the main issue, as they contain classes and functions. However, in my opinion, these can stay byte-serialized as they only have proper meaning in Python. These are also the parts that will most likely break when changing Python or library versions.
One thing we can do with this, much like Keras does: when loading a model, we can replace some of the items from the file with something else. E.g. `A2C.load(..., custom_objects={'policy': other_policy})` would skip reading `policy` from the file and use the provided `other_policy` instead. This is already in the `load` function, but I would move it a bit deeper so that it could prevent loading invalid objects (and thus crashing).
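A rough sketch of how `custom_objects` could short-circuit deserialization deeper in the loading code, reusing the hypothetical archive layout and base64 encoding from above:

```python
import base64
import json
import zipfile

import cloudpickle


def load_class_params(path, custom_objects=None):
    """Read the class-parameter JSON, letting custom_objects replace
    stored entries before any deserialization happens."""
    custom_objects = custom_objects or {}
    with zipfile.ZipFile(path, "r") as archive:
        data = json.loads(archive.read("data.json"))
    params = {}
    for name, value in data.items():
        if name in custom_objects:
            # The stored bytes are never deserialized for overridden
            # items, so a broken pickle cannot crash loading.
            params[name] = custom_objects[name]
        elif isinstance(value, dict) and ":serialized:" in value:
            params[name] = cloudpickle.loads(base64.b64decode(value[":serialized:"]))
        else:
            params[name] = value
    return params
```

With this, `load_class_params(path, custom_objects={'policy': other_policy})` returns the provided policy without ever touching the stored bytes.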