Can't save model when using Rich Progress Bar (multiprocessing)
Describe the bug
When using a RichProgressBar callback, I run into the following error when trying to call save_model():

TypeError: can't pickle _thread.RLock objects

Without the callback everything works as expected. Is there a workaround for now, e.g. via state_dicts?
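For reference, a minimal sketch of the kind of setup that triggers this for me (the model class, series, and hyperparameters below are placeholders, not my exact configuration):

```python
# Minimal sketch of the failing setup (placeholder model and data).
from darts.models import NBEATSModel
from darts.utils.timeseries_generation import sine_timeseries
from pytorch_lightning.callbacks import RichProgressBar

series = sine_timeseries(length=200)

model = NBEATSModel(
    input_chunk_length=24,
    output_chunk_length=12,
    n_epochs=1,
    pl_trainer_kwargs={"callbacks": [RichProgressBar()]},  # the callback in question
)
model.fit(series)
model.save_model("basemodel.pth.tar")  # -> TypeError: can't pickle _thread.RLock objects
```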
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
ipykernel_79835/4020755580.py in <module>
----> 1 model.save_model("basemodel.pth.tar")
site-packages/darts/models/forecasting/torch_forecasting_model.py in save_model(self, path)
1311 # We save the whole object to keep track of everything
1312 with open(path, "wb") as f_out:
-> 1313 torch.save(self, f_out)
1314
1315 # In addition, we need to use PTL save_checkpoint() to properly save the trainer and model
site-packages/torch/serialization.py in save(obj, f, pickle_module, pickle_protocol, _use_new_zipfile_serialization)
378 if _use_new_zipfile_serialization:
379 with _open_zipfile_writer(opened_file) as opened_zipfile:
--> 380 _save(obj, opened_zipfile, pickle_module, pickle_protocol)
381 return
382 _legacy_save(obj, opened_file, pickle_module, pickle_protocol)
site-packages/torch/serialization.py in _save(obj, zip_file, pickle_module, pickle_protocol)
587 pickler = pickle_module.Pickler(data_buf, protocol=pickle_protocol)
588 pickler.persistent_id = persistent_id
--> 589 pickler.dump(obj)
590 data_value = data_buf.getvalue()
591 zip_file.write_record('data.pkl', data_value, len(data_value))
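For what it's worth, the failure seems to come from the callback itself: RichProgressBar holds a rich Console, which (as far as I can tell) keeps a threading.RLock internally, and lock objects cannot be pickled at all:

```python
import pickle
import threading

# Lock objects are not picklable, so any object graph that reaches one
# (presumably the rich Console held by RichProgressBar) fails to
# serialize via torch.save / pickle.
try:
    pickle.dumps(threading.RLock())
except TypeError as exc:
    print(exc)  # e.g. "cannot pickle '_thread.RLock' object"
```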
@DeastinY That was the wrong method to suggest; it is what darts does under the hood anyway (I'm still new around here). I read through the code, and the recommended way to save is to use save_checkpoints=True; I put together a simple example. model.save_model() ultimately pickles the model, which can fail depending on the objects attached to the model. The built-in checkpointing only saves the most necessary things: hyperparameters, model weights, optimizer states, …

I just want to add that the recommended way to load the model from a checkpoint is load_from_checkpoint() (also documented here). The same also applies to load_model().
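Roughly, the checkpoint-based flow looks like this (a sketch rather than the exact example linked above; the model class, names, and hyperparameters are placeholders):

```python
# Sketch of checkpoint-based saving/loading instead of save_model()/load_model().
from darts.models import NBEATSModel
from darts.utils.timeseries_generation import sine_timeseries
from pytorch_lightning.callbacks import RichProgressBar

series = sine_timeseries(length=200)

model = NBEATSModel(
    input_chunk_length=24,
    output_chunk_length=12,
    n_epochs=1,
    model_name="my_model",      # checkpoints are written under work_dir using this name
    work_dir=".",
    save_checkpoints=True,      # enable darts' built-in checkpointing
    pl_trainer_kwargs={"callbacks": [RichProgressBar()]},
)
model.fit(series)

# Later, or in a fresh process: restore from the checkpoint instead of unpickling.
loaded = NBEATSModel.load_from_checkpoint(
    model_name="my_model",
    work_dir=".",
    best=False,                 # most recent checkpoint; best=True picks the best one
)
```

If I understand the code correctly, this path restores the weights and optimizer state from the Lightning checkpoint and so avoids pickling the live trainer (and its callbacks) altogether.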