"timeout or by a memory leak.", UserWarning
See original GitHub issue

System and versions
macOS 10.14.4, Python 3.6, joblib 0.12.5
Usage
I have 32 large batches, each containing an unequal number of tasks. I call joblib inside a loop over the batches, and the joblib call runs the tasks within one batch in parallel. The errors appear when I run the loop; however, when I run the batches one at a time, there is no such error.
pseudo code:
results_list = []
for batch in batch_list:
    results = Parallel(n_jobs=num_cores, verbose=1)(delayed(func)(x[i]) for i in batch)
    results_list.append(results)
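One pattern worth trying for this kind of batch loop (a sketch based on joblib's documented context-manager API, not something from the issue; `func` and `batch_list` below are hypothetical stand-ins for the reporter's own names) is reusing a single `Parallel` instance across batches, so the worker pool is not torn down and respawned after every call:

```python
import math
from joblib import Parallel, delayed

# Hypothetical stand-ins for the reporter's `func` and `batch_list`.
def func(x):
    return math.sqrt(x)

batch_list = [[1, 4, 9], [16, 25]]

results_list = []
# Using Parallel as a context manager keeps the same worker pool
# alive across iterations instead of recreating it per batch.
with Parallel(n_jobs=2, verbose=0) as parallel:
    for batch in batch_list:
        results = parallel(delayed(func)(x) for x in batch)
        results_list.append(results)

print(results_list)  # [[1.0, 2.0, 3.0], [4.0, 5.0]]
```

Reusing the pool avoids repeated worker startup and shutdown between batches, which is exactly the boundary at which loky's worker-stopped warning tends to fire.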
Errors analysis
/Users/x/miniconda3/lib/python3.6/site-packages/joblib/externals/loky/process_executor.py:700: UserWarning: A worker stopped while some jobs were given to the executor. This can be caused by a too short worker timeout or by a memory leak.
"timeout or by a memory leak.", UserWarning
- I have tried gc.collect() to release memory, but it does not help.
- I checked total memory usage: the process uses only about 10% of the memory I have.
- When I reduce the number of workers (from 20 to 10), more warnings appear.
- During the loop, the warning appears more often for batches that contain more tasks.
- Based on my earlier validation of some cases, even with this warning the results are identical to those generated without the warning (when I run the batches individually).
Issue Analytics
- State:
- Created 4 years ago
- Comments: 16 (4 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
This happens to me often when my workers launch memory-intensive tasks; the typical case is when each worker fits a TensorFlow/Keras neural network or a CatBoost model that hits the RAM hard. TensorFlow in particular seems reluctant to give memory back after it is done. This has never caused any issue for me apart from the warning message.
When I use ‘multiprocessing’ as the backend, this warning seems to go away.
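For reference, switching the backend is a one-line change (a minimal sketch; `square` is a placeholder task, and whether this suppresses the warning in any given workload is the commenter's observation, not a guarantee):

```python
from joblib import Parallel, delayed

def square(x):
    return x * x

# The "multiprocessing" backend uses multiprocessing.Pool instead of
# the default loky executor, which is where this warning originates.
results = Parallel(n_jobs=2, backend="multiprocessing")(
    delayed(square)(x) for x in range(5)
)
print(results)  # [0, 1, 4, 9, 16]
```

Note that the multiprocessing backend lacks some of loky's robustness features (e.g. detection of crashed workers), so this trades the warning for a different failure mode rather than fixing the underlying cause.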