
"timeout or by a memory leak.", UserWarning


System and versions

macOS 10.14.4, Python 3.6, joblib 0.12.5

Usage

I have 32 big batches, and each batch contains an unequal number of tasks. I use joblib inside a loop over the batches, and the parallel call runs all of the tasks within one batch at the same time. The errors appear when I run the loop; however, when I run the batches one by one, there is no such error.

pseudo code:

from joblib import Parallel, delayed

results_list = []
for batch in batch_list:
    # run all tasks of the current batch in parallel
    results = Parallel(n_jobs=num_cores, verbose=1)(delayed(func)(x[i]) for i in batch)
    results_list.append(results)
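
One variant worth trying (a sketch only, not something proposed in the issue) is to submit all batches through a single Parallel call so the worker pool is used continuously, then regroup the results per batch afterwards. It assumes the same func, x, batch_list and num_cores as the pseudo-code above.

from joblib import Parallel, delayed

# Hypothetical rearrangement: flatten every batch into one Parallel call,
# then regroup the flat results back into per-batch lists.
all_tasks = [(b, i) for b, batch in enumerate(batch_list) for i in batch]
flat_results = Parallel(n_jobs=num_cores, verbose=1)(
    delayed(func)(x[i]) for _, i in all_tasks
)

results_list = [[] for _ in batch_list]
for (b, _), res in zip(all_tasks, flat_results):
    results_list[b].append(res)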

Error analysis

/Users/x/miniconda3/lib/python3.6/site-packages/joblib/externals/loky/process_executor.py:700: UserWarning: A worker stopped while some jobs were given to the executor. This can be caused by a too short worker timeout or by a memory leak.
  "timeout or by a memory leak.", UserWarning
  • I have tried gc.collect() to release memory, but it does not help.
  • I checked the total memory usage, and it is only about 10% of what I have available.
  • When I reduce the number of workers (from 20 to 10), more warnings appear.
  • During the loop, the warning shows up more often for batches that contain more tasks.
  • Based on my previous validation of some cases, even with this warning the results look the same as the ones generated without it (when I run the batches individually); a sketch for silencing the warning follows this list.
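
If the results are indeed unaffected, the message can be filtered out with the standard warnings module. A minimal sketch, matching the start of the UserWarning text shown above:

import warnings

# Suppress only this specific loky/joblib warning; other UserWarnings still show.
warnings.filterwarnings(
    "ignore",
    message="A worker stopped while some jobs were given to the executor",
    category=UserWarning,
)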

Issue Analytics

  • State: open
  • Created: 4 years ago
  • Comments: 16 (4 by maintainers)

Top GitHub Comments

4 reactions
jlopezpena commented, Oct 4, 2019

This happens to me often when my workers need to run memory-intensive tasks; the typical case is when each worker needs to fit a TensorFlow/Keras neural network or a CatBoost model that hits the RAM hard. TensorFlow in particular seems to be a bit reluctant about giving memory back after it is done. This has never caused any issue for me apart from the warning message.
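
A minimal sketch of the pattern described above, assuming each joblib task fits a small Keras model (fit_one and configs are hypothetical names); calling tf.keras.backend.clear_session() after the fit is one common way to nudge TensorFlow into releasing memory between tasks, not something stated in the comment:

from joblib import Parallel, delayed
import tensorflow as tf

def fit_one(config):
    # Build and fit a small Keras model for a single task (hypothetical helper).
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(10,))])
    model.compile(optimizer="adam", loss="mse")
    model.fit(config["X"], config["y"], epochs=1, verbose=0)
    weights = model.get_weights()
    # Ask TensorFlow to drop graph/session state before the worker takes the next task.
    tf.keras.backend.clear_session()
    return weights

results = Parallel(n_jobs=4)(delayed(fit_one)(c) for c in configs)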

2 reactions
YubinXie commented, Jun 4, 2019

When I use ‘multiprocessing’ as the backend, this warning seems to go away.
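
A minimal sketch of switching the backend as described, reusing func, x, batch and num_cores from the pseudo-code above; backend="multiprocessing" is a valid Parallel argument, and the warning itself is emitted by the default loky backend (see the process_executor.py path in the message):

from joblib import Parallel, delayed

# Same call as before, but routed through the multiprocessing backend
# instead of the default loky backend that emits the warning.
results = Parallel(n_jobs=num_cores, verbose=1, backend="multiprocessing")(
    delayed(func)(x[i]) for i in batch
)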

