
Passing loguru function to multiprocessing

See original GitHub issue

When passing logger.info into a multiprocessing.Pool, as in the following (do_something is a minimal stand-in for the worker in the original test case):

    import multiprocessing
    from loguru import logger

    def do_something(my_id, log):  # stand-in for the real worker
        log("doing something with {}", my_id)

    with multiprocessing.Pool(1) as pool:
        for my_id in range(5):
            pool.apply_async(do_something, (my_id, logger.info))
        pool.close()
        pool.join()

the following exception occurs:

Process ForkPoolWorker-1:
Traceback (most recent call last):
  File "/home/enelson/.pyenv/versions/3.7.3/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/home/enelson/.pyenv/versions/3.7.3/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "/home/enelson/.pyenv/versions/3.7.3/lib/python3.7/multiprocessing/pool.py", line 110, in worker
    task = get()
  File "/home/enelson/.pyenv/versions/3.7.3/lib/python3.7/multiprocessing/queues.py", line 354, in get
    return _ForkingPickler.loads(res)
AttributeError: 'Logger' object has no attribute 'log_function'

A fully reproducible test case is linked in the original issue.

This is Python 3.7.3, installed via pyenv, on Ubuntu 18.04.
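
A common way to sidestep the pickling entirely, at least on fork-based platforms, is to import the logger inside the worker rather than passing the bound method through the pool. A minimal sketch (not from the original issue; do_something is illustrative):

    import multiprocessing
    from loguru import logger

    def do_something(my_id):
        # On fork, the child inherits the parent's modules,
        # so the logger never crosses a pickle boundary.
        logger.info("processing {}", my_id)

    if __name__ == "__main__":
        with multiprocessing.Pool(1) as pool:
            for my_id in range(5):
                pool.apply_async(do_something, (my_id,))
            pool.close()
            pool.join()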

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 8 (4 by maintainers)

Top GitHub Comments

2 reactions
Delgan commented, Jun 11, 2019

OK, I fixed this by refactoring the logger and removing the closure functions generated by _make_log_function().

The fix will be available in the next v0.3.0 release, thanks to both of you!
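
For context, here is a minimal sketch of why such closures broke pickling (illustrative names only, not loguru's actual code). Pickle reduces a bound method to its instance plus the underlying function's __name__, deferring a getattr call to load time, so a closure attached under a different attribute name cannot be found again:

    import pickle

    def _make_log_function(level):
        def log_function(self, message):
            print(level, message)
        return log_function

    class Logger:
        pass

    # Attached under the name "info", but __name__ stays "log_function".
    Logger.info = _make_log_function("INFO")

    data = pickle.dumps(Logger().info)  # succeeds: stores (instance, "log_function")
    pickle.loads(data)                  # AttributeError: 'Logger' object
                                        # has no attribute 'log_function'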

2 reactions
AnesBenmerzoug commented, May 31, 2019

I managed to reduce the test case to just this:

    import pickle
    from loguru import logger

    # dumps succeeds; loads raises the AttributeError
    pickle.loads(pickle.dumps(logger.info))

or this (to mimic more closely what is happening inside apply_async):

    from multiprocessing import SimpleQueue
    from loguru import logger

    q = SimpleQueue()
    q.put(logger.info)  # pickled by a ForkingPickler on put
    q.get()             # unpickled here, raising the AttributeError

It seems apply_async pickles the given arguments with a ForkingPickler and puts them on a multiprocessing.SimpleQueue; the worker then unpickles them, which is where the error surfaces.
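
That timing also explains why dumps succeeds while loads fails: a bound method reduces to a deferred getattr call that only runs on unpickling. A quick way to see the reduction (my own note, not from the thread):

    class A:
        def f(self):
            pass

    print(A().f.__reduce__())
    # (<built-in function getattr>, (<__main__.A object at 0x...>, 'f'))
    # Pickling stores the instance plus the function's __name__;
    # unpickling replays getattr(instance, name), which is what failed
    # for loguru's closure-generated methods.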


Top Results From Across the Web

Code snippets and recipes for loguru
Loguru creates files using the built-in open() function... All additional **kwargs arguments are passed to the built-in open() function...

How should I log while using multiprocessing in Python?
I just now wrote a log handler of my own that just feeds everything to the parent process via a pipe. I've only...

loguru Documentation - Read the Docs
Logger – A logger wrapping the core logger, whose records are passed through the patcher function before being sent to the added...

Python — Loguru, A Powerful Logging Module - Medium
It is plug-and-play and has functions such as rolling logs in multiple ways, automatically compressing log files, and regularly deleting them.

Multiprocessing Logging in Python
Logging provides a set of convenience functions for simple logging usage... logging.info() and passing in the string details of the event.
