Watchtower + Multiprocessing
The logger does not successfully write to CloudWatch when using multiprocessing. To check whether this was my configuration, I dropped the watchtower handler and used a file handler instead; that logged perfectly. When I switched back to the watchtower handler, however, only the messages before and after outputs = pool.map(worker, inputs) were logged.
Any idea how to fix this? Setting use_queues to True didn't help.
Sample code:

import logging
from multiprocessing import Pool

import watchtower

def worker(var):
    logger.debug("Incoming variable: %s" % var)
    logger.debug("Outgoing variable: %s" % (var + 1))
    return var + 1

def main():
    inputs = []
    for i in range(1000):
        inputs.append(i)
    logger.debug("Starting run now!")
    pool = Pool(processes=3)
    outputs = pool.map(worker, inputs)
    pool.close()
    pool.join()
    logger.debug("Just finished run")

if __name__ == "__main__":
    logger = logging.getLogger("multi")
    logger.setLevel(logging.DEBUG)
    fh = logging.FileHandler("test.log")
    fh.setLevel(logging.DEBUG)
    logger.addHandler(fh)
    wt_project_handler = watchtower.CloudWatchLogHandler(stream_name="test",
                                                         use_queues=True)
    wt_project_handler.setLevel(logging.DEBUG)
    logger.addHandler(wt_project_handler)
    main()
Issue Analytics
- State:
- Created 7 years ago
- Comments: 10 (3 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Solved this issue by using this repo: https://github.com/jruere/multiprocessing-logging, which was spun out of this post: http://stackoverflow.com/questions/641420/how-should-i-log-while-using-multiprocessing-in-python. All it required was importing multiprocessing_logging and then calling
multiprocessing_logging.install_mp_handler(logger)
after the handlers were attached.
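multiprocessing-logging works by shipping log records from the worker processes over a queue to a single handler owned by the parent. The same pattern can be sketched with nothing but the standard library's QueueHandler/QueueListener; this is an illustrative stand-in, not the library's actual code, and the FileHandler here takes the place of the watchtower.CloudWatchLogHandler:

```python
import logging
import logging.handlers
import multiprocessing

def init_worker(queue):
    # Runs once in each worker process: route all records for the
    # "multi" logger into the shared queue instead of a real handler.
    logger = logging.getLogger("multi")
    logger.setLevel(logging.DEBUG)
    logger.propagate = False
    logger.addHandler(logging.handlers.QueueHandler(queue))

def worker(var):
    logging.getLogger("multi").debug("Incoming variable: %s", var)
    return var + 1

def main():
    queue = multiprocessing.Queue()
    # A FileHandler stands in for watchtower.CloudWatchLogHandler here;
    # the listener drains the queue in the parent and hands records to it.
    target = logging.FileHandler("test.log")
    listener = logging.handlers.QueueListener(queue, target)
    listener.start()
    with multiprocessing.Pool(processes=3, initializer=init_worker,
                              initargs=(queue,)) as pool:
        outputs = pool.map(worker, range(10))
    listener.stop()  # blocks until the remaining records are handled
    target.close()
    return outputs

if __name__ == "__main__":
    print(main())
```

Because only the parent process ever touches the real handler, the stateful parts of the watchtower handler (its boto3 session and internal queues) are never copied into the workers.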
The suggested way to use logging with multiprocessing pools is to share nothing: use one logger per worker process (or thread) and initialize it after forking. A shared logger will not work correctly with multiprocessing, because the logger is stateful and race conditions arise between the copies of the logger in the different processes.
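The share-nothing approach can be sketched with a Pool initializer that builds the logger inside each worker process. This is a minimal illustration, assuming the real setup would create a fresh watchtower.CloudWatchLogHandler in init_worker; a per-PID FileHandler stands in for it here:

```python
import logging
import multiprocessing
import os

def init_worker():
    # Runs after fork/spawn in each worker: build the logger and its
    # handler inside the process, so no handler state is shared with
    # the parent. In the real setup a new CloudWatchLogHandler would
    # be created here instead of the per-PID FileHandler.
    logger = logging.getLogger("multi")
    logger.setLevel(logging.DEBUG)
    logger.propagate = False
    logger.addHandler(logging.FileHandler("worker-%d.log" % os.getpid()))

def worker(var):
    logging.getLogger("multi").debug("Incoming variable: %s", var)
    return var + 1

if __name__ == "__main__":
    with multiprocessing.Pool(processes=3, initializer=init_worker) as pool:
        outputs = pool.map(worker, range(10))
    print(outputs)
```

The trade-off versus the queue approach is one log stream (or file) per process, but there is no shared state at all, so nothing can deadlock or be lost to a half-copied handler.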