
Reentrant error when starting Rubrix in Docker

See original GitHub issue

Hello again. I'm trying to start Rubrix in k8s, and at container startup I get this error:

--- Logging error ---
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/logging/__init__.py", line 1029, in emit
    self.flush()
  File "/usr/local/lib/python3.7/logging/__init__.py", line 1009, in flush
    self.stream.flush()
RuntimeError: reentrant call inside <_io.BufferedWriter name='<stderr>'>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/logging/__init__.py", line 1029, in emit
    self.flush()
  File "/usr/local/lib/python3.7/logging/__init__.py", line 1009, in flush
    self.stream.flush()
  File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 242, in handle_chld
    self.reap_workers()
  File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 533, in reap_workers
    os.WTERMSIG(status)
  File "/usr/local/lib/python3.7/site-packages/gunicorn/glogging.py", line 261, in warning
    self.error_log.warning(msg, *args, **kwargs)
  File "/usr/local/lib/python3.7/logging/__init__.py", line 1390, in warning
    self._log(WARNING, msg, args, **kwargs)
  File "/usr/local/lib/python3.7/logging/__init__.py", line 1514, in _log
    self.handle(record)
  File "/usr/local/lib/python3.7/logging/__init__.py", line 1524, in handle
    self.callHandlers(record)
  File "/usr/local/lib/python3.7/logging/__init__.py", line 1586, in callHandlers
    hdlr.handle(record)
  File "/usr/local/lib/python3.7/logging/__init__.py", line 894, in handle
    self.emit(record)
  File "/usr/local/lib/python3.7/logging/__init__.py", line 1033, in emit
    self.handleError(record)
  File "/usr/local/lib/python3.7/logging/__init__.py", line 956, in handleError
    traceback.print_stack(frame, file=sys.stderr)
  File "/usr/local/lib/python3.7/traceback.py", line 190, in print_stack
    print_list(extract_stack(f, limit=limit), file=file)
  File "/usr/local/lib/python3.7/traceback.py", line 25, in print_list
    print(item, file=file, end="")
RuntimeError: reentrant call inside <_io.BufferedWriter name='<stderr>'>
Call stack:
  File "/usr/local/bin/gunicorn", line 8, in <module>
    sys.exit(run())
  File "/usr/local/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 67, in run
    WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()
  File "/usr/local/lib/python3.7/site-packages/gunicorn/app/base.py", line 231, in run
    super().run()
  File "/usr/local/lib/python3.7/site-packages/gunicorn/app/base.py", line 72, in run
    Arbiter(self).run()
  File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 211, in run
    self.manage_workers()
  File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 551, in manage_workers
    self.spawn_workers()
  File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 622, in spawn_workers
    self.spawn_worker()
  File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 573, in spawn_worker
    pid = os.fork()
  File "/usr/local/lib/python3.7/logging/__init__.py", line 221, in _releaseLock
    def _releaseLock():
  File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 242, in handle_chld
    self.reap_workers()
  File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 533, in reap_workers
    os.WTERMSIG(status)
  File "/usr/local/lib/python3.7/site-packages/gunicorn/glogging.py", line 261, in warning
    self.error_log.warning(msg, *args, **kwargs)
  File "/usr/local/lib/python3.7/logging/__init__.py", line 1390, in warning
    self._log(WARNING, msg, args, **kwargs)
  File "/usr/local/lib/python3.7/logging/__init__.py", line 1514, in _log
    self.handle(record)
  File "/usr/local/lib/python3.7/logging/__init__.py", line 1524, in handle
    self.callHandlers(record)
  File "/usr/local/lib/python3.7/logging/__init__.py", line 1586, in callHandlers
    hdlr.handle(record)
  File "/usr/local/lib/python3.7/logging/__init__.py", line 894, in handle
    self.emit(record)
  File "/usr/local/lib/python3.7/logging/__init__.py", line 1025, in emit
    msg = self.format(record)
  File "/usr/local/lib/python3.7/logging/__init__.py", line 869, in format
    return fmt.format(record)
  File "/usr/local/lib/python3.7/logging/__init__.py", line 608, in format
    record.message = record.getMessage()
  File "/usr/local/lib/python3.7/logging/__init__.py", line 367, in getMessage
    msg = str(self.msg)
  File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 242, in handle_chld
    self.reap_workers()
  File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 533, in reap_workers
    os.WTERMSIG(status)
  File "/usr/local/lib/python3.7/site-packages/gunicorn/glogging.py", line 261, in warning
    self.error_log.warning(msg, *args, **kwargs)
Message: 'Worker with pid %s was terminated due to signal %s'
Arguments: (965651, 9)

Before this error, many workers repeatedly try to start and are terminated due to signal 9 (SIGKILL); in a pod with a hard memory limit, that often points to the kernel OOM killer.

  • OS: Ubuntu 20.04.2 LTS, Linux version 5.4.0-96-generic (buildd@lgw01-amd64-051) (gcc version 9.3.0 (Ubuntu 9.3.0-17ubuntu1~20.04))
  • Rubrix Version : 0.14.0
  • Docker Image: master
  • CPU info: Thread(s) per core: 2, Core(s) per socket: 10, Socket(s): 2
  • k8s pod resources: requests and limits both cpu: 500m, memory: 1Gi (see the manifest sketch after this list)
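
The flattened resources line above, reconstructed as pod-spec YAML for readability (only the values come from the report; the surrounding structure is the standard Kubernetes resources block):

    resources:
      requests:
        cpu: 500m        # half a CPU core reserved
        memory: 1Gi
      limits:
        cpu: 500m
        memory: 1Gi      # workers exceeding this are OOM-killed with signal 9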

I found a similar issue reported against gunicorn (https://github.com/benoitc/gunicorn/issues/2564), but it doesn't seem helpful. Increasing the pod resources did not help at all.
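
Not from the original thread, but one low-risk experiment: the call stack shows the container runs gunicorn directly, and gunicorn reads extra command-line arguments from its standard GUNICORN_CMD_ARGS environment variable, so the worker count can be lowered from the pod spec without rebuilding the image (assuming the image's start command does not already pin --workers, which would take precedence):

    # Hedged sketch; the container name is a placeholder.
    containers:
      - name: rubrix
        env:
          - name: GUNICORN_CMD_ARGS
            value: "--workers 1"   # fewer forks, less signal-handler churn under tight limits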

I would be grateful for any help. If you need more information about my environment, I can provide it.

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 6 (4 by maintainers)

Top GitHub Comments

1 reaction
frascuchon commented, May 24, 2022

I'll close this issue. Please feel free to reopen it if you consider it necessary.

0 reactions
frascuchon commented, May 20, 2022

Hi @zelanastasia

You could manage the number of workers at the Kubernetes level: increase the number of replicas in your deployment manifest and expose them through a Service of type LoadBalancer (see https://kubernetes.io/docs/concepts/services-networking/service/).
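
A minimal sketch of that setup; every name, label, image tag, and port below is an assumption for illustration, not taken from the Rubrix chart:

    # Hypothetical Deployment: scale out with replicas instead of gunicorn workers.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: rubrix
    spec:
      replicas: 3                  # more pods, each within its own resource limits
      selector:
        matchLabels:
          app: rubrix
      template:
        metadata:
          labels:
            app: rubrix
        spec:
          containers:
            - name: rubrix
              image: recognai/rubrix:v0.14.0   # assumed image/tag
              ports:
                - containerPort: 80            # assumed port
    ---
    # Hypothetical Service spreading traffic across the replicas.
    apiVersion: v1
    kind: Service
    metadata:
      name: rubrix
    spec:
      type: LoadBalancer
      selector:
        app: rubrix
      ports:
        - port: 80
          targetPort: 80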
