
Intermittent Fatal Python error: could not acquire lock for <_io.BufferedWriter name='<stderr>'> at interpreter shutdown, possibly due to daemon threads

See original GitHub issue

The SDK is intermittently causing fatal errors like this:

[sentry] DEBUG: atexit: got shutdown signal
[sentry] DEBUG: atexit: shutting down client
[sentry] DEBUG: Flushing HTTP transport
[sentry] DEBUG: background worker got flush request
[sentry] DEBUG: background worker flushed
[sentry] DEBUG: Killing HTTP transport
[sentry] DEBUG: background worker got kill request
[sentry] DEBUG: Killing HTTP transport
Fatal Python error: could not acquire lock for <_io.BufferedWriter name='<stderr>'> at interpreter shutdown, possibly due to daemon threads

Thread 0x00007f4debd63700 (most recent call first):
  File "/usr/lib/python3.7/logging/__init__.py", line 1009 in flush
  File "/usr/lib/python3.7/logging/__init__.py", line 1029 in emit
  File "/usr/lib/python3.7/logging/__init__.py", line 894 in handle
  File "/usr/lib/python3.7/logging/__init__.py", line 1586 in callHandlers
  File "/usr/local/lib/python3.7/dist-packages/sentry_sdk/integrations/logging.py", line 85 in sentry_patched_callhandlers
  File "/usr/lib/python3.7/logging/__init__.py", line 1524 in handle
  File "/usr/lib/python3.7/logging/__init__.py", line 1514 in _log
  File "/usr/lib/python3.7/logging/__init__.py", line 1366 in debug
  File "/usr/local/lib/python3.7/dist-packages/sentry_sdk/worker.py", line 78 in kill
  File "/usr/local/lib/python3.7/dist-packages/sentry_sdk/transport.py", line 357 in kill
  File "/usr/local/lib/python3.7/dist-packages/sentry_sdk/transport.py", line 98 in __del__
  File "/usr/local/lib/python3.7/dist-packages/sentry_sdk/worker.py", line 118 in _target
  File "/usr/lib/python3.7/threading.py", line 870 in run
  File "/usr/local/lib/python3.7/dist-packages/sentry_sdk/integrations/threading.py", line 67 in run
  File "/usr/lib/python3.7/threading.py", line 926 in _bootstrap_inner
  File "/usr/lib/python3.7/threading.py", line 890 in _bootstrap

Current thread 0x00007f4df23a7700 (most recent call first):

Environment:

  • Python 3.7.9
  • sentry-sdk==0.18.0
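
For context: this class of fatal error is not specific to sentry-sdk. CPython aborts with it whenever a thread still holds, or is blocked on, the lock of a buffered standard stream while the interpreter is finalizing, and daemon threads are the usual culprits because they are frozen mid-operation at shutdown. Below is a minimal, hedged sketch of the race, not the exact sentry-sdk repro, and since it is a race it only fails intermittently:

import sys
import threading
import time

def spam_stderr():
    # Daemon thread writing to stderr in a tight loop; at shutdown it
    # may be holding the BufferedWriter lock at the moment it is frozen.
    while True:
        sys.stderr.write("daemon thread writing\n")

threading.Thread(target=spam_stderr, daemon=True).start()

# The main thread exits almost immediately, so interpreter shutdown
# races the daemon thread for the stderr lock. When the timing is
# unlucky this dies with the same "could not acquire lock for
# <_io.BufferedWriter name='<stderr>'>" fatal error.
time.sleep(0.01)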

Issue Analytics

  • State: open
  • Created: 3 years ago
  • Reactions: 2
  • Comments: 8 (2 by maintainers)

Top GitHub Comments

1 reaction
alexanderilyin commented, Oct 19, 2020

Seems like the problem is coming from the sentry_sdk.init( ... debug=True ... ) param; disabling debug appears to have helped. The error happened more frequently when reporting transactions/spans, which I'm not currently doing. I don't have a reproducible case I can share at the moment because of the intermittent nature of the error. One more detail: the SDK is used in a Python CLI app.
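
A hedged sketch of that workaround (the DSN below is a placeholder, not a real project key). debug=True makes the SDK emit its own diagnostics through the stdlib logging module, which is exactly the code path visible in the traceback above (worker.py kill -> logging debug -> handler flush), so leaving it off should keep the SDK's shutdown path from writing to stderr:

import sentry_sdk

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    debug=False,  # the default; debug=True routes SDK diagnostics
                  # through logging, the code path seen in the crash
)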

1 reaction
alexanderilyin commented, Oct 17, 2020

Probably disabling atexit could help; see the sketch after the log below.

17:05:21  [sentry] DEBUG: Setting up integrations (with default = True)
17:05:21  [sentry] DEBUG: Setting up previously not enabled integration logging
17:05:21  [sentry] DEBUG: Setting up previously not enabled integration stdlib
17:05:21  [sentry] DEBUG: Setting up previously not enabled integration excepthook
17:05:21  [sentry] DEBUG: Setting up previously not enabled integration dedupe
17:05:21  [sentry] DEBUG: Setting up previously not enabled integration atexit
17:05:21  [sentry] DEBUG: Setting up previously not enabled integration modules
17:05:21  [sentry] DEBUG: Setting up previously not enabled integration argv
17:05:21  [sentry] DEBUG: Setting up previously not enabled integration threading
17:05:21  [sentry] DEBUG: Enabling integration logging
17:05:21  [sentry] DEBUG: Enabling integration stdlib
17:05:21  [sentry] DEBUG: Enabling integration excepthook
17:05:21  [sentry] DEBUG: Enabling integration dedupe
17:05:21  [sentry] DEBUG: Enabling integration atexit
17:05:21  [sentry] DEBUG: Enabling integration modules
17:05:21  [sentry] DEBUG: Enabling integration argv
17:05:21  [sentry] DEBUG: Enabling integration threading
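
A sketch of one way to act on that suggestion, under the assumption that the sentry-sdk 0.x API offers no switch for turning off a single default integration: default_integrations=False disables atexit along with all the other defaults seen in the log above, so the sketch flushes queued events explicitly before exit instead. The DSN is again a placeholder:

import sentry_sdk

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    default_integrations=False,  # disables atexit, but also logging,
                                 # excepthook, dedupe, and the rest
)

def main():
    ...  # the CLI's actual work

main()

# With the atexit hook gone, drain the transport queue explicitly
# while the interpreter is still fully alive, rather than relying on
# shutdown-time hooks racing daemon threads.
sentry_sdk.flush(timeout=2.0)

Alternatively, AtexitIntegration itself accepts a callback argument, so passing a configured instance via integrations=[AtexitIntegration(callback=...)] should let you customize the shutdown behavior without dropping the other defaults; treat that as an assumption to verify against the version in use.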
Read more comments on GitHub >

Top Results From Across the Web

  • How to fix a 'fatal Python error: _enter_buffered_busy: could not aquire lock for <_io.BufferedWriter name='<stdout>'> at interpreter shutdown'
  • The python interpreter crashed with "_enter_buffered_busy": "... could not acquire lock for <_io.BufferedWriter name='<stderr>'> at interpreter shutdown, possibly due to daemon threads ..."
  • What are possible causes of this interpreter error?: "Fatal Python error: could not acquire lock for <_io.BufferedWriter name='<stdout>'> at interpreter shutdown, possibly due to daemon threads."
  • Output console doesn't release stdin lock before finalization: "Fatal Python error: could not acquire lock for <_io.BufferedReader name='<stdin>'> at interpreter shutdown, possibly due to daemon threads ..."
  • fatal python error: could not acquire lock for <_io.bufferedwriter name='<stdout>'> at interpreter shutdown, possibly due to daemon threads
