
Error on Restarting Celery

See original GitHub issue

After upgrading from kombu==4.2.1 to kombu==4.2.2, I’m now seeing the error below during warm Celery shutdowns. Unacked messages don’t get restored, so the tasks are lost.

Downgrading to kombu==4.2.1 fixes the issue for me.
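
If you need to stay on the last working release until a fix ships, pinning kombu back is a reasonable stopgap. A minimal sketch, assuming a pip-managed environment (adjust to your own dependency tooling):

# requirements.txt (stopgap): pin kombu to the last known-good release
kombu==4.2.1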

worker: Warm shutdown (MainProcess)
[2018-12-19 13:16:30,578: WARNING/MainProcess] Restoring 1 unacknowledged message(s)
[2018-12-19 13:16:30,578: WARNING/MainProcess] Traceback (most recent call last):
[2018-12-19 13:16:30,578: WARNING/MainProcess] File "/usr/local/lib/python2.7/multiprocessing/util.py", line 274, in _run_finalizers
[2018-12-19 13:16:30,578: WARNING/MainProcess] finalizer()
[2018-12-19 13:16:30,579: WARNING/MainProcess] File "/usr/local/lib/python2.7/multiprocessing/util.py", line 207, in __call__
[2018-12-19 13:16:30,579: WARNING/MainProcess] res = self._callback(*self._args, **self._kwargs)
[2018-12-19 13:16:30,579: WARNING/MainProcess] File "/usr/local/lib/python2.7/site-packages/kombu/transport/virtual/base.py", line 287, in restore_unacked_once
[2018-12-19 13:16:30,579: WARNING/MainProcess] unrestored = self.restore_unacked()
[2018-12-19 13:16:30,579: WARNING/MainProcess] File "/usr/local/lib/python2.7/site-packages/kombu/transport/redis.py", line 164, in restore_unacked
[2018-12-19 13:16:30,580: WARNING/MainProcess] with self.channel.conn_or_acquire(client) as client:
[2018-12-19 13:16:30,580: WARNING/MainProcess] File "/usr/local/lib/python2.7/contextlib.py", line 17, in __enter__
[2018-12-19 13:16:30,580: WARNING/MainProcess] return self.gen.next()
[2018-12-19 13:16:30,580: WARNING/MainProcess] File "/usr/local/lib/python2.7/site-packages/kombu/transport/redis.py", line 978, in conn_or_acquire
[2018-12-19 13:16:30,581: WARNING/MainProcess] yield self._create_client()
[2018-12-19 13:16:30,582: WARNING/MainProcess] File "/usr/local/lib/python2.7/site-packages/kombu/transport/redis.py", line 959, in _create_client
[2018-12-19 13:16:30,582: WARNING/MainProcess] return self.Client(connection_pool=self.pool)
[2018-12-19 13:16:30,582: WARNING/MainProcess] File "/usr/local/lib/python2.7/site-packages/kombu/transport/redis.py", line 983, in pool
[2018-12-19 13:16:30,583: WARNING/MainProcess] self._pool = self._get_pool()
[2018-12-19 13:16:30,583: WARNING/MainProcess] File "/usr/local/lib/python2.7/site-packages/kombu/transport/redis.py", line 962, in _get_pool
[2018-12-19 13:16:30,583: WARNING/MainProcess] params = self._connparams(asynchronous=asynchronous)
[2018-12-19 13:16:30,583: WARNING/MainProcess] File "/usr/local/lib/python2.7/site-packages/kombu/transport/redis.py", line 888, in _connparams
[2018-12-19 13:16:30,584: WARNING/MainProcess] 'host': conninfo.hostname or '127.0.0.1',
[2018-12-19 13:16:30,584: WARNING/MainProcess] AttributeError: 'NoneType' object has no attribute 'hostname'

Package versions:

kombu==4.2.2
celery==4.2.1
redis==2.10.6
billiard==3.5.0.5
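
For context on the last frame: during the warm shutdown, the finalizer that restores unacked messages tries to rebuild a Redis client, but by that point the channel’s connection info appears to have already been torn down, so conninfo is None when _connparams reads conninfo.hostname. Below is a minimal, standalone sketch of that failure mode; it is not kombu’s actual code, and the class and attribute names are simplified stand-ins:

# Illustrative sketch only (simplified stand-in names, not kombu internals).
class FakeRedisChannel(object):
    def __init__(self, conninfo):
        # In the real transport this would be the parsed broker connection info.
        self.conninfo = conninfo

    def _connparams(self):
        conninfo = self.conninfo
        # Raises AttributeError when conninfo is None, as in the traceback above.
        return {
            'host': conninfo.hostname or '127.0.0.1',
            'port': conninfo.port or 6379,
        }

channel = FakeRedisChannel(conninfo=None)  # state after the connection was torn down
channel._connparams()  # AttributeError: 'NoneType' object has no attribute 'hostname'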

Issue Analytics

  • State: closed
  • Created 5 years ago
  • Comments: 8

Top GitHub Comments

1 reaction
ntravis commented, Dec 21, 2018

See #966 (#963 is basically the same issue as yours; note that 4.2.2 was cut from master, not just from the 4.2 branch).

0 reactions
vdmit11 commented, Apr 17, 2019

@vdmit11 I’m curious if you’ve seen eb6e4c8

It seems like this reverts the original change and fixes that issue with a dependency update, while moving channel._on_connection_disconnect(self) back to where it was originally.

Are people still experiencing this? It seems to be resolved.

I haven’t tried it yet, but I think that should fix this specific bug with shutdown.

But as I mentioned earlier, that looks like a bad fix to me, because in this case the _on_connection_disconnect() call simply has no effect. The bug is only temporarily masked by the fact that the function is called at the wrong time (when the socket is already closed), so it does nothing and returns immediately, never letting the buggy code execute. In the future, this bug may appear again.
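
To illustrate the masking concern described above, here is a small sketch with made-up names, not kombu’s code: a disconnect hook that only ever runs after the socket is already closed returns immediately, so the faulty cleanup path is never exercised and the underlying bug stays latent rather than fixed:

# Illustrative sketch only (made-up names, not kombu internals).
class FakeChannel(object):
    def __init__(self):
        self.socket_open = True

    def _on_connection_disconnect(self, connection):
        if not self.socket_open:
            # Socket already closed: nothing to do, so the buggy branch
            # below never runs and the problem is merely masked.
            return
        self._buggy_cleanup(connection)

    def _buggy_cleanup(self, connection):
        # Hypothetical latent bug that would only trigger while the socket is open.
        raise RuntimeError("latent cleanup bug")

channel = FakeChannel()
channel.socket_open = False              # by the time the hook fires, the socket is closed
channel._on_connection_disconnect(None)  # returns immediately; the bug is never hit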
