Celery Tasks Fail Randomly with redis.exceptions.ResponseError: wrong number of arguments for 'subscribe' command
Describe the bug
We are facing this issue intermittently: Celery raises the error below on a scheduled run. I believe it is caused by a race condition related to asyncio. We are using the single-pod deployment only, and even with that configuration the issue pops up randomly.
{"stackTrace": "Traceback (most recent call last):\n File \"/code/ops/tasks/anomalyDetectionTasks.py\", line 85, in
anomalyDetectionJob\n result = _detectionJobs.get()\n File \"/opt/venv/lib/python3.7/site-packages/celery/result.py\", line 680, in get\n on_interval=on_interval,\n File \"/opt/venv/lib/python3.7/site-packages/celery/result.py\", line 799, in
join_native\n on_message, on_interval):\n File \"/opt/venv/lib/python3.7/site-packages/celery/backends/asynchronous.py\",
line 150, in iter_native\n for _ in self._wait_for_pending(result, no_ack=no_ack, **kwargs):\n File
\"/opt/venv/lib/python3.7/site-packages/celery/backends/asynchronous.py\", line 267, in _wait_for_pending\n
on_interval=on_interval):\n File \"/opt/venv/lib/python3.7/site-packages/celery/backends/asynchronous.py\", line 54, in
drain_events_until\n yield self.wait_for(p, wait, timeout=interval)\n File \"/opt/venv/lib/python3.7/site-
packages/celery/backends/asynchronous.py\", line 63, in wait_for\n wait(timeout=timeout)\n File
\"/opt/venv/lib/python3.7/site-packages/celery/backends/redis.py\", line 152, in drain_events\n message =
self._pubsub.get_message(timeout=timeout)\n File \"/opt/venv/lib/python3.7/site-packages/redis/client.py\", line 3617, in
get_message\n response = self.parse_response(block=False, timeout=timeout)\n File \"/opt/venv/lib/python3.7/site-
packages/redis/client.py\", line 3505, in parse_response\n response = self._execute(conn, conn.read_response)\n File
\"/opt/venv/lib/python3.7/site-packages/redis/client.py\", line 3479, in _execute\n return command(*args, **kwargs)\n File
\"/opt/venv/lib/python3.7/site-packages/redis/connection.py\", line 756, in read_response\n raise
response\nredis.exceptions.ResponseError: wrong number of arguments for 'subscribe' command\n", "message": "wrong
number of arguments for 'subscribe' command"}
To Reproduce
Steps to reproduce the behavior:
- Create an anomaly definition
- Schedule it to run at a specific time (see the scheduling sketch after this list)
- Some scheduled runs succeed, while others fail with the error above
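For context, a scheduled run of this kind is typically wired up through Celery beat. The sketch below is generic and hypothetical (broker/backend URLs, schedule, and argument are assumptions); only the task path is taken from the traceback above:

```python
# Hypothetical Celery beat configuration for a periodic anomaly-detection run.
from celery import Celery
from celery.schedules import crontab

app = Celery("ops", broker="redis://localhost:6379/0", backend="redis://localhost:6379/1")

app.conf.beat_schedule = {
    "run-anomaly-detection": {
        # Task name assumed from the module path in the traceback.
        "task": "ops.tasks.anomalyDetectionTasks.anomalyDetectionJob",
        "schedule": crontab(hour=2, minute=0),  # e.g. run daily at 02:00
        "args": (42,),  # hypothetical anomaly definition id
    },
}
```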
Expected behavior
Is there any workaround we can use to avoid this issue? Please help.
Issue Analytics
- Created: 2 years ago
- Comments: 5 (3 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@ankitkpandey can’t this exception be handled and retried, something like the Task Retry Decorator?
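For illustration, the retry-decorator idea could look roughly like this. It is a minimal sketch with hypothetical names and settings, using Celery's built-in autoretry_for option to retry the task when the Redis result backend raises the error from the traceback:

```python
# Minimal sketch of task-level retry for the intermittent Redis error (names are hypothetical).
from celery import Celery
from redis.exceptions import ResponseError

app = Celery("ops", broker="redis://localhost:6379/0", backend="redis://localhost:6379/1")

@app.task(
    bind=True,
    autoretry_for=(ResponseError,),   # retry the task when this exception propagates
    retry_backoff=True,               # exponential backoff between attempts
    retry_kwargs={"max_retries": 3},
)
def anomalyDetectionJob(self, anomaly_def_id):
    # ... dispatch detection subtasks and wait on their results; if the result
    # backend raises ResponseError here, Celery schedules a retry of the whole task.
    ...
```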
@ankitkpandey Thanks for sharing this. So I assume this is happening because Celery doesn’t support asyncio yet, and the call is being made in an async fashion.
On a separate note, is there any workaround you can provide for this?
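One possible call-site workaround (a sketch only, not a fix confirmed in this thread) is to catch the intermittent ResponseError around the blocking get() and retry it a few times; the helper name and parameters below are illustrative:

```python
# Sketch of retrying the blocking .get() when the pubsub race surfaces as ResponseError.
import time
from redis.exceptions import ResponseError

def get_with_retry(async_result, attempts=3, delay=2.0, timeout=300):
    """Call result.get(), retrying when the Redis result backend raises ResponseError."""
    for attempt in range(attempts):
        try:
            return async_result.get(timeout=timeout)
        except ResponseError:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            time.sleep(delay)

# Usage inside the task, replacing the direct call from the traceback:
# result = get_with_retry(_detectionJobs)
```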