Dockerized Django Channels with Daphne: High CPU Usage
See original GitHub issue

Hi there, is anyone else facing this issue? I deploy Django with Channels in Docker, but after a few minutes CPU usage spikes. Here are the logs:
xing-auto-services-django | disconnecting channel_name: specific.AxyORdae!CLvVjxncUcOa
xing-auto-services-django | disconnecting group_name: task-cdaa5fb1-9356-44e4-b717-1b11139bf956
xing-auto-services-django | disconnecting code: 1001
xing-auto-services-django | disconnecting channel_name: specific.AxyORdae!kCOFZNhtNdkw
xing-auto-services-django | disconnecting group_name: task-cdaa5fb1-9356-44e4-b717-1b11139bf956
xing-auto-services-django | disconnecting code: 1001
xing-auto-services-django | disconnecting channel_name: specific.AxyORdae!WOrMSfcopVba
xing-auto-services-django | disconnecting group_name: task-827ecbda-c6b9-4727-aa00-f1ea314c2c77
xing-auto-services-django | disconnecting code: 1006
xing-auto-services-django | disconnecting channel_name: specific.AxyORdae!GMJPYqfoweEH
xing-auto-services-django | disconnecting group_name: task-827ecbda-c6b9-4727-aa00-f1ea314c2c77
xing-auto-services-django | disconnecting code: 1006
xing-auto-services-django | disconnecting channel_name: specific.AxyORdae!QBqoTrHtWjUj
xing-auto-services-django | disconnecting group_name: task-827ecbda-c6b9-4727-aa00-f1ea314c2c77
xing-auto-services-django | Application instance <Task pending coro=<SessionMiddlewareInstance.__call__() running at /usr/local/lib/python3.7/site-packages/channels/sessions.py:183> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7feb980b4d90>()]>> for connection <WebSocketProtocol client=['182.186.15.128', 62602] path=b'/ws/task/f9f6b948-e9f8-426e-b591-8dc4ea61cf3e/'> took too long to shut down and was killed.
consumers.py

from channels.exceptions import DenyConnection, StopConsumer
from channels.generic.websocket import AsyncJsonWebsocketConsumer


class TaskConsumer(AsyncJsonWebsocketConsumer):

    async def _check_auth(self):
        user = self.scope.get('user')
        if not user.is_superuser:
            raise DenyConnection

    async def connect(self):
        await self._check_auth()
        task_id = self.scope.get('url_route').get('kwargs').get('task_id')
        self.group_name = 'task-{}'.format(task_id)
        await self.channel_layer.group_add(
            self.group_name,
            self.channel_name
        )
        await self.accept()

    async def disconnect(self, code):
        print('disconnecting code:', code)
        print('disconnecting channel_name:', self.channel_name)
        print('disconnecting group_name:', self.group_name)
        # group_discard is a coroutine and must be awaited; without the
        # await it never actually runs, so stale channels accumulate in
        # the group on the Redis layer.
        await self.channel_layer.group_discard(
            self.group_name, self.channel_name
        )
        raise StopConsumer()

    async def send_event(self, event):
        del event['type']
        await self.send_json(event)
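For context, send_event() above is invoked when something publishes to the group with a message of type 'send.event': Channels resolves the message type to a handler method by replacing dots with underscores. A minimal sketch of that mapping (the group_send call in the comment is illustrative only, since it needs a running channel layer, and is not taken from the issue):

```python
# Sketch of how a Channels consumer resolves a group_send message type
# to a handler method name: dots in the "type" field become underscores.
def handler_name(message_type: str) -> str:
    return message_type.replace(".", "_")

# A background job (e.g. an RQ worker) would publish to the group roughly
# like this (hypothetical call, shown as a comment):
#   from asgiref.sync import async_to_sync
#   from channels.layers import get_channel_layer
#   async_to_sync(get_channel_layer().group_send)(
#       "task-<task_id>", {"type": "send.event", "progress": 42}
#   )

print(handler_name("send.event"))  # → send_event
```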
Docker-compose:
version: '3'
services:
  rq_default_worker:
    build: .
    env_file:
      - web.env
    command: ["./scripts/wait-for-it.sh", "web:8000", "--", "bash", "./scripts/run_rq_default_worker.sh"]
    volumes:
      - .:/code
    links:
      - db
      - redis
    depends_on:
      - web
  rq_high_worker:
    build: .
    env_file:
      - web.env
    command: ["./scripts/wait-for-it.sh", "web:8000", "--", "bash", "./scripts/run_rq_high_worker.sh"]
    volumes:
      - .:/code
    links:
      - db
      - redis
    depends_on:
      - web
  rq_low_worker:
    build: .
    env_file:
      - web.env
    command: ["./scripts/wait-for-it.sh", "web:8000", "--", "bash", "./scripts/run_rq_low_worker.sh"]
    volumes:
      - .:/code
    links:
      - db
      - redis
    depends_on:
      - web
  rq_default_scheduler:
    build: .
    env_file:
      - web.env
    command: ["./scripts/wait-for-it.sh", "web:8000", "--", "bash", "./scripts/run_default_rq_scheduler.sh"]
    volumes:
      - .:/code
    links:
      - db
      - redis
    depends_on:
      - web
  redis:
    image: "redis:latest"
    ports:
      - "6379:6379"
    volumes:
      - ./docker/redis/data:/data
  db:
    image: postgres
    restart: always
    env_file:
      - db.env
    volumes:
      - ./docker/postgres/data:/var/lib/postgresql/data/pgdata
    ports:
      - "5432:5432"
  nginx:
    restart: always
    image: nginx:alpine
    volumes:
      - ./docker/nginx/conf.d:/etc/nginx/conf.d
      - ./logs:/code/logs/
      - ./static:/code/static/
      - ./media:/code/media/
    ports:
      - "10080:80"
      - "10443:443"
    links:
      - web
  web:
    container_name: "xing-auto-services-django"
    env_file:
      - web.env
    build: .
    command: ["./scripts/wait-for-it.sh", "db:5432", "--", "bash", "./scripts/run_server.sh"]
    volumes:
      - .:/code
      - ./static:/code/static/
      - ./media:/code/media/
    ports:
      - "8000:8000"
    links:
      - db
      - redis
    restart: always
    depends_on:
      - db
      - redis
- Your OS and runtime environment, and browser if applicable
django-rq-scheduler==1.1.3
psycopg2==2.8.3
aioredis==1.2.0
amqp==2.5.0
anyjson==0.3.3
asgiref==3.1.4
asn1crypto==0.24.0
async-timeout==3.0.1
atomicwrites==1.3.0
attrs==19.1.0
autobahn==19.7.1
Automat==0.7.0
Babel==2.7.0
certifi==2019.9.11
cffi==1.12.3
channels==2.3.0
channels-redis==2.4.0
chardet==3.0.4
Click==7.0
constantly==15.1.0
cryptography==2.7
daphne==2.2.1
dateparser==0.7.2
dateutils==0.6.6
Django==2.2.3
django-rq==2.1.0
django-task==1.4.3
- What you expected to happen vs. what actually happened
- For the first few minutes it keeps sending messages just fine, but after a few minutes or hours the CPU jumps to 400% because of Redis.
- How you’re running Channels (runserver? daphne/runworker? Nginx/Apache in front?)
- Daphne behind Nginx
Here is the nginx conf:

upstream web_server {
    server web:8000;
}

server {
    listen 80;
    server_name 127.0.0.1;
    charset utf-8;
    client_max_body_size 20M;

    location /static/ {
        alias /code/static/;
    }

    location /media/ {
        alias /code/media/;
    }

    location / {
        proxy_pass http://web_server;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    proxy_connect_timeout 75s;
    proxy_read_timeout 300s;
}
I have been trying to fix this issue for a week, but I haven't been able to figure out what exactly the problem is…
Issue Analytics
- Created 4 years ago
- Comments: 5 (3 by maintainers)
Hi, thanks for all your help. Everything is working perfectly now. The one mistake I made was not closing the Redis port in Docker: the Redis port was published to the host, so someone else may have been connecting to it and using it, which somehow was killing the CPU. I changed the redis service from ports to expose, so only containers on the Docker network can access Redis, and I have never had an issue like that since. This might be helpful for someone making the same mistake.

Thanks for the follow-up @Ammadkhalid. Glad you found a fix.