Closing a session on the server leads to 100% CPU usage and makes the server hang
Server:
```python
import asyncio

import sockjs
from aiohttp import web
from sockjs import MSG_OPEN, MSG_MESSAGE, MSG_CLOSE, MSG_CLOSED

loop = asyncio.get_event_loop()
app = web.Application(loop=loop)


def handler(msg, session):
    tp = msg.tp
    print("tp", tp, msg, session)
    if tp == MSG_MESSAGE:  # onmessage
        print("session {} Got message: {}".format(session.id, msg.data))
        session.close()
    elif tp == MSG_OPEN:  # onopen
        print("session {} Open".format(session.id))
    elif tp == MSG_CLOSE or tp == MSG_CLOSED:  # onclose
        print("session {} {}".format(
            session.id, "Closing" if tp == MSG_CLOSE else "Closed"))


sockjs.add_endpoint(
    app,
    handler,
    name='skjs',
    prefix='/chat_sockjs',
    sockjs_cdn='https://cdn.jsdelivr.net/npm/sockjs-client@1/dist/sockjs.min.js',
)

if __name__ == '__main__':
    web.run_app(app, host='0.0.0.0', port=9321)
```
Client:
```html
<!doctype html>
<html>
<head>
  <meta charset='utf-8'>
</head>
<body>
  <div id="connmsg" style="display: none">Connected!</div>
  <pre id="last_delivered"></pre>
  <textarea id="sender" rows="4" style="width:100%"></textarea>
  <button onclick="return send()">Submit</button>
  <script src="https://cdn.jsdelivr.net/npm/sockjs-client@1/dist/sockjs.min.js"></script>
  <script>
    var SOCKJS_URL = 'http://localhost:9321/chat_sockjs';
    var sock = new SockJS(SOCKJS_URL, null, {transports: ['websocket']});
    sock.onopen = function () {
      document.getElementById("connmsg").style.display = 'block';
    };
    sock.onmessage = function (msg) {
      var node = document.createTextNode(msg.data);
      var el = document.getElementById("last_delivered");
      el.innerHTML = '';
      el.appendChild(node);
    };
    sock.onclose = function () {
      document.getElementById("connmsg").style.display = 'none';
    };
    function send() {
      var el = document.getElementById("sender");
      sock.send(el.value);
      el.value = '';
      return false;
    }
  </script>
</body>
</html>
```
You can reproduce the issue with the code above. Run the server on port 9321, open the HTML file, and send something. CPU usage quickly soars to 100% and the server becomes unresponsive to any further messages.
In some cases you may want the server to close a session (e.g. when it receives a frame in an unrecognized format). However, if you do so, something goes wrong in the internal code of sockjs: it gets trapped in an infinite loop and the server hangs. This is unacceptable in any asynchronous server setting.
I followed the code flow with pdb and found the following (a sketch of the loop is shown after this list):

- When the underlying websocket connection is asked to close, the socket state turns into `MsgType.closing`.
- From then on, `ws.receive()` always returns `ClosingMessage`. This happens inside the `aiohttp` code base.
- As a result, `ws.receive()` in the `client()` coroutine gets a closing message, but because nothing handles that case, it reaches `ws.receive()` again without breaking the loop or awaiting any event, and the same thing happens over and over. Take a look at `sockjs/websocket.py:35-61`.
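For context, the read loop in `sockjs/websocket.py` has roughly this shape (a simplified paraphrase, not the exact source; the session method names are approximate):

```python
# Rough paraphrase of the websocket transport's client() coroutine,
# illustrative only; the real code and names differ slightly.
from aiohttp import MsgType  # 1.x-era alias of WSMsgType

async def client(ws, session):
    while True:
        msg = await ws.receive()
        if msg.tp == MsgType.text:
            await session._remote_message(msg.data)  # deliver to the app handler
        elif msg.tp == MsgType.close:
            await session._remote_close()
        elif msg.tp in (MsgType.closed, MsgType.error):
            break
        # MsgType.closing falls through every branch: ws.receive() keeps
        # returning the same ClosingMessage immediately, so the loop spins
        # at 100% CPU and never lets other tasks run.
```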
I guess the easiest fix is to add a handling case for the `MsgType.closing` condition (see the fragment below), but I couldn't work it out myself as I don't understand the code base yet. The thing is, until this gets done right, the sockjs server is almost unusable (or maybe I should stick to xhr-streaming). I'm willing to make a PR for this issue if you could help me 😃
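Concretely, I imagine something along these lines slotted into the loop sketched above (just a sketch; the real fix may need to do more than `break`):

```python
        elif msg.tp == MsgType.closing:
            # Our own side started the close handshake; stop reading instead
            # of spinning on the same closing message forever.
            break
```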
After looking at the changes in the websocket code base of aiohttp, I found a clue to where this bug was introduced.

`ClosingState` was added to aiohttp websockets when they introduced the ability to close a websocket concurrently on the server side, which was released in 1.3.0 (2017-02-08): https://github.com/aio-libs/aiohttp/commit/ae38c4ac7647d4b904a949088f44ed539553a26e

It seems that after introducing the `closing` state, which is used when the websocket is asked to be closed on the server side, every call to `ws.receive()` is forced to return `CLOSING_MESSAGE`, indicating just that the server is shutting down the stream. We got the broken implementation as a side effect, though 😦

Finally fixed by #194, I hope.
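For anyone hitting this before the fix lands: a plain aiohttp read loop has to treat `CLOSING` as end-of-stream too. A minimal sketch using current `WSMsgType` names (`handle_text` is just a placeholder for application logic, not part of sockjs):

```python
from aiohttp import WSMsgType

async def read_loop(ws, handle_text):
    """Drain an aiohttp websocket, tolerating a concurrent server-side close."""
    while True:
        msg = await ws.receive()
        if msg.type == WSMsgType.TEXT:
            handle_text(msg.data)
        elif msg.type in (WSMsgType.CLOSE, WSMsgType.CLOSING,
                          WSMsgType.CLOSED, WSMsgType.ERROR):
            # CLOSING is emitted after our own ws.close() starts; treating it
            # like CLOSE/CLOSED avoids the busy loop described above.
            break
```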