Likely memory leak under high load in 3.1.0+
Describe the bug
When using the Docker image of Enketo Express, we can see that RAM usage increases constantly under load. From a fresh start, RAM usage is about 600-800 MB, but after a few days it grows to several GB.
Notes: the growth depends on the number of requests received.
To Reproduce
Use a load tester such as siege or locust and hit the root URL, http://enketo-express/.
Set it to hit Enketo with 3-4 requests/s. After 24 hours, the RAM used by the EE Docker container should be around 8 GB; after 3 days, ~24 GB.
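For reference, a minimal locustfile along these lines reproduces the traffic pattern described above. It is only a sketch: the host URL and pacing are illustrative assumptions, not the exact script that was used.

```python
# locustfile.py -- sketch of the load pattern described above
# (host URL and pacing are assumptions, not the original script).
from locust import HttpUser, constant_pacing, task


class EnketoRootUser(HttpUser):
    # Point this at the Enketo Express instance (here behind the reverse proxy).
    host = "http://enketo-express"

    # One request per second per simulated user; run with 3-4 users
    # to approximate the ~3-4 requests/s described in the report.
    wait_time = constant_pacing(1)

    @task
    def hit_root(self):
        # Only the root URL is hit, so no authentication is required.
        self.client.get("/")
```

Running it headless, e.g. `locust -f locustfile.py --headless -u 4 -r 1`, for 24 hours or more while watching `docker stats` should make the memory growth visible.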
Expected behavior
RAM usage should stay below a reasonable amount (e.g. 2 GB max).
Screenshots
None, but I can re-run my tests to get some if needed.
Browser and OS (please complete the following information):
- Docker version 20.10.14, build a224086
- Ubuntu 20.04 LTS
Additional context
The Enketo Express Docker container is behind NGINX, which serves as the reverse proxy.

Hello @lognaturel,
I finally tested `2.2.0`, `2.8.1`, `3.0.4`, `3.0.5` and, just to be sure, `3.1.0` again. The tests were exactly the same: I used `locust` to hit `/` with about 3 requests/s for about 3 hours.

For all the versions except the latter, memory usage stayed around 800 MB. Using `3.1.0`, it increased up to 2 GB. So it seems that only `3.1.0` and up are affected.

While digging a little bit into the diff of `3.1.0`, the async context caught my attention. I think the package used for the async context, `express-cls-hooked`, is involved somehow. Please have a look at this comment on the GitHub repository of the package.
Moreover, `express-cls-hooked` uses `cls-hooked`. A user claimed that library had a memory leak and pushed a PR to fix it.

Hello @lognaturel,
Actually, I've seen this with `3.1.0`, so I decided to test with the latest release available just to be sure it was not related to that particular version. So I guess it is not a new behaviour.

Unfortunately, I haven't. I discovered this issue during a load test (started two weeks ago) to find bottlenecks in our setup when facing such a load. In my tests I used the Enketo root because it was easier to set up (i.e. no auth) and to see how Node.js would behave, but in a real case I would only use the root for health-check monitoring (so even if we can have 3-4 requests/s on API endpoints, the root shouldn't receive such traffic). I can retry my tests with another `GET` API endpoint instead to validate it's not only happening from `/`.

I did not. My configuration is exactly the same. To be sure, my last load test was only hammering Enketo, to avoid any side effects with other containers.
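As a side note for anyone re-running these measurements (against `/` or another `GET` API endpoint), a small script along the following lines can record the container's memory usage over time. This is only an illustrative sketch, not part of the original report; the container name and output file are assumptions.

```python
#!/usr/bin/env python3
"""Sketch: sample a container's memory usage at a fixed interval.

The container name below is an assumption; replace it with the actual
Enketo Express container name (see `docker ps`).
"""
import csv
import subprocess
import time

CONTAINER = "enketo-express"   # assumed container name
INTERVAL_SECONDS = 60          # one sample per minute

with open("enketo_memory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "mem_usage"])
    while True:
        # `docker stats --no-stream` returns a single snapshot;
        # {{.MemUsage}} prints e.g. "812MiB / 31.3GiB".
        out = subprocess.run(
            ["docker", "stats", "--no-stream",
             "--format", "{{.MemUsage}}", CONTAINER],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        writer.writerow([int(time.time()), out])
        f.flush()
        time.sleep(INTERVAL_SECONDS)
```

Plotting the resulting CSV makes it easy to compare runs, e.g. the steady growth reported on `3.1.0`+ versus the flat ~800 MB seen on earlier versions.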