[BUG] Setting the maxmemory flag on Redis causes the platform pods to crash loop once Redis reaches max memory
Description
Setting the maxmemory flag on Redis causes the platform pods to crash loop once Redis reaches max memory.
ReplyError: OOM command not allowed when used memory > 'maxmemory'.
at parseError (/opt/opencti/build/node_modules/redis-parser/lib/parser.js:179:12)
at parseType (/opt/opencti/build/node_modules/redis-parser/lib/parser.js:302:14)
Environment
- OS (where OpenCTI server runs): CentOS Stream 8
- OpenCTI version: 5.4.0
- OpenCTI client: frontend
- Other environment details: Kubernetes Deployment
Reproducible Steps
Steps to create the smallest reproducible scenario:
- Start the Redis server with redis-server --maxmemory 6442450944 --maxmemory-policy volatile-lru
- Wait for memory usage to balloon in the Redis container
- Once it levels off at ~6GB (or whatever you set above), the platform pods will start to crash after some time (a client-side sketch of this failure mode is below)
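What seems to happen is that once maxmemory is reached, volatile-lru has nothing to evict (it only evicts keys with a TTL set), so Redis starts rejecting writes with the OOM error and the platform pods never recover. A minimal client-side sketch of that failure mode, assuming ioredis and the Redis instance started above (the stream name here is made up):

```typescript
// Hypothetical reproduction sketch (not OpenCTI code). It appends large
// entries to a stream until the server hits maxmemory; at that point XADD
// is rejected with "OOM command not allowed when used memory > 'maxmemory'"
// and ioredis surfaces it as a ReplyError, like the trace in this issue.
import Redis from 'ioredis';

const redis = new Redis({ host: '127.0.0.1', port: 6379 });

async function fillStream(): Promise<void> {
  const payload = 'x'.repeat(64 * 1024); // ~64 KiB per entry
  for (let i = 0; ; i += 1) {
    try {
      // No MAXLEN, so the stream grows without bound.
      await redis.xadd('stream.test', '*', 'data', payload);
    } catch (err) {
      console.error(`XADD rejected after ${i} entries:`, err);
      break;
    }
  }
  redis.disconnect();
}

fillStream();
```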
Expected Output
Better management of the Redis connection handling. I've got a bit of a catch-22 here:
On one hand, if I don't set maxmemory, Kubernetes will OOMKill Redis (this in itself is fine), but when that happens OpenCTI stalls, CPU usage drops to near zero, and the web UI locks up.
On the other hand, if I cap Redis, Kubernetes won't OOMKill it, but the OpenCTI platform pods will throw the error in this issue and crash.
So since clustering isn't supported, the platform ends up requiring manual intervention every 12-36 hours to keep it running.
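For what it's worth, the usual Redis-level way out of this catch-22 is to bound the stream itself rather than rely on eviction: trim on write with XADD ... MAXLEN ~ N, or periodically with XTRIM. A rough sketch of both, assuming ioredis (the stream name and threshold are placeholders; whether the platform already exposes something like this is for the maintainers to say):

```typescript
// Hypothetical sketch (not OpenCTI code): keep a stream's memory bounded by
// approximate trimming instead of relying on maxmemory eviction.
import Redis from 'ioredis';

const redis = new Redis({ host: '127.0.0.1', port: 6379 });

// Trim on write: '~' lets Redis trim lazily at macro-node boundaries,
// which is much cheaper than exact trimming.
async function appendBounded(field: string, value: string): Promise<string | null> {
  return redis.xadd('stream.test', 'MAXLEN', '~', '1000000', '*', field, value);
}

// Or trim an existing stream out of band, e.g. from a periodic job.
async function trimStream(): Promise<number> {
  return redis.xtrim('stream.test', 'MAXLEN', '~', '1000000');
}
```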
Actual Output
ReplyError: OOM command not allowed when used memory > 'maxmemory'.
at parseError (/opt/opencti/build/node_modules/redis-parser/lib/parser.js:179:12)
at parseType (/opt/opencti/build/node_modules/redis-parser/lib/parser.js:302:14)
Additional information
Please let me know if I can help with testing here, as this is a big stability issue at the moment.
Screenshots (optional)
Top GitHub Comments
We need to take a look at what is currently taking up space in your Redis. I can recommend using RedisInsight, which will give you a nice UI to explore the current Redis data and find which keys or streams are taking up so much space.
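If running a UI inside the cluster is not an option, the same investigation can be scripted. A small sketch, assuming ioredis (connection details and the top-N cutoff are placeholders), that lists the keys using the most memory via SCAN and MEMORY USAGE:

```typescript
// Hypothetical sketch: report the largest keys in a Redis instance without a UI.
import Redis from 'ioredis';

const redis = new Redis({ host: '127.0.0.1', port: 6379 });

async function reportLargestKeys(): Promise<void> {
  const sizes: Array<[string, number]> = [];
  let cursor = '0';
  do {
    // SCAN iterates keys incrementally instead of blocking the server like KEYS *.
    const [next, keys] = await redis.scan(cursor, 'COUNT', 1000);
    cursor = next;
    for (const key of keys) {
      // MEMORY USAGE returns the approximate byte size of a key (nil if missing).
      const bytes = Number(await redis.call('MEMORY', 'USAGE', key)) || 0;
      sizes.push([key, bytes]);
    }
  } while (cursor !== '0');

  sizes.sort((a, b) => b[1] - a[1]);
  for (const [key, bytes] of sizes.slice(0, 20)) {
    console.log(`${key}: ${(bytes / 1024 / 1024).toFixed(1)} MiB`);
  }
  redis.disconnect();
}

reportLargestKeys();
```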
It does not as of now