Add `noListener` option to `useDb()` to help memory usage in cases where the app is calling `useDb()` on every request
**Do you want to request a feature or report a bug?**
I want to report a bug.
**What is the current behavior?**
We are using a database-per-tenant approach, so we call `useDb()` for each HTTP request. After some time the Node.js server runs out of memory.
**If the current behavior is a bug, please provide the steps to reproduce.**
My `package.json` file:

```json
{
  "name": "mongoose-connection-issues",
  "version": "1.0.0",
  "private": true,
  "main": "server.js",
  "dependencies": {
    "mongoose": "5.11.17"
  }
}
```
The whole code to run the server (`server.js`):

```js
const { createServer } = require('http')
const mongoose = require('mongoose')

const mongoUrl = process.env.MONGO_URL || 'mongodb://127.0.0.1/test'

const connectOptions = {
  useNewUrlParser: true,
  useUnifiedTopology: true
}

const useDbOptions = {
  // this causes number of otherDbs to go up, relatedDbs stays the same
  useCache: false
  // this causes number of relatedDbs to go up in addition to otherDbs, even worse
  //useCache: true
}

let connectPromise = mongoose.connect(mongoUrl, connectOptions)

async function database(tenantId) {
  await connectPromise;
  // the following line leaks memory
  const tenantConnection = mongoose.connection.useDb(`db_${tenantId}`, useDbOptions);
  console.log(
    `tenantId:${tenantId}`,
    `otherDbs:${tenantConnection.otherDbs.length}`,
    `connection.otherDbs:${mongoose.connection.otherDbs.length}`,
    `relatedDbs:${Object.keys(tenantConnection.relatedDbs).length}`
  )
  return tenantConnection
}

async function handleRequest(id) {
  const conn = await database(id)
  const res = {
    id,
    otherDbsLength: conn.otherDbs.length,
    relatedDbsLength: Object.keys(conn.relatedDbs).length
  }
  await conn.close()
  return res
}

const server = createServer(async function (request, response) {
  const url = new URL(request.url, 'https://127.0.0.1/');
  if (url.pathname.startsWith('/ping')) {
    const result = await handleRequest(url.searchParams.get('id'))
    response.setHeader('content-type', 'application/json')
    response.write(JSON.stringify(result))
    response.end()
  }
})

server.listen(process.env.PORT || 3000);
```
The code to generate load (`generate-load.sh`):

```sh
for ((i = $1; i <= $2; i++)); do
  curl "http://localhost:3000/ping?id=${i}"
done
```
A `docker-compose.yml` to start mongo:

```yaml
version: "3.6"
services:
  mongo:
    image: mongo:bionic
    container_name: test-mongo
    ports:
      - "27017:27017"
```
Steps to see the issue:

1. Start the MongoDB server via `docker-compose up`
2. Start the Node.js server via `npm start`
3. Generate some load via `./generate-load.sh 1 30`
If you look at the server's stdout you'll see:

```
tenantId:1 otherDbs:1 connection.otherDbs:1 relatedDbs:0
tenantId:2 otherDbs:1 connection.otherDbs:2 relatedDbs:0
tenantId:3 otherDbs:1 connection.otherDbs:3 relatedDbs:0
tenantId:4 otherDbs:1 connection.otherDbs:4 relatedDbs:0
tenantId:5 otherDbs:1 connection.otherDbs:5 relatedDbs:0
tenantId:6 otherDbs:1 connection.otherDbs:6 relatedDbs:0
tenantId:7 otherDbs:1 connection.otherDbs:7 relatedDbs:0
tenantId:8 otherDbs:1 connection.otherDbs:8 relatedDbs:0
tenantId:9 otherDbs:1 connection.otherDbs:9 relatedDbs:0
tenantId:10 otherDbs:1 connection.otherDbs:10 relatedDbs:0
tenantId:11 otherDbs:1 connection.otherDbs:11 relatedDbs:0
tenantId:12 otherDbs:1 connection.otherDbs:12 relatedDbs:0
(node:21422) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 connected listeners added to [NativeConnection]. Use emitter.setMaxListeners() to increase limit
tenantId:13 otherDbs:1 connection.otherDbs:13 relatedDbs:0
tenantId:14 otherDbs:1 connection.otherDbs:14 relatedDbs:0
tenantId:15 otherDbs:1 connection.otherDbs:15 relatedDbs:0
tenantId:16 otherDbs:1 connection.otherDbs:16 relatedDbs:0
tenantId:17 otherDbs:1 connection.otherDbs:17 relatedDbs:0
tenantId:18 otherDbs:1 connection.otherDbs:18 relatedDbs:0
tenantId:19 otherDbs:1 connection.otherDbs:19 relatedDbs:0
tenantId:20 otherDbs:1 connection.otherDbs:20 relatedDbs:0
tenantId:21 otherDbs:1 connection.otherDbs:21 relatedDbs:0
tenantId:22 otherDbs:1 connection.otherDbs:22 relatedDbs:0
tenantId:23 otherDbs:1 connection.otherDbs:23 relatedDbs:0
tenantId:24 otherDbs:1 connection.otherDbs:24 relatedDbs:0
tenantId:25 otherDbs:1 connection.otherDbs:25 relatedDbs:0
tenantId:26 otherDbs:1 connection.otherDbs:26 relatedDbs:0
tenantId:27 otherDbs:1 connection.otherDbs:27 relatedDbs:0
tenantId:28 otherDbs:1 connection.otherDbs:28 relatedDbs:0
tenantId:29 otherDbs:1 connection.otherDbs:29 relatedDbs:0
tenantId:30 otherDbs:1 connection.otherDbs:30 relatedDbs:0
```
The `otherDbs` array grows over time. Looking through the Mongoose code, I see that `useDb` pushes to that array, but nothing ever removes entries from it.

The `MaxListenersExceededWarning` also signals a memory leak. After each `useDb` call Mongoose sets up listeners on the created connection, and since the connection is never deleted, the event listeners stay around.

Passing the `useCache: true` option to `useDb` makes things even worse, as the `relatedDbs` map also starts to grow.
**What is the expected behavior?**
I’d expect that with `useCache: false`, connection objects created by `useDb` are garbage collected properly.
**What are the versions of Node.js, Mongoose and MongoDB you are using?** Note that “latest” is not a version.

- Node.js: 14.15.1
- Mongoose: 5.11.17
- MongoDB driver: 3.6.4 (comes as a dependency of Mongoose)
First of all, if you’re using `useDb()` for multi-tenancy, we strongly recommend using `useCache: true`. With `useCache: true`, Mongoose can reuse the same connection when a request comes in for the same tenantId, rather than creating a new one every time. Without `useCache`, you’ll eventually run out of memory because the number of connections grows with the number of requests; with `useCache`, the number of connections is bounded by the number of tenants.

Regarding models adding to memory usage: you can use the `deleteModel()` function to clean up models when you’re done with a connection, which allows the models to be GC-ed. On my machine, using 32MB max memory, I can only get to about 1000 connections without cleaning up models, but up to about 12000 connections if I clean up models.

We’ll add a new method to free up a connection for GC, because that’s tricky to do right now due to event emitters. For now, I’d recommend managing connections manually if you expect an unbounded number of tenants, or using `useCache` if you have fewer than 10k tenants or so.

I took a closer look, and there are some issues with the example script. First, do not call `await conn.close()` on every request: that closes every single connection, which is what causes the `MaxListenersExceededWarning` you mentioned.

In addition to that, we’ve added a `noListener` option, analogous to the MongoDB driver’s `noListener` option, that significantly reduces memory overhead for cases where you’re calling `useDb()` on every request. In v5.12.0 you’ll be able to pass `noListener: true` to `useDb()`, and that should drastically reduce memory usage and significantly increase the amount of time your server can run without exhausting memory 👍