Extremely high number of connections on MongoDB Atlas with serverless lambda infrastructure
Prerequisites
- I have written a descriptive issue title
Mongoose version
6.3.4
Node.js version
16.15.0
MongoDB version
6.0.0
Operating system
Linux
Operating system version (i.e. 20.04, 11.3, 10)
No response
Issue
I am following the guide available at https://mongoosejs.com/docs/lambda.html to avoid multiple function calls creating new connections. In particular, I am using the following method to cache my DB connection:
const mongoose = require('mongoose');

let conn = null;

const uri = 'YOUR CONNECTION STRING HERE';

exports.connect = async function() {
  if (conn == null) {
    conn = mongoose.connect(uri, {
      connectTimeoutMS: 10000,
      maxPoolSize: 10,
      serverSelectionTimeoutMS: 5000,
      bufferCommands: false, // Disable mongoose buffering
    }).then(() => mongoose);

    // `await`ing connection after assigning to the `conn` variable
    // to avoid multiple function calls creating new connections
    await conn;
  }

  return conn;
};
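For reference, here is a minimal TypeScript variation of the same caching pattern. The db.ts file name, the connectToDatabase export, and the MONGODB_URI environment variable are illustrative assumptions, not part of the setup above; the only behavioral addition is clearing the cached promise if the initial connect fails, so a failed cold start does not leave a rejected promise cached for the lifetime of the container.

// db.ts (illustrative): cache the connection promise at module scope.
import mongoose from 'mongoose';

const uri = process.env.MONGODB_URI!; // assumed to hold the Atlas connection string

let conn: Promise<typeof mongoose> | null = null;

export async function connectToDatabase(): Promise<typeof mongoose> {
  if (conn == null) {
    conn = mongoose
      .connect(uri, {
        connectTimeoutMS: 10000,
        maxPoolSize: 10,
        serverSelectionTimeoutMS: 5000,
        bufferCommands: false, // disable mongoose buffering
      })
      .then(() => mongoose)
      .catch((err) => {
        // If the cold-start connect fails, drop the cached promise so the
        // next invocation can retry instead of reusing a rejected promise.
        conn = null;
        throw err;
      });
  }
  return conn;
}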
However, I am seeing an extremely high number of connections on MongoDB Atlas with this setup. A typical alert looks like:
The connections to your cluster(s) have exceeded 500 and are nearing the connection limit
I have upgraded my MongoDB Atlas instance to a tier with more capacity (1500 max connections). I ran a load test simulating a few operations through my APIs, and connections reached 1000+ under even a small load.
More details
- I am using Serverless Stack as the serverless framework
A typical lambda handler looks like this:
export const handler = async (event: APIGatewayProxyEventV2WithJWTAuthorizer) => {
  // perform some operations
  return {
    statusCode: 200,
    body: JSON.stringify({
      message: 'Go Serverless v1.0! Your function executed successfully!',
      input: event,
    }),
  };
};
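As shown, the handler never awaits the cached connection itself (any DB calls are elided behind the "perform some operations" comment). For illustration, here is a sketch of how the cached connect could be wired into such a handler, assuming the caching module above lives in ./db and exports connectToDatabase; both names are assumptions.

import type { APIGatewayProxyEventV2WithJWTAuthorizer, APIGatewayProxyResultV2 } from 'aws-lambda';
import { connectToDatabase } from './db'; // illustrative path to the caching module above

export const handler = async (
  event: APIGatewayProxyEventV2WithJWTAuthorizer
): Promise<APIGatewayProxyResultV2> => {
  // Reuse the module-scoped connection; only a cold start actually connects.
  await connectToDatabase();

  // ...perform the Mongoose operations here...

  return {
    statusCode: 200,
    body: JSON.stringify({ message: 'Go Serverless v1.0! Your function executed successfully!' }),
  };
};

On a warm container the connectToDatabase() call resolves immediately from the cached promise, so it adds no meaningful latency.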
Can someone point me in the right direction on how to properly cache the connection? Do I need to modify anything in the handler or in the Mongoose configuration?
Top GitHub Comments
Great suggestions from @jeanbmar. I agree that if you get 20k concurrent requests, you'll probably run out of connections. Each Lambda instance will create a new connection pool, so even if you reduce maxPoolSize to 1 in your example, you'll only be able to handle 750 concurrent requests. You need some sort of queueing infrastructure to make sure you don't go over that.

@jeanbmar That's helpful, thank you!
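To make the per-container arithmetic in the first comment concrete: the pool configured in mongoose.connect lives inside each warm Lambda container, so the cluster-wide connection count scales with the number of concurrent containers rather than with maxPoolSize alone. Below is a hedged sketch of the reduced pool settings that comment refers to; the exact values are illustrative, and minPoolSize: 0 is an extra assumption of mine, not something suggested in the thread.

import type { ConnectOptions } from 'mongoose';

// Illustrative Lambda-oriented options: each warm container still holds its
// own connection(s), so the cluster-wide total is roughly
// (concurrent containers) x (connections per container).
const lambdaOptions: ConnectOptions = {
  maxPoolSize: 1,               // cap the pool at one application connection
  minPoolSize: 0,               // do not hold idle connections open
  serverSelectionTimeoutMS: 5000,
  bufferCommands: false,
};

// Drop-in replacement for the options object in the cached connect() above:
// conn = mongoose.connect(uri, lambdaOptions).then(() => mongoose);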