7.18.2: "Connection terminated unexpectedly" when using client.query with a pool when pool has been idle for 10 minutes (running in AWS Lambda)
See original GitHub issue.

The code below runs in a Lambda. When we run the function it works fine, but if we run it, wait 10 minutes, and then run it again, we get an error every time. We are querying Aurora Postgres on AWS.
Code:
```javascript
const { Pool } = require('pg');

// Global connection pool, can be re-used across invocations!!
const pgPool = new Pool({
  user: process.env.PG_USER,
  host: process.env.PG_HOST,
  database: process.env.PG_DATABASE,
  password: process.env.PG_PASSWORD,
  port: process.env.PG_PORT,
  max: process.env.MAX_CLIENTS
});

pgPool.on('error', (err, client) => {
  console.error('Unexpected error in Postgres connection pool', err);
});

async function performQuery(event) {
  let queryString = null;
  let args = null;
  try {
    // some setup code here...
    const client = await pgPool.connect();
    try {
      const res = await client.query(queryString, args);
      return res.rows;
    } finally {
      client.release();
    }
  } catch (err) {
    console.error('Problem executing export query:');
    console.error(err); // <-- this line is in the log below
    throw err;
  }
}
```
This is what I see in the CloudWatch logs:
```json
{
  "errorType": "Error",
  "errorMessage": "Connection terminated unexpectedly",
  "stack": [
    "Error: Connection terminated unexpectedly",
    "    at Connection.<anonymous> (/var/task/node_modules/pg/lib/client.js:255:9)",
    "    at Object.onceWrapper (events.js:312:28)",
    "    at Connection.emit (events.js:223:5)",
    "    at Connection.EventEmitter.emit (domain.js:475:20)",
    "    at Socket.<anonymous> (/var/task/node_modules/pg/lib/connection.js:78:10)",
    "    at Socket.emit (events.js:223:5)",
    "    at Socket.EventEmitter.emit (domain.js:475:20)",
    "    at TCP.<anonymous> (net.js:664:12)"
  ]
}
```
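Not from the original report, but one common mitigation for this failure mode is to retry once when the pool hands back a dead socket. A minimal sketch, assuming the failure surfaces as a thrown error whose message matches the log above; the helper name and the message heuristic are illustrative, and the `pool` argument is injected (any pg.Pool-like object) so the sketch can be exercised without a database:

```javascript
// Hypothetical helper: retry the query once when the checked-out socket
// was already recycled by the platform. The error-message check is a
// heuristic based on the message pg emits in that situation.
async function queryWithRetry(pool, text, params) {
  const run = async () => {
    const client = await pool.connect();
    try {
      return await client.query(text, params);
    } finally {
      client.release();
    }
  };
  try {
    return await run();
  } catch (err) {
    // A dead idle socket surfaces as "Connection terminated unexpectedly";
    // a second attempt checks out (or opens) a different client.
    if (/connection terminated/i.test(err.message)) {
      return run();
    }
    throw err;
  }
}
```

Note that if several idle clients died at once, the retry may also draw a dead client; this sketch only covers the single-retry case.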
I’ve tried a few variations on this, but the constants are the 10-minute wait and the use of the Pool. To me this code is almost identical to the code in https://node-postgres.com/features/pooling.
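Another mitigation sometimes suggested is to make the pool discard idle clients before the platform recycles their sockets. `idleTimeoutMillis` and `connectionTimeoutMillis` are real pg Pool options; the specific values below are illustrative assumptions, not recommendations from the issue:

```javascript
// Assumption: the ~10 minute failure window is idle TCP connections being
// recycled. If so, discarding idle clients well before that window means
// the next query opens a fresh socket instead of reusing a dead one.
const poolConfig = {
  user: process.env.PG_USER,
  host: process.env.PG_HOST,
  database: process.env.PG_DATABASE,
  password: process.env.PG_PASSWORD,
  port: process.env.PG_PORT,
  max: Number(process.env.MAX_CLIENTS) || 10,
  idleTimeoutMillis: 60 * 1000,      // drop idle clients after 1 minute (illustrative)
  connectionTimeoutMillis: 5 * 1000  // fail fast if a connect hangs (illustrative)
};
// const pgPool = new Pool(poolConfig); // requires the 'pg' package
```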
So far it looks like the problem has been solved by using a Client instead:

```javascript
const client = new Client({
  user: process.env.PG_USER,
  host: process.env.PG_HOST,
  database: process.env.PG_DATABASE,
  password: process.env.PG_PASSWORD,
  port: process.env.PG_PORT
});

await client.connect();
const res = await client.query(queryString, args);
await client.end();
return res.rows;
```
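One caveat with the snippet above: if `client.query` throws, `client.end()` never runs and the connection leaks. A minimal sketch of the same per-invocation pattern with guaranteed cleanup; `withClient` is a hypothetical helper, and the client factory is injected (e.g. `() => new Client({ ... })`) so the sketch can run without a database:

```javascript
// Hypothetical helper: one client per invocation, always closed.
async function withClient(createClient, fn) {
  const client = createClient();
  await client.connect();
  try {
    return await fn(client);
  } finally {
    await client.end(); // always close the connection, even if fn throws
  }
}

// Usage sketch:
// const rows = await withClient(
//   () => new Client({ host: process.env.PG_HOST /* ... */ }),
//   async (c) => (await c.query(queryString, args)).rows
// );
```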
Issue Analytics
- State:
- Created 4 years ago
- Reactions: 15
- Comments: 56 (9 by maintainers)
Top GitHub Comments
still a thing
People praise AWS Lambda without knowing that it is the only hosting environment in the world that implements such an aggressive connection-recycling policy.
No one else, it seems, is willing to inconvenience their clients so badly as to drop live connections instead of doing what everyone else does: extending the I/O capacity. It’s just dumb corporate greed, backed by self-assured market dominance, maximizing profit by reducing cost without increasing capacity.
That’s why issues like this one keep polluting the Internet. That’s why I do not use AWS Lambda.