What is the best way to handle an inaccessible postgres server?
I’m not sure how my app should best handle the case of an inaccessible Postgres server. I want to build it so that it can survive the database server crashing and recover on its own once the database is back up.
Using pg.Client I would expect to be able to reconnect like this if the database is down on the initial connection attempt (but with a better timer mechanism of course):
var db = new pg.Client(process.env.DATABASE_URL);
db.connect(function (error) {
  if (error) {
    setTimeout(function () {
      db.connect();
    }, 10000);
  }
});
db.on('error', function () {});
The problem with this approach is that db.connect() attaches a new set of listeners every time it is called. Besides any errors caused by those listeners being invoked multiple times, the connection keeps accumulating listeners until Node starts warning about a possible EventEmitter leak. What would be the correct way to reconnect here?
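Roughly what I have in mind instead is to give every attempt its own Client, so listeners never pile up on a single instance. A rough sketch (the 10 second delay is just a placeholder, and connectWithRetry is my own made-up helper):

```js
var pg = require('pg');

function connectWithRetry(callback) {
  var db = new pg.Client(process.env.DATABASE_URL);

  db.connect(function (error) {
    if (error) {
      // This client only ever got one set of listeners; drop it and try
      // again with a brand new one.
      return setTimeout(function () { connectWithRetry(callback); }, 10000);
    }
    callback(db);
  });
}

connectWithRetry(function (db) {
  // Use db here; an 'error' listener is still needed for failures that
  // happen after the connection is up.
  db.on('error', function () { /* replace the client with a new one */ });
});
```

Is that the kind of pattern that is intended, or is there something better built in?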
Using pg.connect instead:
pg.connect(process.env.DATABASE_URL, function (err, client, done) {
  if (err) {
    // Handle the error, do a fallback or retry or something
    return;
  }
  // ... use client, then call done() to release it back to the pool
});
By calling pg.connect on each request to my Node app instead, I thought it would be easier to reconnect to the database while also gaining the advantage of connection pooling. My idea was simply to provide a good fallback whenever the database couldn’t be reached, but instead I get unhandled errors thrown at me if I kill my Postgres server while the code is running, which makes it hard to handle the inaccessible server gracefully:
events.js:72
throw er; // Unhandled 'error' event
^
error: terminating connection due to administrator command
at Connection.parseE (/node_modules/pg/lib/connection.js:526:11)
at Connection.parseMessage (/node_modules/pg/lib/connection.js:371:17)
at Socket.<anonymous> (/node_modules/pg/lib/connection.js:86:20)
at Socket.EventEmitter.emit (events.js:95:17)
at Socket.<anonymous> (_stream_readable.js:746:14)
at Socket.EventEmitter.emit (events.js:92:17)
at emitReadable_ (_stream_readable.js:408:10)
at emitReadable (_stream_readable.js:404:5)
at readableAddChunk (_stream_readable.js:165:9)
at Socket.Readable.push (_stream_readable.js:127:10)
So I’m wondering: what is the best practice when using node-postgres so that an inaccessible database can be handled without unhandled errors being thrown, and so that the app reconnects again when the database becomes available?
Top GitHub Comments
Hi @voxpelli - I did some work on the pool to make it recover more gracefully from the backend server going down.
How it works: whenever a client idling in the pool receives an error, that error is consumed by the pool. The pool destroys that client and removes it from the pool, so the next time you request a client you get a “fresh” one. The pool itself then re-emits the error so you can be aware of what’s going on. Does that sound like a sane way to handle that?
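In practice that means listening for the re-emitted error on the pool instead of letting it escape as an unhandled 'error' event. A minimal sketch of that pattern, written against the standalone pg.Pool API (which may look a bit different from the pg.connect singleton used in the question):

```js
var pg = require('pg');

var pool = new pg.Pool({ connectionString: process.env.DATABASE_URL });

// Errors on idle clients are consumed by the pool, the broken client is
// destroyed and removed, and the error is re-emitted here so the app can
// log it instead of crashing on an unhandled 'error' event.
pool.on('error', function (err) {
  console.error('idle client error', err.message);
});

// Each query checks out a client; if a previous one was destroyed after an
// error, the pool hands out a fresh connection once the database is back.
pool.query('SELECT 1', function (err, res) {
  if (err) {
    // Database unreachable right now - fall back or retry later.
    return console.error('query failed', err.message);
  }
  console.log('db reachable', res.rows);
});
```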
As for an individual client, it’s not supposed to be reconnected. The best thing to do (and what the pool does) is: when a client receives an error for any reason other than a query error, close that client and get a new one. It does take maybe 50 milliseconds to do the initial postgres handshake for a new connection, so you don’t want to use a new client on every http request or in a tight loop, but there are plenty of cases where you want to keep a single client open for a long time. Definitely get a new client if the one you’re using experienced an error.
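For a single long-lived client, the distinction between query errors and connection errors looks roughly like this (just a sketch; newClient and the 1 second delay are made up for illustration):

```js
var pg = require('pg');

// Made-up helper: create and connect a long-lived client, and wire up the
// "replace on connection error" rule described above.
function newClient() {
  var c = new pg.Client(process.env.DATABASE_URL);
  var replaced = false;

  function replace() {
    if (replaced) { return; }
    replaced = true;
    c.end();
    // Fixed delay for simplicity; real code would want backoff.
    setTimeout(function () { client = newClient(); }, 1000);
  }

  // A connection-level error (not a query error) means this client is
  // unusable from now on: close it and switch to a fresh one.
  c.on('error', replace);

  c.connect(function (err) {
    if (err) { replace(); }
  });

  return c;
}

var client = newClient();

client.query('SELECT now()', function (err, result) {
  if (err) {
    // A query error (bad SQL, constraint violation, ...) does NOT mean the
    // client is broken - handle it and keep using the same client.
    return console.error('query failed', err.message);
  }
  console.log(result.rows);
});
```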
Frankly, just saying “use a pool” is an oversimplification, imo. For example, I can hardly use pools, because our DB is tenant based, with thousands of databases I cannot keep references to all the time, especially not in a load-balanced environment.
If you are not willing to change the client error handling, I would suggest at least providing very thorough docs, e.g. how to hack around it with JS timeouts after pg.connect, attaching the error handler with minimal risk of memory leaks.
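Something like the following is the kind of recipe I mean, sketched with the pg.connect API from the question (withClient is a made-up wrapper, and the 5 second delay is arbitrary):

```js
var pg = require('pg');

// Made-up wrapper: retry pg.connect with a timeout so a down database does
// not surface as an unhandled 'error' event, and keep only one callback per
// attempt so handlers do not leak.
function withClient(work) {
  pg.connect(process.env.DATABASE_URL, function (err, client, done) {
    if (err) {
      // Could not get a working client - try again later.
      return setTimeout(function () { withClient(work); }, 5000);
    }
    work(client, done);
  });
}

withClient(function (client, done) {
  client.query('SELECT now()', function (err, result) {
    done();
    if (err) { return console.error(err.message); }
    console.log(result.rows);
  });
});
```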