Pooling: read ETIMEDOUT error after idle connection in Node 14
This is one of my first issues on a Node library, so I'm going to do my best to explain and show my findings, but if any more information is needed please let me know. We recently upgraded to Node 14 because Node 12 was reaching end of life in Azure services. Since then we have been getting random service crashes with the following error:
```
Error: read ETIMEDOUT
    at TLSWrap.onStreamRead (internal/stream_base_commons.js:209:20)
    at TLSWrap.callbackTrampoline (internal/async_hooks.js:126:14)
```
I've looked at various other issues that people have reported and, based on those findings, implemented the `keepAlive` and `idleTimeoutMillis` options. That did make things more stable; however, after long periods of inactivity the crashes still occur.
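For context, the node-postgres docs note that the pool emits an `'error'` event when an idle client hits a backend or network error, and an unhandled `'error'` event crashes a Node process. Our shared package (shown below) doesn't attach a listener yet; a minimal, self-contained sketch of what one looks like:

```js
const { Pool } = require('pg');

const pool = new Pool({ /* connection config */ });

// Without a listener, an 'error' emitted for an idle client is an
// unhandled 'error' event, which brings the whole process down.
pool.on('error', (error) => {
  console.error('Unexpected error on idle database client', error);
});
```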
For reference, we have a micro-service architecture where some services are barely ever used, as they exist for one-off integrations and are not part of the main app. Our services share a 'database-connection' package in a Lerna repo that handles pooling of the connections. The code is as follows:
```js
const { Pool } = require('pg');

const {
  POSTGRES_HOST,
  POSTGRES_PORT,
  POSTGRES_DATABASE,
  POSTGRES_USER,
  POSTGRES_PASSWORD,
  POSTGRES_MAX_CONNECTIONS,
  POSTGRES_CONNECTION_IDLE_TIMEOUT,
  POSTGRES_APPLICATION_NAME,
} = process.env;

const poolConfig = {
  host: POSTGRES_HOST,
  port: POSTGRES_PORT,
  database: POSTGRES_DATABASE,
  user: POSTGRES_USER,
  password: POSTGRES_PASSWORD,
  ssl: true,
  keepAlive: true,
  idleTimeoutMillis: 3000,
};

// Optional overrides from the environment.
if (
  POSTGRES_MAX_CONNECTIONS &&
  !isNaN(Number.parseInt(POSTGRES_MAX_CONNECTIONS, 10))
) {
  poolConfig.max = Number.parseInt(POSTGRES_MAX_CONNECTIONS, 10);
}

if (
  POSTGRES_CONNECTION_IDLE_TIMEOUT &&
  !isNaN(Number.parseInt(POSTGRES_CONNECTION_IDLE_TIMEOUT, 10))
) {
  poolConfig.idleTimeoutMillis = Number.parseInt(
    POSTGRES_CONNECTION_IDLE_TIMEOUT,
    10
  );
}

if (POSTGRES_APPLICATION_NAME) {
  poolConfig.application_name = POSTGRES_APPLICATION_NAME;
}

let poolInstance = null;

const initializePool = () => {
  poolInstance = new Pool(poolConfig);
};

const pool = () => poolInstance;

module.exports = {
  initializePool,
  pool,
};
```
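For completeness, the package currently has no shutdown path. A sketch of what one might look like inside this module, using `pool.end()` from node-postgres (the `closePool` name is hypothetical, not part of our package today):

```js
// Hypothetical addition: drain and close the pool on shutdown.
// pool.end() waits for checked-out clients to be released, closes
// every connection, and returns a promise.
const closePool = () => {
  if (!poolInstance) {
    return Promise.resolve();
  }
  const closing = poolInstance.end();
  poolInstance = null;
  return closing;
};
```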
We then initialize our server on startup like this:
```js
databaseConnection.initializePool();
const databaseConnectionPool = databaseConnection.pool();

databaseConnectionPool
  .connect()
  .then(() => (parseBoolean(process.env.RUN_MIGRATIONS)
    ? databaseMigrationUtility.executeMigrations(
      databaseConnection,
      'migrations',
      '../migration-files',
    )
    : Promise.resolve()))
  .then(() => {
    server.listen(process.env.PORT || 5010);
    monitoring.trackEvent('Started Service API');
  })
  .catch((error) => {
    logger.error(error);
  });
```
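One detail I noticed while writing this up: `pool.connect()` checks a dedicated client out of the pool, and the startup code above never calls `release()` on it, so that client stays checked out for the life of the process. If the intent is only a connectivity check, a query on the pool avoids holding a client; a minimal sketch:

```js
// Sketch: verify connectivity without checking a client out of the pool.
// pool.query() acquires a client, runs the query, and releases it internally.
databaseConnectionPool
  .query('SELECT 1')
  .then(() => {
    // Connection verified; continue with migrations and server startup.
  });
```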
Queries to the database are made following the single-query pattern from the docs:
```js
const databaseConnection = require('database-connection'); // The index file of the pool package above

const getExampleDataFromDatabase = (id1, id2) => {
  const query = `
    SELECT id1
    FROM TABLE_1
    WHERE id1 = $1
      AND id2 != $2`;
  return databaseConnection
    .pool()
    .query(query, [id1, id2])
    .then((result) => result.rows.map((row) => row.id1));
};
```
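For comparison, here is the same function written with an explicitly checked-out client, following the `pool.connect()` pattern from the node-postgres docs (functionally equivalent; shown only to illustrate handling clients manually):

```js
// Sketch: the same query with an explicitly checked-out client.
// release() must run on every code path or the pool leaks clients.
const getExampleDataWithCheckout = async (id1, id2) => {
  const client = await databaseConnection.pool().connect();
  try {
    const result = await client.query(
      'SELECT id1 FROM TABLE_1 WHERE id1 = $1 AND id2 != $2',
      [id1, id2],
    );
    return result.rows.map((row) => row.id1);
  } finally {
    client.release();
  }
};
```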
After long periods of inactivity the service will crash with the error above. Is this expected behavior? Should we be handling connections differently? Let me know if you need any more information.