[Pool] This socket has been ended by the other party
See original GitHub issue

I get this error every once in a while. Not sure why. Can’t replicate (yet). I’m using the latest v1.1.1.
{ Error: This socket has been ended by the other party
at TLSSocket.writeAfterFIN [as write] (net.js:285:12)
at PoolConnection.connection.write (node_modules/mysql2/lib/connection.js:272:20)
at PoolConnection.Connection.writePacket (node_modules/mysql2/lib/connection.js:227:8)
at Execute.start (node_modules/mysql2/lib/commands/execute.js:50:14)
at Execute.Command.execute (node_modules/mysql2/lib/commands/command.js:38:20)
at PoolConnection.Connection.handlePacket (node_modules/mysql2/lib/connection.js:360:28)
at PoolConnection.Connection.addCommand (node_modules/mysql2/lib/connection.js:378:10)
at PoolConnection.execute (node_modules/mysql2/lib/connection.js:563:8)
at node_modules/mysql2/lib/pool.js:159:17
at node_modules/mysql2/lib/pool.js:36:14
at _combinedTickCallback (internal/process/next_tick.js:67:7)
at process._tickDomainCallback (internal/process/next_tick.js:122:9) code: 'EPIPE', fatal: true }
Issue Analytics
- State:
- Created 7 years ago
- Reactions: 8
- Comments: 35 (9 by maintainers)
Thanks for sharing @joe-angell, appreciated.
I feel like I must be missing something obvious, though. Surely mysql2 isn’t so fragile that we need to precede every SQL query with a sentinel query just to determine whether the connection is still alive?
That’s going to eat significant network/IO to make and await a separate network call before sending the ‘real’ query, not to mention double the query volume on the DB host. I’d argue it’s also the exact opposite of what most developers expect a pool to do for us, i.e. eliminate dead connections, re-use good ones, and spin up new ones when a previous connection fails. This feels like an out-of-the-box expectation rather than a userland exercise.
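For context, the sentinel-query approach being pushed back on here would look roughly like the sketch below, assuming a mysql2/promise pool; queryWithPing and its structure are illustrative, not code from the issue.

// Sketch of the ping-before-query pattern discussed above (mysql2/promise assumed).
const mysql = require('mysql2/promise');

const pool = mysql.createPool({ host: 'localhost', user: 'app', database: 'app' });

async function queryWithPing(sql, params) {
  const connection = await pool.getConnection();
  try {
    await connection.ping();      // extra round-trip just to prove the socket is alive
    const [rows] = await connection.query(sql, params);
    return rows;
  } finally {
    connection.release();         // hand the connection back to the pool either way
  }
}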
I get @sidorares’s previous comment that retry logic could be handled in numerous ways, so it might be deemed a userland activity. But an error like “This socket has been ended by the other party”, which throws only when a timeout occurs at the other end and/or a connection has dropped, is far lower-level than most ORM users expect to have to design for. In 99% of cases the desired behaviour is to re-connect and try again. Shouldn’t mysql2 be doing that for us?

Unless I’m missing something, this is a fundamental flaw that affects pretty much every major ORM available for Node: Knex (and therefore Bookshelf), TypeORM, Sequelize and probably others all use mysql2. I deployed a simple Node app to Google Container Engine using Google Cloud SQL as the DB; this error threw after every 10–15 minute period of idle traffic that was followed by a later SQL attempt (Cloud SQL’s default idle timeout).
Sequelize seems to have the widest array of options for managing idle timeouts and other pool settings, which will probably work around the default timeout eviction. But the next time there’s a network drop between my instance and the Cloud SQL host, or some other non-explicit disconnect, it’s likely to surface again… and the default behaviour is just to throw an ugly underlying socket error. It feels unfinished, IMO.
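For reference, the kind of Sequelize pool tuning being referred to looks roughly like this; the numbers are arbitrary, and evicting idle connections before the server-side timeout (e.g. Cloud SQL’s) drops them is an assumption about the right knobs, not official guidance.

// Sketch of Sequelize pool settings for an environment with a server-side idle timeout.
const Sequelize = require('sequelize');

const sequelize = new Sequelize('app_db', 'app_user', 'secret', {
  host: 'localhost',
  dialect: 'mysql',   // this dialect is backed by mysql2
  pool: {
    max: 10,          // upper bound on concurrent connections
    min: 0,           // let the pool drain completely when idle
    acquire: 30000,   // ms to wait for a free connection before erroring
    idle: 10000,      // ms a connection may sit unused before it is released
    evict: 10000      // ms between idle-connection eviction sweeps
  }
});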
I guess one way to fix this would be to wrap each call in a retry function that catches errors, detects whether it’s just a connection issue, and retries before re-throwing the error… but that’s a pretty verbose solution for something I expected was the entire point of pooling.
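A minimal sketch of that kind of wrapper, assuming a mysql2/promise pool; the set of error codes treated as connection-level and the single retry are assumptions, not anything mysql2 prescribes.

// Sketch of a catch-and-retry wrapper around pool.query (mysql2/promise assumed).
const RETRYABLE = new Set(['EPIPE', 'ECONNRESET', 'ETIMEDOUT', 'PROTOCOL_CONNECTION_LOST']);

async function queryWithRetry(pool, sql, params, retries = 1) {
  try {
    const [rows] = await pool.query(sql, params);
    return rows;
  } catch (err) {
    // err.code and err.fatal appear on the error shown at the top of this issue.
    if (retries > 0 && err.fatal && RETRYABLE.has(err.code)) {
      // Assumes the pool drops the dead connection on a fatal error, so the retry
      // is served by a different (or freshly created) connection.
      return queryWithRetry(pool, sql, params, retries - 1);
    }
    throw err;  // not a connection-level problem, or out of retries
  }
}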
Am I missing something obvious, or is MySQL connectivity in every major ORM for Node really that fragile? How are others solving this?
I’m also running into this issue using 1.5.3 and 'mysql2/promise'. The error shows up consistently every morning after the connections in the pool have been sitting idle for a while. After it errors, if I repeatedly attempt to get a new connection, it seems to error on each stale connection and then remove that connection from the pool; once all the stale connections are gone, fresh connections start to work again. This is just an assumption. The only remedy is to destroy connections, but that is obviously inefficient.

I’m releasing connections with connection.connection.release(). pool.releaseConnection(connection) and connection.release() don’t seem to work. This seems off?

I’m not sure if it’s related, but I’m also seeing ‘warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.’ warnings.
I’m adding error handlers with connection.on('error', connectionErrorHandler); and removing them with connection.removeListener('error', connectionErrorHandler);
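A minimal sketch of that checkout/cleanup pattern, assuming mysql2/promise; withConnection is a hypothetical helper and the handler body is only illustrative.

// Sketch: check out a connection, attach an error handler, and always clean up.
async function withConnection(pool, work) {
  const connection = await pool.getConnection();
  const connectionErrorHandler = (err) => console.error('connection error:', err.code);
  // Listen on the underlying callback-style connection, which is an EventEmitter.
  connection.connection.on('error', connectionErrorHandler);
  try {
    return await work(connection);
  } finally {
    // Remove the listener before handing the connection back; otherwise each checkout
    // adds another listener and eventually triggers the MaxListenersExceeded warning
    // mentioned above.
    connection.connection.removeListener('error', connectionErrorHandler);
    connection.release();  // the documented release call for a pooled promise connection
  }
}

// Usage: await withConnection(pool, (conn) => conn.query('SELECT 1'));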