pg-cursor timeouts
Hi, I think this is a bug in the pg-cursor library. I believe that when the operation between reads from the cursor takes more than X seconds, the cursor times out.
Here is the error I am getting:

```
error: canceling statement due to statement timeout
    at Connection.parseE (<path>/node_modules/pg/lib/connection.js:614:13)
    at Connection.parseMessage (<path>/node_modules/pg/lib/connection.js:413:19)
    at TLSSocket.<anonymous> (<path>/node_modules/pg/lib/connection.js:129:22)
    at TLSSocket.emit (events.js:321:20)
    at addChunk (_stream_readable.js:294:12)
    at readableAddChunk (_stream_readable.js:275:11)
    at TLSSocket.Readable.push (_stream_readable.js:209:10)
    at TLSWrap.onStreamRead (internal/stream_base_commons.js:186:23) {
  name: 'error',
  length: 109,
  severity: 'ERROR',
  code: '57014',
  detail: undefined,
  hint: undefined,
  position: undefined,
  internalPosition: undefined,
  internalQuery: undefined,
  where: undefined,
  schema: undefined,
  table: undefined,
  column: undefined,
  dataType: undefined,
  constraint: undefined,
  file: 'postgres.c',
  line: '2985',
  routine: 'ProcessInterrupts'
}
```
This is pseudo-code reproducing the issue:

```js
const util = require('util')
const Cursor = require('pg-cursor')

// `client`, `query`, `replacements` and `size` are assumed to be in scope
async function * iterate () {
  // client.query(submittable) returns the submittable itself, so no await is needed
  const cursor = client.query(new Cursor(query, replacements))
  const cursorP = {
    read: util.promisify(cursor.read).bind(cursor),
    close: util.promisify(cursor.close).bind(cursor),
  }
  while (true) {
    const rows = await cursorP.read(size)
    if (!rows.length) {
      await cursorP.close()
      client.release()
      return
    }
    yield rows
  }
}

for await (const rows of iterate()) {
  // if this takes longer than statement_timeout, the next read() fails
  await doSomething(rows)
}
```
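One possible workaround, assuming the cancellation comes from a session-level `statement_timeout`, is to disable the timeout for the session before opening the cursor and restore it afterwards. `withRelaxedTimeout` below is a hypothetical helper, not part of pg or pg-cursor; it only assumes a connected `pg` client with the standard `query` method:

```js
// Hypothetical helper (not part of pg or pg-cursor): run `fn` with the
// session's statement_timeout disabled, restoring the old value afterwards.
async function withRelaxedTimeout (client, fn) {
  // SHOW returns the current setting as text, e.g. '30s' or '0' (disabled)
  const { rows } = await client.query('SHOW statement_timeout')
  const previous = rows[0].statement_timeout
  await client.query('SET statement_timeout = 0') // 0 disables the timeout
  try {
    return await fn(client)
  } finally {
    await client.query(`SET statement_timeout = '${previous}'`)
  }
}
```

The cursor loop would then run inside `fn`, so the cursor's long-lived query is never cancelled; the trade-off is that this connection also loses the server's protection against genuinely runaway queries while the timeout is disabled.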
Issue Analytics
- Created 3 years ago
- Comments: 5 (3 by maintainers)
Top Results From Across the Web

Cursor timeout in postgres - pgsql-general@postgresql.org
Hi all. When examining strange behaviour in one of my programs I found out that I must have somehow gotten into a timeout...

Thread: statement_timeout vs DECLARE CURSOR
Hi, We've encountered some unexpected behavior with statement_timeout not cancelling a query in DECLARE CURSOR, but only if the DECLARE ...

psycopg2 cursor hangs up when query time is too long
When query takes more than 180 seconds the script execution hangs up for a long time. I use Python 3.4.3 and psycopg2 2.6.1...

cursor.noCursorTimeout() — MongoDB Manual
Session Idle Timeout Overrides noCursorTimeout · MongoDB drivers and mongosh · If a session is idle for longer than 30 minutes, the MongoDB...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Yeah…I think `statement_timeout` is used to cancel long-running queries from the backend, because even if your cursor is "paused" while your app processes the data that's come in, the client is still taking up a "slot" within Postgres and the query is considered to be "active." Postgres can comfortably handle a few hundred connections, but they're not a really lightweight resource from the server's point of view, so statement_timeout is supposed to help make sure nothing hangs out for too long running a query. As long as your cursor hasn't finished reading, it's still connected and has a query dispatched, and the timeout is still ticking. Similarly, if you set a statement timeout of 1 second and do `select pg_sleep(5)`, the query will be killed after 1 second even though it's doing nothing but sleeping.

Regarding the example you listed here: those are all separate queries. A cursor doesn't work that way; it actually dispatches a single query and throttles the amount of results returned from the result set by only reading `n`
rows out at a time. But any resources used to create that result set (such as temp tables or anything else created in the query) are still held, whereas if you do the more old-school "pagination" queries with limit/offset, each of those queries is discrete.

I think yer best bet here is to relax your `statement_timeout` for the cursor. Is there a way you can do that? The Postgres docs specifically recommend against configuring one globally because it can cause problems like this.

no prob! lmk if you have any more issues, always happy to take a look. 😃
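The limit/offset alternative mentioned in the comment above can be sketched as follows. `paginate` is a hypothetical helper, and it assumes the query has a stable order (e.g. an ORDER BY); since each page is its own statement, `statement_timeout` only has to cover a single page rather than one long-lived cursor query:

```js
// Hypothetical limit/offset pagination: each page is a discrete query,
// so statement_timeout applies per page, not to the whole scan.
async function * paginate (client, baseQuery, params, pageSize) {
  let offset = 0
  while (true) {
    const { rows } = await client.query(
      `${baseQuery} LIMIT $${params.length + 1} OFFSET $${params.length + 2}`,
      [...params, pageSize, offset]
    )
    if (rows.length === 0) return
    yield rows
    offset += rows.length
  }
}
```

The trade-off the comment describes still applies: each page is re-planned and re-executed from scratch, and nothing created by the query (temp tables, snapshots) carries over between pages the way it does inside a single cursor.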
On Tue, Apr 28, 2020 at 11:45 AM Aleš Menzel notifications@github.com wrote: