
Query timeout is not respected when host is not listening

See original GitHub issue

com.github.jasync-sql:jasync-postgresql:1.1.6

Steps to reproduce:

  • Set up a connection pool with a connection & query timeout (100ms in my case); a sketch of such a configuration follows this list
  • Stop the server the client is pointing at
  • Run the provided code
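
A minimal sketch of such a pool setup, assuming jasync-sql's PostgreSQLConnectionBuilder / ConnectionPoolConfigurationBuilder API; the host, port, database and credentials are placeholders, and only the timeout values matter for the repro:

import com.github.jasync.sql.db.postgresql.PostgreSQLConnectionBuilder

// Placeholder connection string; point it at a Postgres instance you can stop.
val connectionPool = PostgreSQLConnectionBuilder.createConnectionPool(
    "jdbc:postgresql://localhost:15432/test?user=test&password=test"
) {
    maxActiveConnections = 1        // raising this raises the number of exceptions observed
    connectionCreateTimeout = 100L  // milliseconds
    queryTimeout = 100L             // milliseconds
}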

Observed behavior: only one exception is printed (the number depends on maxActiveConnections; e.g. if I set it to 5, I will likely get 5 exceptions in the output) and the rest of the coroutines freeze until the server is back up (easy to test by stopping and starting a container running Postgres).

Expected behavior: the client should respect the configured query / connection timeouts and throw an error back to the caller

Known workaround: surround each call to the client with withTimeout(), but it’s not clear whether this has any side effects (see the sketch after the stack trace below)

import kotlinx.coroutines.GlobalScope
import kotlinx.coroutines.future.await
import kotlinx.coroutines.joinAll
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

runBlocking {
    // Fire 50 concurrent queries at the (stopped) server and wait for all of them.
    (1..50).map {
        GlobalScope.launch {
            try {
                println("Before SELECT 1")
                // sendQuery returns a CompletableFuture; await() suspends until it completes.
                connectionPool.sendQuery("SELECT 1").await()
                println("After SELECT 1")
            } catch (e: Exception) {
                e.printStackTrace()
            }
        }
    }.joinAll()
}

The only exception that gets printed:
io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: localhost/127.0.0.1:15432
Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused
	at io.netty.channel.unix.Errors.throwConnectException(Errors.java:124)
	at io.netty.channel.unix.Socket.finishConnect(Socket.java:251)
	at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:673)
	at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:650)
	at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:530)
	at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:470)
	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
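
The withTimeout() workaround mentioned above, as a minimal sketch (the 500 ms budget is arbitrary); it makes the caller fail fast, but it only cancels the waiting coroutine, so the query itself may still sit in the pool's queue:

import kotlinx.coroutines.TimeoutCancellationException
import kotlinx.coroutines.future.await
import kotlinx.coroutines.withTimeout

suspend fun selectOneOrTimeOut() {
    try {
        // Give up after 500 ms even if the driver itself never times out.
        withTimeout(500L) {
            connectionPool.sendQuery("SELECT 1").await()
        }
        println("After SELECT 1")
    } catch (e: TimeoutCancellationException) {
        // Only the waiting coroutine is cancelled; the query may remain queued in the pool.
        e.printStackTrace()
    }
}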

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 8 (3 by maintainers)

Top GitHub Comments

1 reaction
oshai commented, Feb 11, 2021

What I will use for the timeout is the queryTimeout. Adding a timeout to the waiting futures will cover two cases that I can think of:

  • the server is down
  • the pool is exhausted: this happens when many futures are waiting in the queue

I think that because of the second scenario we need a bigger timeout, which is probably the queryTimeout (a rough sketch of the idea follows this comment).

> Do you happen to know if the driver will discard those queries when the db is back up? Or will it try to run all of the queries that got queued (even though the client is not waiting for them to finish)?

As for your question, I think such a timeout is good only as a workaround, as it will keep those queries in the queue.

I am starting to implement that and will update when I have something working that can be tested as a snapshot.
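
A minimal illustrative sketch of such a timeout on the waiting future, using CompletableFuture.orTimeout (Java 9+); this is not the actual jasync change, and withQueryTimeout / queryTimeoutMillis are hypothetical names:

import java.util.concurrent.CompletableFuture
import java.util.concurrent.TimeUnit

// Hypothetical helper: fail the caller's future after queryTimeoutMillis even when the pool
// never hands out a connection (server down) or the query is stuck in the waiting queue.
fun <T> withQueryTimeout(future: CompletableFuture<T>, queryTimeoutMillis: Long): CompletableFuture<T> =
    future.orTimeout(queryTimeoutMillis, TimeUnit.MILLISECONDS)

// e.g. withQueryTimeout(connectionPool.sendQuery("SELECT 1"), 100L).await()
// completes exceptionally with a TimeoutException instead of hanging indefinitely.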

0 reactions
ilya40umov commented, Feb 4, 2021

@oshai overall, I’ll be happy if it times out at all. 😃 In this particular case I would probably expect it to fail after connectionCreateTimeout passes (since internally the driver is trying to establish a connection) and to get a Connect exception of some sort. But if worst comes to worst, it’s also okay to get a response back after queryTimeout (or even queryTimeout + connectionCreateTimeout) and a Query timeout exception of some sort.

Also, my testing shows that even if I use connectionPool.sendQuery("SELECT 1").get(5L, TimeUnit.SECONDS), I still see queries that have already timed out sitting in the queue (availableItems=0 waitingQueue=45 inUseItems=0 inCreateItems=5). So I’m concerned that those queries will pile up in real-life use cases. Do you happen to know if the driver will discard those queries when the db is back up? Or will it try to run all of the queries that got queued (even though the client is not waiting for them to finish)?
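
The blocking get() variant mentioned above, as a minimal sketch; like withTimeout(), it only bounds how long the caller waits, so the query can still sit in the pool's waitingQueue:

import java.util.concurrent.TimeUnit
import java.util.concurrent.TimeoutException

try {
    // Bounds only the caller's wait; the queued query is not cancelled in the pool.
    val result = connectionPool.sendQuery("SELECT 1").get(5L, TimeUnit.SECONDS)
    println("After SELECT 1: $result")
} catch (e: TimeoutException) {
    e.printStackTrace()
}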


