WaitQueue is filled up but it's never emptied
We are using 0.20.3, and we are facing some problems with the Blaze HTTP client. A couple of times the client has started returning “Wait queue is full” and has stayed in that state indefinitely; the only way to make it work again is to restart the service. I know this issue has surfaced and been fixed in previous versions of http4s (e.g. https://github.com/http4s/http4s/issues/2193). I don’t know whether this is exactly the same problem, but it might be related.
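For context on the knobs involved: the wait queue that overflows here is bounded by the builder's maxWaitQueueLimit, alongside the connection limits. A minimal sketch of the client construction, assuming the 0.20.x BlazeClientBuilder API (the limit values are illustrative, not our actual configuration):

import cats.effect.{ContextShift, IO}
import org.http4s.client.blaze.BlazeClientBuilder
import scala.concurrent.ExecutionContext
import scala.concurrent.duration._

object ClientSetup {
  implicit val cs: ContextShift[IO] = IO.contextShift(ExecutionContext.global)

  // Requests that cannot get a connection wait in the pool's wait queue;
  // once that queue exceeds maxWaitQueueLimit, the client starts failing
  // with "Wait queue is full".
  val clientResource =
    BlazeClientBuilder[IO](ExecutionContext.global)
      .withMaxTotalConnections(10)          // illustrative values only
      .withMaxWaitQueueLimit(256)
      .withRequestTimeout(5.seconds)
      .withResponseHeaderTimeout(5.seconds)
      .resource                             // Resource[IO, Client[IO]]
}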
While trying to reproduce the bug, we ran into another problem that may or may not be related: the program simply hangs. This happens on both 0.20.3 and 0.20.12. The code used to try to reproduce the issue is:
import java.util.concurrent.atomic.AtomicInteger

import cats.effect.{ContextShift, IO, Timer}
import fs2.Stream
import org.http4s._
import org.http4s.client.blaze.BlazeClientBuilder
import org.http4s.implicits._

import scala.concurrent.ExecutionContext
import scala.concurrent.duration._

object Main {
  def main(args: Array[String]): Unit = {
    val int = new AtomicInteger(0)
    implicit val CS: ContextShift[IO] = IO.contextShift(ExecutionContext.global)
    implicit val timer: Timer[IO] = IO.timer(ExecutionContext.global)
    val timeout = 1.second

    val program = for {
      // `allocated` is used instead of `resource`, so the client is never shut down.
      client <- BlazeClientBuilder[IO](ExecutionContext.global)
        .withRequestTimeout(timeout)
        .withResponseHeaderTimeout(timeout)
        .allocated
      c = client._1
      status = c.status(Request[IO](uri = uri"http://httpbin.org/status/500")).attempt
      // Run the same request 1000 times, with up to 100 in flight at once.
      _ <- Stream(Stream.eval(status)).repeat
        .covary[IO]
        .parJoin(100)
        .take(1000)
        .observe(x => x.flatMap(y => Stream.eval(IO(println(">>> " + int.incrementAndGet + " " + y)))))
        .compile
        .drain
      // One final request after the stream completes, to check the client still works.
      s <- c.status(Request[IO](uri = uri"http://httpbin.org/status/500")).attempt
      _ <- IO(println("STATUS = " + s.right.get))
    } yield ()

    program.unsafeRunSync()
  }
}
Top GitHub Comments
Hello folks. It looks like the `PoolManager` is leaking active connections. Once it leaks 10 connections (that's the default for `maxConnectionsPerRequestKey`), the application hangs.

When a request is completed, the `PoolManager` is supposed to reuse the connection to execute the next request. If the next request in the queue has already expired, the expired request is completed with a failure as expected, but the connection gets lost: it is neither reused nor closed, and the counter of active connections isn't decreased.

Here's a reworked version of the original code which reliably hangs on my machine:
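(A hypothetical reconstruction along those lines, not the commenter's actual snippet; the `numConnections` knob and the 1 ms timeout are assumptions taken from the observations below.)

import java.util.concurrent.atomic.AtomicInteger

import cats.effect.{ContextShift, IO, Timer}
import fs2.Stream
import org.http4s._
import org.http4s.client.blaze.BlazeClientBuilder
import org.http4s.implicits._

import scala.concurrent.ExecutionContext
import scala.concurrent.duration._

object Repro {
  def main(args: Array[String]): Unit = {
    implicit val cs: ContextShift[IO] = IO.contextShift(ExecutionContext.global)
    implicit val timer: Timer[IO] = IO.timer(ExecutionContext.global)

    val counter = new AtomicInteger(0)
    val numConnections = 100           // hypothetical knob referenced in the observations
    val timeout = 1.millisecond        // aggressive timeout so queued requests expire

    val program = for {
      allocatedClient <- BlazeClientBuilder[IO](ExecutionContext.global)
        .withRequestTimeout(timeout)
        .withResponseHeaderTimeout(timeout)
        .allocated
      client = allocatedClient._1
      call = client.status(Request[IO](uri = uri"http://httpbin.org/status/500")).attempt
      _ <- Stream(Stream.eval(call)).repeat
        .covary[IO]
        .parJoin(numConnections)
        .take(1000)
        .evalMap(r => IO(println(">>> " + counter.incrementAndGet + " " + r)))
        .compile
        .drain
      // With enough leaked connections, this final request never completes.
      s <- client.status(Request[IO](uri = uri"http://httpbin.org/status/500")).attempt
      _ <- IO(println("STATUS = " + s))
    } yield ()

    program.unsafeRunSync()
  }
}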
Some observations after playing with this:

- The hang can be avoided by decreasing `numConnections` or the `parJoin` parameter, or by increasing the client's max total connections.
- `WaitQueueTimeoutException` isn't always printed, but there is always a `DEBUG org.http4s.client.PoolManager - Request expired` log message (and this never appears during successful runs).
- Removing `attempt` from the initial client call causes the program to fail with a `java.util.concurrent.TimeoutException: Request timeout after 1 ms` error, rather than hanging.

Considering the last point, I'm not sure this is something that should (or can) be fixed in http4s; it may instead be an expected result of using `attempt` and `parJoin` together. The `parJoin` comment says: … so it could be that this example app creates a deadlock where the outer stream is paused and none of the inner streams can finish evaluation, although I'm still trying to work out exactly how that could happen.
edit: possibly a duplicate of https://github.com/http4s/http4s/issues/2068
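To make the leak described in the first comment concrete, here is a simplified, hypothetical model of the dispatch step. It is not the actual org.http4s.client.PoolManager code, just a sketch of the behaviour described above:

import scala.collection.mutable

// Toy model only: connections are plain strings, waiters are callbacks.
final case class Waiter(expired: Boolean, complete: Either[Throwable, String] => Unit)

final class ToyPool {
  private val waitQueue = mutable.Queue.empty[Waiter]
  private val idleConns = mutable.Queue.empty[String]
  private var activeConns = 0

  // Called when a borrowed connection has finished its request.
  def releaseConnection(conn: String): Unit =
    if (waitQueue.nonEmpty) {
      val next = waitQueue.dequeue()
      if (next.expired) {
        // The expired waiter is failed, as expected...
        next.complete(Left(new RuntimeException("Request expired")))
        // ...but nothing is done with `conn`: it is neither handed to another
        // waiter nor returned to `idleConns`, and `activeConns` is not
        // decremented. The slot is lost; once this has happened
        // maxConnectionsPerRequestKey times, every new request waits forever.
      } else {
        next.complete(Right(conn)) // normal path: reuse the connection
      }
    } else {
      activeConns -= 1
      idleConns.enqueue(conn)
    }
}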