
Async implementation feedback


We just released async support in Javalin 1.6.0!

However, I have a feeling that the current implementation isn’t perfect. Please use this thread to suggest improvements, or just to complain (then we’ll think up improvements).

The current (v1.6.0) implementation:

// usage: 
app.get("/") { ctx -> ctx.result(string) } // blocking
app.get("/") { ctx -> ctx.result(completableFuture) } // async

// what happens behind the scenes:
tryBeforeAndEndpointHandlers()
val future = ctx.resultFuture()
if (future == null) {
    tryErrorHandlers()
    tryAfterHandlers()
    writeResult(ctx, res)
} else {
    req.startAsync().let { asyncContext ->
        future.exceptionally { throwable ->
            if (throwable is Exception) {
                exceptionMapper.handle(throwable, ctx)
            }
            null
        }.thenAccept {
            when (it) {
                is InputStream -> ctx.result(it)
                is String -> ctx.result(it)
            }
            tryErrorHandlers()
            tryAfterHandlers()
            writeResult(ctx, asyncContext.response as HttpServletResponse)
            asyncContext.complete()
        }
    }
}

Users who don’t care about async are not affected by the functionality in any way.
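As a concrete illustration of the two usage styles shown above, here is a minimal sketch. The port, paths, pool size, and sleep duration are illustrative, not from the issue; `Javalin.start(port)` is the 1.x-era static helper.

```kotlin
import io.javalin.Javalin
import java.util.concurrent.CompletableFuture
import java.util.concurrent.Executors

// A dedicated pool for slow work, so Jetty's request threads are not held.
// Pool size, port, and paths are illustrative.
val workerPool = Executors.newFixedThreadPool(8)

fun main() {
    val app = Javalin.start(7000) // Javalin 1.x-style start

    // Blocking: the Jetty thread is held for the full two seconds.
    app.get("/sync") { ctx ->
        Thread.sleep(2000)
        ctx.result("done")
    }

    // Async: the handler returns immediately; Javalin writes the result
    // when the future completes on the worker pool.
    app.get("/async") { ctx ->
        ctx.result(CompletableFuture.supplyAsync({
            Thread.sleep(2000)
            "done"
        }, workerPool))
    }
}
```

Under load, the `/sync` handler ties up a Jetty thread per in-flight request, while `/async` frees the Jetty thread as soon as the future is handed over.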

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Reactions: 1
  • Comments: 10 (4 by maintainers)

Top GitHub Comments

3 reactions
joatmon commented, Apr 15, 2018

I put together a sample async benchmark application based on my PR. The application can be found here: https://github.com/joatmon/spark-async-benchmark

I’ve included some sample scripts that demonstrate that, under heavy load, using async request processing for long requests improves the throughput of short synchronous requests by 750x.

When the short queries are competing for Jetty threads with the long queries, the application is able to process 250 short requests in ten seconds. With the long queries handled by the async thread pool, the application processes 188,454 short requests in ten seconds.

0 reactions
tipsy commented, May 4, 2018

I got some great input from /u/0x256 on /r/java; I’m posting it here with his permission. In short, the current implementation is good for long-running tasks with short responses (the problem we set out to solve), but there are some things we need to fix if we want Javalin to be truly async. We probably don’t want that, but the info he provided is great nonetheless:

That [our implementation] is not async IO, that is just delaying the response. Do not use JAX-RS as an example of how async should work in web frameworks, because they got it wrong (or incomplete, IMHO):

  • In JAX-RS, there is no way to read the request body in a non-blocking way. There is only InputStream and that blocks on slow clients.
  • You can delay the response (and not block a request thread while waiting for the response to be created) using @Suspended AsyncResponse, but once your response is ready, you have to write it all at once, and that will block if your response is bigger than the write buffer. No support for streaming large responses without blocking on slow clients or buffering to disk.
  • You might think that StreamingOutput is perfect for, you know, streaming output. Nah, it sucks: It reads from a (blocking) InputStream and writes to an unbounded and/or blocking write buffer (no back-pressure control). For large downloads, you either run into OOMs, end up buffering to temporary files, or block a thread most of the time (depending on the implementation).
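For contrast, the `@Suspended AsyncResponse` pattern being criticised looks roughly like this sketch. The resource path and the `buildLargeReport` helper are made up for illustration.

```kotlin
import java.util.concurrent.CompletableFuture
import javax.ws.rs.GET
import javax.ws.rs.Path
import javax.ws.rs.container.AsyncResponse
import javax.ws.rs.container.Suspended

// The JAX-RS delayed-response pattern: the request thread is released while
// the entity is produced, but resume() still hands the whole entity to the
// container in one shot, which can block on a slow client or a full buffer.
@Path("/report")
class ReportResource {
    @GET
    fun report(@Suspended response: AsyncResponse) {
        CompletableFuture.supplyAsync { buildLargeReport() } // slow work off the request thread
            .thenAccept { body -> response.resume(body) }    // one-shot write of the full entity
    }

    private fun buildLargeReport(): String = "...report body..." // hypothetical helper
}
```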

Raw Servlets (3.1+) do support real async IO. You can read from the request and write to the response, both in a non-blocking way and with full (albeit really hard to get right) back-pressure control. The only problem is that it is really easy to 'drop the ball' with async servlets and end up in a state where you are mistakenly waiting for the client because you missed a listener call or forgot to call ServletOutputStream.isReady() until it returns false. Or to issue read or write calls more often than you are allowed to and get mixed results. Or re-use a buffer you are not supposed to re-use before some event happened. Writing a simple async echo client (just copy input to output) is much harder than it should be. But it works. Not so with JAX-RS (or Javalin).

A simple test for a web framework that claims to support async IO is to ask: Can I write an 'echo' application that accepts requests of arbitrary size, copies the bytes to the response body of the same request, and serves (significantly) more concurrent clients than there are threads? If you have blocking read/write calls or unbounded buffers (memory or disk) anywhere in your pipeline, then the answer is no. Slow clients will eventually exhaust your thread pool, heap or disk space.
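For reference, the raw Servlet 3.1 echo described above looks roughly like this. It is a deliberately simplified sketch: coordinating the two listeners correctly (the 'drop the ball' problem) needs more state than shown here.

```kotlin
import javax.servlet.ReadListener
import javax.servlet.WriteListener
import javax.servlet.annotation.WebServlet
import javax.servlet.http.HttpServlet
import javax.servlet.http.HttpServletRequest
import javax.servlet.http.HttpServletResponse

// Simplified sketch of a Servlet 3.1 non-blocking echo. The hard parts the
// comment warns about are visible: isReady() must be polled until it returns
// false, and the read and write listeners must hand off to each other. Real
// code needs extra state to resume copying from onWritePossible and to guard
// against completing the AsyncContext twice.
@WebServlet(urlPatterns = ["/echo"], asyncSupported = true)
class AsyncEchoServlet : HttpServlet() {
    override fun doPost(req: HttpServletRequest, resp: HttpServletResponse) {
        val async = req.startAsync()
        val input = req.inputStream   // ServletInputStream (3.1 async-capable)
        val output = resp.outputStream
        val buffer = ByteArray(4096)

        input.setReadListener(object : ReadListener {
            override fun onDataAvailable() {
                // Copy while the container guarantees a non-blocking read...
                while (input.isReady && !input.isFinished) {
                    val n = input.read(buffer)
                    if (n > 0) output.write(buffer, 0, n)
                    // ...and stop as soon as a write would block; the
                    // WriteListener is then responsible for resuming.
                    if (!output.isReady) return
                }
            }
            override fun onAllDataRead() { if (output.isReady) async.complete() }
            override fun onError(t: Throwable) { async.complete() }
        })
        output.setWriteListener(object : WriteListener {
            override fun onWritePossible() { /* resume copying; elided here */ }
            override fun onError(t: Throwable) { async.complete() }
        })
    }
}
```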

I wrote an abstraction layer on top of javax.servlet.AsyncContext that basically offers this non-blocking API (simplified):

  • ctx.read(ByteBuffer, ReadyCallback<ByteBuffer>) Reads bytes from the request body into the supplied buffer and triggers the callback on success. The buffer will contain at least one byte as long as the request body is not fully consumed, and zero bytes after that.
  • ctx.write(ByteBuffer, ReadyCallback<ByteBuffer>) Writes bytes to the response body and triggers the callback once the bytes are fully written and the ByteBuffer can be re-used.

There are three timeouts: The read timeout is triggered if a read request is not completed in time. The write timeout works the same for write requests. The third timeout (application timeout) is triggered if the application does not issue a read or write request (and does not 'ping' the context) for a long time (prevents drop-the-ball bugs).

It is also an error to issue a read or write request if there is still a pending request of the same type. This prevents some out-of-order races and concurrency bugs and ensures that intermediate buffers do not grow out of bounds (pressure control).

It should be easy to use this API with CompletableFuture if you prefer that.
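A sketch of how that API and the CompletableFuture adaptation might fit together. `AsyncCtx` and `ReadyCallback` are made-up names mirroring the description above, not a published library; timeouts and pending-request enforcement are omitted.

```kotlin
import java.nio.ByteBuffer
import java.util.concurrent.CompletableFuture

// Hypothetical shapes mirroring the API described above; not a real library.
fun interface ReadyCallback<T> { fun onReady(value: T) }

interface AsyncCtx {
    fun read(buffer: ByteBuffer, callback: ReadyCallback<ByteBuffer>)
    fun write(buffer: ByteBuffer, callback: ReadyCallback<ByteBuffer>)
}

// Echo: copy the request body to the response using one bounded buffer.
// Back-pressure falls out of the API: at most one read or write is in flight.
fun echo(ctx: AsyncCtx, buffer: ByteBuffer) {
    ctx.read(buffer) { filled ->
        if (filled.position() == 0) return@read // zero bytes: body fully consumed
        filled.flip()
        ctx.write(filled) { reusable ->
            reusable.clear()
            echo(ctx, reusable) // next chunk
        }
    }
}

// The suggested CompletableFuture adaptation is a thin wrapper:
fun AsyncCtx.readAsync(buffer: ByteBuffer): CompletableFuture<ByteBuffer> =
    CompletableFuture<ByteBuffer>().also { f -> read(buffer) { f.complete(it) } }
```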
