Cancelling HTTP requests on the JVM
HTTP is one of the primary ways of exposing applications to the outside world and of communication between microservices within a system. Almost every backend service, at some point, performs an HTTP request.
However, in certain situations, we are no longer interested in the results of an HTTP call. This might happen for a couple of reasons, for example:
- we might perform a couple of HTTP requests in parallel, processing only the result of the one that responds fastest (a race)
- a downstream HTTP request might take too long, causing a timeout response to be sent upstream or a default value (cached or computed) to be used instead
The most common way of dealing with HTTP calls in which we lose interest is to simply abandon them. They run to completion, but the response is never used. Such a response might even be read into memory entirely and parsed, e.g. into JSON, before being abandoned, wasting both memory and CPU. Can we do better?
Turns out that we usually can. Most of the JVM HTTP clients we looked at offer an option to cancel an ongoing HTTP request. Let's examine how this works.
What does canceling an HTTP request mean?
Because of how HTTP is designed, once an HTTP request is sent, we usually can't do anything to stop it from being processed on the server side. Hence, considering the possible effects that our request might have on the server (if it's e.g. a POST), we have to assume that the request ran to completion.
So what's the point of canceling the request? Apart from processing the request on the server, there's also the stage of sending the response back to the client. This is especially important if the response has a non-trivial size. Furthermore, as we already mentioned, the client might do some further parsing of the response, which should be skipped if the response is going to be discarded anyway.
Hence, the goal of canceling an ongoing HTTP request is to prevent the response from being transmitted, parsed, or read into the client's memory.
Our test
We ran a simple test: a single endpoint and a single request. After receiving a request, the server waits for 2 seconds and sends back a 100MB response. The client sends the request but decides to cancel it after 1 second (no response data should have been transmitted by that point). What "canceling" means depends on the particular client implementation that is being used.
We tested the following clients: async-http-client, Java's built-in HttpClient, OkHttp and http4s. To compare the behavior, we used three servers, based on Jetty, http4s and akka-http.
The source code for both the clients and the servers used is available on GitHub.
TCP: What happens when a request is canceled
At the TCP level, the flow is always the same, regardless of the client used. For now, we'll focus on HTTP/1.1.
When the request is canceled, the underlying TCP connection is closed on the client's side by sending a FIN packet. The server acknowledges this, but the connection remains half-open (the server doesn't close its side yet).
In our test, we cancel the request after 1 second, and the data starts to be sent by the server after 2 seconds. When that time passes, and the server sends the first data frames, it doesn't receive acknowledgments for them: on the client's side, the socket is closed. Instead, the client responds with RST (reset) packets.
At that point, the server stops sending further response data. In the test, usually about 32KB of data was transmitted (instead of the entire 100MB), which is a huge saving.
Note that the server only "finds out" that the socket is closed when it attempts to send some response data. As mentioned in the introduction, there's no way of interrupting the server-side processing, which usually runs to completion.
Here's a capture of one test run done using Wireshark:
After canceling a request, the connection is closed, regardless of the Keep-Alive headers. The fact that a new connection will have to be established, instead of reusing one that is already open, might impact performance negatively. Hence, it can also make sense not to cancel the requests after all, and instead keep the connection open.
async-http-client
Here's an example test run performed using async-http-client. The result of executing a call is a custom Future implementation, which contains a cancel method (the code is written in Scala, but an equivalent implementation can be done in any JVM language):
import org.asynchttpclient.DefaultAsyncHttpClient
import org.slf4j.LoggerFactory
import sys.process._
object Run extends App {
  val log = LoggerFactory.getLogger(this.getClass)

  val client = new DefaultAsyncHttpClient()

  val p = "/Applications/Wireshark.app/Contents/MacOS/wireshark -i lo0 -k -a duration:10".run()
  Thread.sleep(3000) // wait for Wireshark to start

  log.info("Sending ...")
  val f = client.executeRequest(
    client.prepareGet("http://localhost:8080/wait").build())
  log.info("Sent ...")

  Thread.sleep(1000)
  f.cancel(true)

  log.info("Done.")
}
We're additionally starting a Wireshark capture at the beginning. The crucial call is, of course, f.cancel(true). Attempting to .get the result afterward will result in an exception.
In async-http-client, there's also an .abort(Throwable) method; however, it seems to run only part of the logic that is run when canceling, hence cancel appears to be the better option. Both have the same effect at the TCP level, though.
Since the response data is not received, it's not parsed or processed further on the client side.
HttpClient
The code when using the HttpClient that's built into Java is very similar. Sending a request using .sendAsync also results in a Future implementation, which can be canceled.
However, canceling is a no-op up to JDK 15. From JDK 16 onwards, this works exactly the same as with async-http-client.
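For illustration, here's a minimal sketch of this approach (assuming the same local test endpoint as in the previous example; the object name is ours):

import java.net.URI
import java.net.http.{HttpClient, HttpRequest}
import java.net.http.HttpResponse.BodyHandlers

object RunHttpClient extends App {
  val client = HttpClient.newHttpClient()

  // sendAsync returns a CompletableFuture[HttpResponse[String]]
  val f = client.sendAsync(
    HttpRequest.newBuilder(new URI("http://localhost:8080/wait")).GET().build(),
    BodyHandlers.ofString()
  )

  Thread.sleep(1000)

  // on JDK 16+ this cancels the in-flight request (closing the connection);
  // on earlier JDKs it's a no-op and the request runs to completion
  f.cancel(true)
}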
HttpClient using Loom's virtual threads
Virtual threads are a preview feature of Java 19, and are set to mostly eliminate the need for working with asynchronous code and Futures. Can we cancel an ongoing, synchronous HTTP request? Turns out that this works just fine. Tested on JDK 20:
import org.slf4j.LoggerFactory
import java.net.URI
import java.net.http.HttpResponse.BodyHandlers
import java.net.http.{HttpClient, HttpRequest}
import scala.sys.process._
object Run extends App {
  val log = LoggerFactory.getLogger(this.getClass)

  val client = HttpClient.newHttpClient()

  val p = "/Applications/Wireshark.app/Contents/MacOS/wireshark -i lo0 -k -a duration:10".run()
  Thread.sleep(3000) // wait for Wireshark to start

  log.info("Starting virtual thread ...")
  val t = Thread.startVirtualThread(() => {
    log.info("Sending ...")
    val r = client.send(
      HttpRequest
        .newBuilder(new URI("http://localhost:8080/wait"))
        .GET().version(HttpClient.Version.HTTP_1_1)
        .build(),
      BodyHandlers.ofString()
    )
    log.info(s"Received, body length: ${r.body().length}")
  })

  Thread.sleep(1000)
  log.info("Interrupting ...")
  t.interrupt()

  log.info("Done.")
}
Here, the crucial operation is t.interrupt(). This cancels the ongoing HTTP request, with the effects described above.
OkHttp
Both the synchronous and asynchronous versions of sending a request using OkHttp work similarly to what's described above. In the case of an asynchronous request, we get a Future-like value, which can be canceled. In the synchronous case, the thread running the request can be interrupted.
However, when interrupting an OkHttp request, the thread's interrupted flag remains set. Hence, any subsequent blocking operation (e.g. in a finally block) will immediately throw an InterruptedException, which might not be the desired outcome.
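For reference, a minimal sketch of the asynchronous variant might look as follows (assuming the same local test endpoint; the object name and log messages are ours), with call.cancel() playing the role that f.cancel(true) played in the earlier examples:

import okhttp3.{Call, Callback, OkHttpClient, Request, Response}
import java.io.IOException

object RunOkHttp extends App {
  val client = new OkHttpClient()
  val request = new Request.Builder().url("http://localhost:8080/wait").build()

  val call = client.newCall(request)
  call.enqueue(new Callback {
    // invoked both on I/O failures and when the call is canceled
    override def onFailure(call: Call, e: IOException): Unit =
      println(s"Failed or canceled: ${e.getMessage}")

    override def onResponse(call: Call, response: Response): Unit = {
      println(s"Received, status: ${response.code()}")
      response.close()
    }
  })

  Thread.sleep(1000)
  call.cancel() // closes the underlying socket; onFailure is invoked
}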
Http4s
For comparison, let's look at a request sent using a library from the cats-effect ecosystem. Cats-effect is a concurrent programming library and an asynchronous runtime (available for Scala on the JVM, JS, and Native). It has its own notion of lightweight threads (called fibers) and requires a slightly different programming style.
Here's our client code:
import cats.effect._
import org.http4s.ember.client.EmberClientBuilder
import org.slf4j.LoggerFactory
import scala.concurrent.duration.DurationInt
import sys.process._
object Run extends IOApp {
  private val log = LoggerFactory.getLogger(this.getClass)

  override def run(args: List[String]): IO[ExitCode] = EmberClientBuilder
    .default[IO]
    .build
    .use { client =>
      for {
        _ <- IO("/Applications/Wireshark.app/Contents/MacOS/wireshark -i lo0 -k -a duration:10".run())
        _ <- IO.sleep(3.seconds)
        _ <- IO(log.info("Sending ..."))
        f <- client
          .expect[String]("http://localhost:8080/wait")
          .start
        _ <- IO(log.info("Sent ..."))
        _ <- IO.sleep(1.second)
        _ <- IO(log.info("Cancelling ..."))
        _ <- f.cancel
        _ <- IO(log.info("Done."))
      } yield ()
    }
    .as(ExitCode.Success)
}
The for-comprehension causes the nested effect descriptions to be run in sequence. The .start operation starts running the computation described by the effect on which it's invoked in a new fiber, in the background. Here, this amounts to performing the HTTP request concurrently.
The fiber can be canceled using f.cancel, and that's what we are using to implement canceling of an HTTP request.
There are some important differences between interrupting a (virtual) thread and canceling a fiber. One of them is that by default, canceling a fiber always back-pressures, that is, it waits for the fiber to run to completion (there might be e.g. some finalizers to run). With a thread, .interrupt() needs to be followed by .join() to achieve the same effect. This design decision allows cats-effect to achieve even higher resource safety, compared to alternatives.
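For example, to get the equivalent "cancel and wait" behavior with the virtual thread t from the Loom example above, the interrupt needs to be paired with a join:

// request cancellation of the HTTP request running on the virtual thread ...
t.interrupt()
// ... and wait until the thread has actually finished (including any
// finally blocks), mirroring the back-pressure that f.cancel provides
t.join()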
HTTP/2
When the same test is run using HTTP/2 (Java's HttpClient and Jetty support the protocol), there's one important difference: after canceling a request, the connection is not closed. Instead, a RST_STREAM frame is sent, which closes only one of the streams being multiplexed onto the given connection.
This way, connections can still be re-used, avoiding the overhead of establishing a connection when a request is sent while keeping the benefits of avoiding unnecessary data transmission.
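To run the same test over HTTP/2 with Java's HttpClient, it's enough to request that protocol version when building the client; here's a rough sketch (the object name is ours):

import java.net.URI
import java.net.http.{HttpClient, HttpRequest}
import java.net.http.HttpResponse.BodyHandlers

object RunHttp2 extends App {
  // prefer HTTP/2; the client falls back to HTTP/1.1 if the server can't speak it
  val client = HttpClient.newBuilder()
    .version(HttpClient.Version.HTTP_2)
    .build()

  val f = client.sendAsync(
    HttpRequest.newBuilder(new URI("http://localhost:8080/wait")).GET().build(),
    BodyHandlers.ofString()
  )

  Thread.sleep(1000)
  // over HTTP/2, this sends RST_STREAM for this request's stream,
  // but keeps the underlying connection open for reuse
  f.cancel(true)
}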
Streaming
We also tested an additional scenario involving an endpoint that streams data incrementally to the client. In our test, this amounted to sending 1KB of data fragments every 100 milliseconds for 3 seconds. The client canceled the request after 1 second, as before.
The results are very similar: the client received the first second's worth of data, then a FIN packet was sent, closing the socket on the client's side. Upon receiving the next data fragment, the client responded with an RST.
On the server side, we implemented the streaming in two variants, using akka-http and http4s, as these libraries allow us to define streaming endpoints effortlessly. When the client closes the connection, the stream is completed with an IOException: Broken pipe.
Hence in this scenario, canceling the request on the client side propagates to the server, avoiding unnecessary work of creating subsequent stream elements.
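For illustration, a streaming endpoint along these lines could be defined in http4s roughly as follows; this is a sketch, not the exact server code used in the test, and the object and route names are ours:

import cats.effect.IO
import fs2.Stream
import org.http4s.HttpRoutes
import org.http4s.dsl.io._
import scala.concurrent.duration.DurationInt

object StreamingRoutes {
  // emit a 1KB fragment every 100 milliseconds, 30 fragments in total (~3 seconds);
  // if the client closes the connection, the stream fails with "Broken pipe"
  // and no further fragments are produced
  val routes: HttpRoutes[IO] = HttpRoutes.of[IO] {
    case GET -> Root / "stream" =>
      Ok(Stream.awakeEvery[IO](100.millis).map(_ => "x" * 1024).take(30))
  }
}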
Takeaways
Summing up what we found out in our investigation above:
- HTTP clients that do support cancellation are async-http-client, Java's built-in HttpClient (from JDK 16), OkHttp and http4s
- HTTP clients that do not support cancellation include Java's built-in HttpClient (before JDK 16) and akka-http
- canceling an HTTP request will close the connection when using HTTP/1.1. In HTTP/2, the connection is kept open and can be used to send other requests
- the server will only "find out" that the connection is closed after attempting to send the first frames of the response; hence some data will always be transmitted
- when using project Loom (Virtual Threads), requests sent using HttpClient or OkHttp can be canceled by interrupting the thread on which they run; however, OkHttp keeps the interrupted flag set
- when the response data is created incrementally on the server, cancellation might propagate from the client, closing the server-side stream
And to wrap up, some self-promotion: if you're looking for the best HTTP client API out there using the best FP+OO language, we've got you covered: check out sttp :). Needless to say, whenever possible, sttp integrates with the underlying HTTP client (such as the ones described above) to support request cancellation.