A tapir looms in the distance

Adam Warski

30 Sep 2022 · 3 minutes read

The moment Java and JVM developers have long been waiting for came last week: Java 19 has been released. Among other changes (also significant!), the release includes a preview of virtual threads, which are part of Project Loom.

We've been discussing Loom on this blog a couple of times already, but now we can play with the real thing in a stable release!

Loom recap

Just a very short recap: virtual threads make it cheap to start thousands of threads. These threads are multiplexed onto a much smaller number of platform threads. If you know green threads or fibers, it's the same idea, but implemented as part of the JVM.
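For illustration, here's a minimal sketch using the plain JVM API (Thread.startVirtualThread, a preview API in Java 19, so --enable-preview is needed): ten thousand sleeping tasks, yet only a handful of platform threads do the work.

object VirtualThreadsDemo extends App {
  val threads = (1 to 10000).map { _ =>
    Thread.startVirtualThread { () =>
      Thread.sleep(1000) // blocking is cheap on a virtual thread
    }
  }
  threads.foreach(_.join())
  println("All virtual threads completed")
}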

As this is a JVM feature, we can immediately start using virtual threads in Scala. However, most Scala libraries use the "wrapper" (monadic) style, representing any side-effecting computation as a Future, Task or IO (depending on the library you are using). This gives us the performance benefits of running side effects asynchronously, as well as a high-level concurrency API and additional compile-time guarantees.

The wrapper approach stands in contrast to Loom, which brings us back to the "direct" style of programming. Thanks to virtual threads, we keep the performance benefits while regaining useful stack traces and the ability to use regular control structures. But we also lose the ability to represent program fragments as values. I doubt there's an answer as to which approach is better (apart from "it depends"), but that's a topic for another debate.
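To make the contrast concrete, here's a small sketch; fetchUserF and fetchUser are hypothetical stand-ins for some side-effecting operation:

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

object StylesDemo extends App {
  // Wrapper ("monadic") style: the computation is a value that we compose
  def fetchUserF(id: Int): Future[String] = Future(s"user-$id")
  val wrapped: Future[String] = fetchUserF(1).map(_.toUpperCase)

  // Direct style: on a virtual thread, we can just call blocking code and use
  // ordinary control structures
  def fetchUser(id: Int): String = s"user-$id" // imagine a blocking HTTP call here
  val direct: String = fetchUser(1).toUpperCase
}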

All this boils down to the fact that some adjusting will be needed in the Scala ecosystem to integrate more seamlessly with Loom. The good news is that both sttp client and tapir are ready!

sttp client

For sttp client, the case is really simple: just use one of the existing synchronous backends, or the freshly introduced simple synchronous client:

import sttp.client3.{SimpleHttpClient, UriContext, basicRequest}

val client = SimpleHttpClient()
val response = client.send(
  basicRequest.get(uri"https://httpbin.org/get"))
println(response.body)
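Note that with basicRequest, the response body is an Either[String, String] (a Left for non-2xx responses), so instead of printing it directly, we could handle both cases:

response.body match {
  case Right(body) => println(s"Got: $body")
  case Left(error) => println(s"Request failed: $error")
}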

For tapir, we need to add synchronous interpreters. That's where tapir-loom comes in. The project defines two interpreters.

tapir Netty interpreter

The first one is based on Netty and reuses a lot of the code of the Netty+Future and Netty+cats-effect-IO interpreters, but with an identity effect wrapper (type Id[X] = X). Netty itself uses platform threads to implement its event loop (no changes here), but the server logic for each endpoint is run on a dedicated virtual thread and hence can freely call blocking operations without any penalties. Here's a simple example of a Netty-based, synchronous, blocking-logic server:

import sttp.tapir._
// Id and NettyIdServer are provided by the tapir-loom modules
// (their imports are omitted here)

object SleepDemo extends App {
  val e = endpoint.get.in("hello").out(stringBody)
    .serverLogicSuccess[Id] { _ =>
      Thread.sleep(1000)
      "Hello, world!"
    }
  NettyIdServer().addEndpoint(e).start()
}

You can try putting some load on this app. Even in the presence of many concurrent requests, only a handful of platform threads will be used (mostly by Netty). The implementation of NettyIdServer consists mostly of convenience APIs for configuring the server; the most important part is submitting the user-provided request-handling logic to an executor:

case class NettyIdServer[SA <: SocketAddress](
  routes: Vector[IdRoute], options: NettyIdServerOptions[SA]) {

  private val executor = Executors.newVirtualThreadPerTaskExecutor()

  (...)
}
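By the way, putting load on the server is itself a nice virtual threads exercise. Here's a rough sketch using the SimpleHttpClient shown earlier; the URI and port are assumptions, so adjust them to wherever the server actually listens:

import sttp.client3.{SimpleHttpClient, UriContext, basicRequest}

object LoadDemo extends App {
  val client = SimpleHttpClient()
  // 1000 concurrent requests, each blocking for ~1s in the server's logic,
  // yet only a handful of platform threads are busy on either side
  val threads = (1 to 1000).map { _ =>
    Thread.startVirtualThread { () =>
      client.send(basicRequest.get(uri"http://localhost:8080/hello"))
    }
  }
  threads.foreach(_.join())
}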

tapir Nima interpreter

The second interpreter is based on the Helidon Nima project, which implements an HTTP server (and more) using virtual threads exclusively. You might say it's a Loom-native implementation. However, it's at an early stage of development (currently, an alpha1 release is available), so it's probably not yet ready for production usage.

That doesn't stop us from implementing a Nima interpreter, of course! There's a bit more work than in the Netty case, as we need to write the Nima <-> tapir translation layer, but it's still pretty straightforward. This time, we don't need to create a virtual thread executor explicitly; everything is handled by Nima.

As a result of the interpretation, we get an io.helidon.nima.webserver.http.Handler, which we can use to build a web server:

import io.helidon.nima.webserver.WebServer
import sttp.tapir._
// Id and NimaServerInterpreter come from the tapir-loom Nima module
// (their imports are omitted here)

object SleepDemo extends App {
  val e = endpoint.get.in("hello").out(stringBody)
    .serverLogicSuccess[Id] { _ =>
      Thread.sleep(1000)
      "hello, world!"
    }
  val h = NimaServerInterpreter().toHandler(List(e))
  WebServer.builder().routing(_.any(h)).port(8080).start()
}

Take a look for yourself: explore the tapir-loom code and let us know what you think!

Adopt a tapir

Although the Loom interpreters are not yet an option there, you can quickly bootstrap a tapir-based project using adopt tapir.

Have fun! :)
