Migrating from Akka HTTP to tapir

In light of the recent announcement by Lightbend that the entire Akka stack is going to be relicensed from an open-source licence to a source-available one, I'm sure many development teams are assessing their options and considering next steps.

Akka is used in a wide range of situations: local concurrency, HTTP servers, streaming, managing clusters, running distributed computations, and implementing event sourcing. Depending on your use case, the right course of action might be to purchase the licence (which is needed if your company is large enough) and continue using Akka, or it might make more sense to replace Akka in your system.

For example, if you are only using Akka in the HTTP layer, there are a number of supported open-source HTTP servers available. Lightbend will support the current Akka version for one year (until September 2023), so there should be enough time to make an informed decision and, if needed, complete the required redevelopment.

One of the open-source projects whose development we lead, tapir, might be a viable alternative to Akka HTTP. Here's a short guide to how you might approach migrating your application from Akka HTTP to tapir.

Step 1: tapir and Akka HTTP side by side

Tapir is a declarative, type-safe web endpoints library. It offers a programmer-friendly API to describe endpoints and interpret them as a server, OpenAPI documentation, or a client.

Tapir doesn't implement a full web server itself. Instead, it integrates with a number of server implementations. Hence, the same high-level API can be used to expose your endpoints using a number of different technologies.

One such technology is Akka HTTP. That is, you can describe an endpoint using tapir's API and interpret it as an Akka HTTP Route. Such a Route can then be composed with other routes, defined using Akka HTTP's API directly.

This makes it possible to perform the Akka HTTP -> tapir migration gradually.

For example (the full source code with all the imports can be found here), we can define a GET /hello?name=... endpoint and interpret it as an Akka Route as follows:

import akka.http.scaladsl.server.Route
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

val helloWorldRoute: Route = {
  import sttp.tapir._
  import sttp.tapir.server.akkahttp.AkkaHttpServerInterpreter

  // GET /hello?name=..., returning a plain-text response body
  val helloWorld: PublicEndpoint[String, Unit, String, Any] =
    endpoint.get.in("hello").in(query[String]("name")).out(stringBody)

  AkkaHttpServerInterpreter().toRoute(helloWorld
    .serverLogicSuccess(name => Future.successful(s"Hello, $name!")))
}

An equivalent route can be defined using Akka HTTP directly:

import akka.http.scaladsl.server.Route

val helloWorldRoute2: Route = {
  import akka.http.scaladsl.server.Directives._

  get {
    path("hello2") {
      parameter("name".as[String]) { name =>
        complete(s"Hello, $name!")
      }
    }
  }
}

Then, both can be combined and exposed in a single server:

val combinedRoutes = {
  import akka.http.scaladsl.server.Directives._
  helloWorldRoute ~ helloWorldRoute2
}

// requires an implicit ActorSystem in scope
Http().newServerAt("localhost", 8080).bindFlow(combinedRoutes)

Bonus: expose documentation of your endpoints

While the tapir definition of an endpoint is slightly more verbose, there's something you can get in return. As we capture the structure of the endpoint separately from the logic that should be run when the endpoint is invoked, we can use that information to generate OpenAPI documentation for our endpoints.

In Akka HTTP, the routes are fully dynamic. E.g. after extracting the value of a query parameter, you can use different routes based on the run-time value of that parameter. In tapir, the structure of the endpoint is static and is defined before and separately from the server-side logic.

For example, here's how you would expose documentation for the /hello endpoint above (obtaining one more Akka HTTP Route):

import sttp.tapir.swagger.bundle.SwaggerInterpreter

val swaggerUIRoute =
  AkkaHttpServerInterpreter().toRoute(
    SwaggerInterpreter()
      .fromEndpoints[Future](List(helloWorld), "Hello", "1.0.0")
  )

Step 2: rewrite routes into endpoints

The basic building blocks used to define the request-side structure of an endpoint are similar in tapir and Akka HTTP. In tapir, we've got path inputs such as .in("hello"); in Akka, we've got the path("hello") directive. In tapir, there's the query[String]("name") input, and in Akka, the parameter("name".as[String]) directive. The same goes for headers and other data that can be extracted from the request.

The situation is slightly different for bodies. In tapir, the declaration that a request or response body should be serialised as JSON is more explicit than in Akka HTTP. In tapir, we've got e.g. jsonBody[User], while in Akka, you just pass in a User instance, e.g. complete(someUser), and this will use the Marshaller that is in scope.

While both approaches rely on implicitly available JSON encoders/decoders (plus, in tapir, the Schema for documentation), arguably the "developer experience" is better in tapir, as no implicit conversions are involved, which makes error reports more precise.

As for JSON support, tapir integrates with the most popular Scala JSON libraries. You might need to add an additional dependency to get the integration layer, but you should be able to use the same encoders/decoders for your data types that you've used with Akka HTTP.
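As a sketch of what the explicit declaration looks like (assuming the tapir-json-circe integration module and circe's generic derivation; the User type is made up for illustration):

```scala
import sttp.tapir._
import sttp.tapir.json.circe._   // tapir <-> circe integration (tapir-json-circe)
import sttp.tapir.generic.auto._ // derives tapir Schemas for case classes
import io.circe.generic.auto._   // derives circe encoders/decoders

// hypothetical data type used for illustration
case class User(name: String, age: Int)

// POST /users with a JSON request body, echoing the user back as JSON
val createUser: PublicEndpoint[User, Unit, User, Any] =
  endpoint.post.in("users").in(jsonBody[User]).out(jsonBody[User])
```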

The response-side structure is defined in tapir in a very similar way to the request side. Instead of inputs, we define outputs. In fact, in tapir, the same values describing headers and bodies can be used both as an input and as an output. This is in contrast to Akka HTTP, where the response is often free-form—you just return an appropriate response instance, or provide arguments to complete. Hence this aspect might require more attention when rewriting a route to an endpoint.
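For instance, a single value describing a header (here a made-up X-Trace-Id header, used purely for illustration) can serve as both an input and an output:

```scala
import sttp.tapir._

// a single description of the X-Trace-Id header...
val traceId = header[String]("X-Trace-Id")

// ...used both as an input (read from the request) and as an output
// (written to the response)
val echoTrace: PublicEndpoint[String, Unit, String, Any] =
  endpoint.get.in("echo").in(traceId).out(traceId)
```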

Error handling

The way errors are handled is quite different in tapir and Akka HTTP. First, in tapir, there are dedicated error outputs, which should be used for endpoint-specific, business-logic-level errors. For example, if a request tries to create a user and one with the given id already exists in a database, you might return a "conflict" error.

How these business-logic-level errors map to outputs is defined as part of the endpoint description. Very often there might be multiple error variants, each with its own status code. In such situations, oneOf outputs might be helpful. This is more complex than simply returning a response in Akka HTTP with the right status code, however, it does capture the endpoint description in its entirety and allows creating precise documentation.
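A minimal sketch of such a description (the error types, messages, and status-code mapping are made up for illustration):

```scala
import sttp.tapir._
import sttp.model.StatusCode

// hypothetical business-level error variants
sealed trait UserError
case class Conflict(msg: String) extends UserError
case class NotFound(msg: String) extends UserError

// each error variant maps to its own status code via oneOf
val createUserWithErrors: PublicEndpoint[String, UserError, Unit, Any] =
  endpoint.post
    .in("users")
    .in(stringBody)
    .errorOut(
      oneOf[UserError](
        oneOfVariant(StatusCode.Conflict, stringBody.map(Conflict(_))(_.msg)),
        oneOfVariant(StatusCode.NotFound, stringBody.map(NotFound(_))(_.msg))
      )
    )
```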

For input format validation errors, such as a query parameter that is supposed to be a number but where the user provided a string, there's the DecodeFailureHandler. This interceptor plays a role similar to Akka's RejectionHandler. It is defined globally (as part of the interpreter configuration) and specifies what should happen when an input can't be decoded successfully. Furthermore, primitive data types can be validated in a stateless way; validation errors are also handled by the decode failure handler.

Finally, there's an ExceptionHandler that plays an identical role to its counterpart in Akka HTTP. We even have some more information—such as the endpoint that caused the exception—which might provide better diagnostics.

Security

To implement security in an Akka HTTP application, quite often custom directives are used. These will need to be migrated as well. Tapir has a different mechanism, which allows extracting some common logic into a "base" secured endpoint and then refining this description to define other endpoints.

It's not as convenient as with Akka—where we could just define a top-level directive, which extracts some data from the request, possibly performing a lookup in the database—however, tapir is in some ways constrained by the requirement for the endpoint description to be fully static.

To define security-related inputs, tapir has a dedicated section in the endpoint description. It can contain both regular inputs (such as a path prefix) and security-specific inputs, such as an Authorization header input.

Then, partial security logic, which maps the authentication inputs into some application-specific value (such as a User), can be provided using the .serverSecurityLogic function. The docs contain more information on this subject as well as some examples.
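A sketch of this pattern (the User type, the token value, and the authenticate function are assumptions made for illustration; a real implementation might look up the token in a database):

```scala
import sttp.tapir._
import sttp.model.StatusCode
import scala.concurrent.Future

// hypothetical application-specific type
case class User(name: String)

// hypothetical token verification function
def authenticate(token: String): Future[Either[Unit, User]] =
  Future.successful(
    if (token == "secret") Right(User("alice")) else Left(())
  )

// "base" secured endpoint: a bearer-token input plus partial security logic
val secureEndpoint = endpoint
  .securityIn(auth.bearer[String]())
  .errorOut(statusCode(StatusCode.Unauthorized))
  .serverSecurityLogic(authenticate)

// refine the base description to define a concrete endpoint
val whoAmI = secureEndpoint.get
  .in("me")
  .out(stringBody)
  .serverLogicSuccess(user => _ => Future.successful(s"Hello, ${user.name}!"))
```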

Interceptors

When migrating directives that implement cross-cutting concerns, tapir's interceptors might be useful. A number of interceptors are available out-of-the-box, such as ones providing logging, metrics, exception handling or CORS functionality. Custom ones can be implemented as well.

While not as general as Akka directives, the interceptors can plug into the process of handling a request, either at the request level (being called once per request), or at the endpoint level (being called once per endpoint, when decoding the request failed or succeeded).

Streaming

Special care must be taken when migrating code that uses non-blocking, "reactive" streams. During the gradual migration, you can define tapir endpoints that use request/response Akka Streams-based streaming bodies.
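For example (a sketch, assuming the Akka HTTP server module which provides the AkkaStreams capability; the path and content type are made up), a streaming response body can be described like this:

```scala
import akka.stream.scaladsl.Source
import akka.util.ByteString
import sttp.capabilities.akka.AkkaStreams
import sttp.tapir._

// GET /stream with a streaming, text/plain response body backed by
// an Akka Streams Source[ByteString, Any]
val streamingEndpoint: PublicEndpoint[Unit, Unit, Source[ByteString, Any], AkkaStreams] =
  endpoint.get
    .in("stream")
    .out(streamTextBody(AkkaStreams)(CodecFormat.TextPlain()))
```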

However, this leaves us with endpoints that still depend on a fragment of the Akka stack. Hence, as a next step, you might want to replace those streaming bodies with something else.

One option is to use fs2 or zio-streams. However, this also means that for the server logic, you'll have to use IO (from cats-effect) or ZIO, instead of Scala's Future, as only such combinations are supported out-of-the-box.

As for the interpreter support, currently the Netty one doesn't support streaming at all (but we do have plans to change that). The Armeria backend works with any reactive streams implementation, using the interfaces defined in the spec directly. The Vert.X interpreters support streaming, however, only in their cats-effect and zio versions.

That being said, nothing prohibits creating an interpreter that would be Future-based and support fs2 streams, for example. However, while all the components to assemble such an interpreter are out there, creating one would require some additional work.

Step 3: use a different interpreter

Once you have fully (or almost fully) migrated your endpoints to tapir, you can use a different interpreter instead of Akka HTTP, hence removing that dependency. Tapir currently comes with four interpreters that support scala.concurrent.Future.

We've seen one already—the Akka HTTP one. The others are:

- Netty
- Vert.X
- Armeria

While the first one (Netty) is the most straightforward to use, it's also the youngest, and still under development. The Vert.X and Armeria ones are more mature; however, they come with their own dependencies.

The good news is that if there's some endpoint you couldn't reimplement using tapir's API, you'll still be able to define it using the "native" APIs of Vert.X or Armeria, or by simply providing a ServerRequest => Future[Option[ServerResponse]] function to the Netty one.

For example, here's our /hello endpoint interpreted using the Netty interpreter:

import sttp.tapir._
import sttp.tapir.server.netty.NettyFutureServer

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration
import scala.io.StdIn

object HelloWorldNettyServer extends App {
  val helloWorld: PublicEndpoint[String, Unit, String, Any] =
    endpoint.get.in("hello").in(query[String]("name")).out(stringBody)

  val helloWorldServerEndpoint = helloWorld
    .serverLogicSuccess(name => Future.successful(s"Hello, $name!"))

  // start the server, wait for enter, then stop it
  val bind = NettyFutureServer()
    .addEndpoint(helloWorldServerEndpoint)
    .start()

  StdIn.readLine()
  Await.result(bind.flatMap(_.stop()), Duration.Inf)
}

Optional: get help

If you're stuck when migrating some directive or route from Akka HTTP into tapir or one of the alternative servers (Vert.X, Armeria), or if you are struggling with how to use a given tapir feature, there are a couple of options.

First of all, tapir has quite comprehensive documentation, which contains deep dives into a number of topics, ranging from how tapir works to implementing security, web sockets, and supporting various data types. There are also a number of examples in the source code repository.

Secondly, there are a number of people on tapir's Gitter channel where you can ask your questions. Alternatively, many people use GitHub Issues to report situations where tapir doesn't work as they would expect.

Finally, our company, SoftwareMill, offers development and consulting services. We've participated in projects using probably every Scala stack in existence, so chances are high that we'll be able to help in your case as well.

Develop your next feature with tapir!

Akka has some really great technology that has been developed and supported by Lightbend for a long time. I think we're all grateful for their work. But whether we like it or not (I definitely don't!), open-source Akka came to an abrupt end. Unfortunately, we have no other choice but to accept the business-motivated decisions of Akka's maintainers, and in some situations, part ways with that stack.

Luckily, we've also had the opportunity to learn a lot while using Akka, and we can now leverage that knowledge to take the next step. Tapir might have some answers if you are looking for a migration path away from Akka HTTP; hopefully, you'll find that it works well for your use case!

And as an added bonus: tapir fully supports Scala 3 and is cross-built on all supported platforms (JVM, JS, Native). If you'd like to explore, generate a project on adopt-tapir!

Check: What to do with your End Of Life Akka?
