Lagom is approaching end of life: possible migration paths

Adam Warski

27 May 2024 · 9 minutes read


Lagom, an opinionated microservices framework, is reaching end-of-life on July 1st, 2024. Beyond this date, no additional security or bugfix patches will be provided. Quite naturally, this raises questions about production applications that are based on the framework; keeping software up-to-date is first and foremost a good security practice, but it also increases its maintainability, e.g., when it comes to introducing new features.

For these and other reasons, end-of-life software ultimately needs to be replaced, which in the case of Lagom raises the questions: what should the replacement be? How should the migration path be architected? What kind of options do we have?

No quick solutions

Lagom is an opinionated framework without a clear successor, hence unfortunately there are no quick migration options. Whichever path you choose, it's going to involve rewriting at least some of the code (save for forking, see below). This means that migrating away from Lagom will be a time- and resource-consuming process, which makes careful planning of the migration, and of the resources it will involve, all the more essential.

Forking the framework

Lagom is open-source, which means that creating and maintaining a fork is always on the table. That's what happened when Akka, the backbone of Lagom, changed its license to a source-available one. The Pekko fork was created and is now successfully maintained by the community as an Apache project.

However, in the case of Lagom, there doesn't seem to be any initiative within the community to do the same. And maintaining a fork, while allowing you to continue using the framework without code changes, is itself time and resource-consuming.

If you're heavily invested in Lagom, that is, if Lagom is the backbone of your business with a large number of services using it, then creating a fork is definitely a viable option, at least to prolong the transition window. Otherwise, it might prove as costly as the other migration paths, which may well have lower future costs.

Survey your systems

Before considering specific migration options, it's good to survey the systems at hand. Since we are probably talking about production systems, chances are high that you know quite a lot about the way they are used, and about which functionalities work well and which could use improvements, knowledge that wasn't available at the initial design & implementation stage.

What are the traffic patterns for the services involved? Are these systems data-heavy or compute-heavy? Is the load consistently high, or are there traffic spikes? Which parts of the system are most used? Are there any services that should be split, or maybe the current topology is too fine-grained, and they should be merged?

When considering migration options, it's probably best to do a case-by-case analysis of each system. Chances are high that different services might end up with different migration targets. Just because a single framework was used when these services were implemented, doesn't mean that it's still the best answer; you not only have the usage data at hand, but the technology landscape has changed as well.

Migration targets: consolidating the tech stack?

Constraints often make decisions easier: if we know that we can only choose from a limited subset, we can't bikeshed that much, or go back and forth on which technology to pick from a virtually unlimited list. Additionally, there are often a couple of good, equivalent technological choices!

Hence, consider your migration targets. Firstly, what are the other technologies that are used in your organization? Such a migration might be a good occasion to consolidate your tech stack. This might mean migrating away from libraries, frameworks or languages altogether, or simply adopting a given approach for any new development.

Secondly, what are the strengths of your team? Maybe they have experience with certain tools and libraries, which they can leverage when migrating the services. A good developer can pick up a new technology pretty fast, but if the knowledge is already there, why not use it!

Since Lagom offers APIs in Java and Scala, the first choice to make here would be whether to remain in the Java / Scala ecosystem. Microservice architectures do allow for polyglot deployments; however, too many languages in an organization are often troublesome. Again, there might be an opportunity for consolidation. Secondly, this might also be a great chance to introduce updated language versions, to keep up to date in that respect as well. Both Java 21 and Scala 3 are available as LTS versions.

How to start?

This might sound trivial, but if possible, always start small. Pick a small, low-impact service to migrate first. You'll learn a lot along the way and be ready to tackle the higher-impact, more complex services. You'll also become familiar with whatever migration target you picked. Big bang releases rarely work well, so the small-step methodology should be the safer choice.

Composing a new service into a Lagom system

Now it's time for the good news. Lagom is architected in a way that makes the co-existence of Lagom-built and "other" services not only possible, but almost encouraged. That's achieved by relying on open standards, open-source software and industry best practices. Starting at the high level, Lagom services should be deployed to platforms such as Kubernetes, OpenShift, or cloud-managed infrastructure. Any new service should thus be deployed in the same way.

Secondly, for inter-service communication, Lagom uses either HTTP calls (synchronous, request-response) or WebSockets (streaming), serialized using JSON or Protobuf. These are very standard and "safe" choices. Asynchronous communication in Lagom is realized using Kafka, which I think can also be called the de-facto industry standard when it comes to messaging at scale. Hence, whatever your migration target, you're sure to find libraries to communicate using these protocols.

Kafka especially might be a valuable component here. If you do have Kafka deployed, and even if you haven't been using Kafka for asynchronous communication in the migrated service before, it might be useful to leverage it after the migration. It might be a great way to communicate between the migrated service and the rest of the system, which is still Lagom-powered. Note that this might also require changes in existing codebases. Such piecemeal migrations might as a result require some intermediate code changes—increasing the costs, but also decreasing the risks.
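
To make the bridge idea concrete, the migrated service and the remaining Lagom services could agree on a small, versioned JSON envelope published to a shared topic. The sketch below is illustrative only: the topic name, field names, and event shape are all hypothetical, and in practice you'd use a JSON library and the Kafka client of your target stack rather than hand-rolled serialization.

```java
// A minimal, hand-rolled event envelope for a hypothetical Kafka "bridge"
// topic between migrated and Lagom-based services. All names are illustrative.
public class BridgeEnvelope {
    public static final String TOPIC = "service-bridge-events"; // hypothetical topic name

    final String eventType;   // e.g. "OrderPlaced"
    final int schemaVersion;  // bump on incompatible payload changes
    final String payloadJson; // the event body, already serialized

    BridgeEnvelope(String eventType, int schemaVersion, String payloadJson) {
        this.eventType = eventType;
        this.schemaVersion = schemaVersion;
        this.payloadJson = payloadJson;
    }

    // Serialize to the JSON that would become the Kafka record value.
    String toJson() {
        return String.format(
            "{\"eventType\":\"%s\",\"schemaVersion\":%d,\"payload\":%s}",
            eventType, schemaVersion, payloadJson);
    }

    public static void main(String[] args) {
        BridgeEnvelope e = new BridgeEnvelope("OrderPlaced", 1, "{\"orderId\":\"42\"}");
        System.out.println(e.toJson());
    }
}
```

Carrying an explicit schema version in every message lets the Lagom side and the migrated side evolve their payloads independently during the transition period.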

Finally, Lagom encourages the usage of third-party service locators (such as Consul or etcd). But even if you use the Lagom-provided one, you can register your own services there. Note that the steps of migrating services should be separate from migrating the service locator, if that's required as well.

Migrating service descriptors

Lagom uses its own service description language, expressed as Java or Scala code. Quite obviously, after migrating this won't be available. Hence you might want to choose a different form of describing endpoints.

When it comes to Scala services, we of course recommend our own tapir library for defining, documenting with OpenAPI and exposing HTTP or WebSocket endpoints. In the case of tapir, you also use code (Scala) to create a type-safe description of an endpoint.

Other options include writing down OpenAPI or AsyncAPI schemas by hand and generating service stubs based on them. Another route would be gRPC specifications with HTTP annotations, which have the benefit of exposing the service using both HTTP and protobuf protocols. Finally, you might use a service definition language such as Smithy.
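
For illustration, a hand-written OpenAPI fragment for a single endpoint might look as follows; the path, fields, and service name are of course hypothetical, and generators can produce server stubs and clients from such a schema:

```yaml
# Hypothetical OpenAPI 3.0 fragment describing one endpoint of a migrated service.
openapi: "3.0.3"
info:
  title: Orders service
  version: "1.0.0"
paths:
  /orders/{orderId}:
    get:
      summary: Fetch a single order
      parameters:
        - name: orderId
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: The order
          content:
            application/json:
              schema:
                type: object
                properties:
                  orderId:
                    type: string
                  status:
                    type: string
        "404":
          description: Order not found
```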

Into the cloud

While we have so far discussed migrating to a self-hosted service, it's worth keeping in mind that a hosted variant is possible as well. Kalix is an as-a-service evolution of the Lagom framework. It offers a programming model that might feel familiar if you've been developing with Lagom, albeit with different APIs. It's available in the cloud, billed based on usage.

Kalix's data model is centered around entities, which might be CRUD-like, event-sourced, or replicated (using CRDTs). Hence, this is an even richer choice than what's available in Lagom by default. While there are some restrictions on how you can query the data, Kalix is designed with high-throughput, low-latency use-cases in mind. Moreover, in addition to the service abstraction known from Lagom, it offers long-running workflows.

One challenge might be integrating Kalix-based services into a wider system with self-hosted services, especially if the cloud providers used for both are different (this might matter e.g. because of latency and data transfer costs). However, once again, all interfaces and communication exposed by Kalix are standards-based (using protobuf or HTTP). Such integration should then be relatively easy to achieve.

Migrating a service

That's probably the crux of the whole migration: how to migrate the services themselves. However, there are as many answers as there are services. The easiest case is services which weren't using any of Lagom's clustering capabilities and were written using the framework simply to keep the tech stack consistent. Hence, if we have a service which exposes a CRUD-like REST or HTTP API, we can in fact use any of the leading libraries or frameworks, depending on our target tech stack.
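
To show just how little such a service actually depends on, here is a bare-bones HTTP endpoint using only the JDK's built-in com.sun.net.httpserver package; the path and response body are illustrative, and in practice you'd pick a full-featured framework from your target stack:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// A minimal, framework-free HTTP endpoint, to illustrate that a CRUD-like
// service has no hard dependency on Lagom. Path and response are illustrative.
public class MinimalService {
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/health", exchange -> {
            byte[] body = "{\"status\":\"ok\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = start(0); // ephemeral port, for demo purposes
        System.out.println("Listening on port " + server.getAddress().getPort());
        server.stop(0);
    }
}
```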

Of course, things get more complicated with services which use the event sourcing/CQRS pattern. Here, you have to consider: did event sourcing, with the tradeoffs it brings, carry its weight? If so, great: the best option will be to lower the level of abstraction by one degree and use Akka or Pekko Persistence directly, which is what Lagom uses behind the scenes. Once again, the programming model should feel familiar, with somewhat different APIs.

If you decide to leave the event sourcing approach, the good news is that because the whole history is there, it should be relatively easy to populate the new data storage (which, maybe, is a good argument to keep using event sourcing?). Note that you might also use alternative data storage options. While the default for Lagom is Cassandra, Akka/Pekko Persistence offer alternative drivers, e.g. for PostgreSQL. Or you might just go directly with a traditional relational database, self-hosted or one of the scalable, managed cloud options. Some middle ground between NoSQL-event sourcing and no-history SQL might be to use SQL-based event sourcing.
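
To illustrate why a preserved history makes this migration feasible, here is a minimal, framework-free sketch of replaying an event log to rebuild current state, which could then be written to the new storage. The event and state shapes are hypothetical:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A framework-free sketch of event replay: folding a stored event history
// into current state, which can then be inserted into a new (e.g. relational)
// data store. Event and state shapes are illustrative.
public class AccountReplay {
    sealed interface Event permits Deposited, Withdrawn {}
    record Deposited(String accountId, long amount) implements Event {}
    record Withdrawn(String accountId, long amount) implements Event {}

    // Rebuild per-account balances by folding over the full event history.
    static Map<String, Long> replay(List<Event> history) {
        Map<String, Long> balances = new HashMap<>();
        for (Event e : history) {
            if (e instanceof Deposited d) {
                balances.merge(d.accountId(), d.amount(), Long::sum);
            } else if (e instanceof Withdrawn w) {
                balances.merge(w.accountId(), -w.amount(), Long::sum);
            }
        }
        return balances;
    }

    public static void main(String[] args) {
        var history = List.<Event>of(
            new Deposited("acc-1", 100),
            new Withdrawn("acc-1", 30),
            new Deposited("acc-2", 50));
        // Balances after replay: acc-1 -> 70, acc-2 -> 50
        System.out.println(replay(history));
    }
}
```

The same fold, run once over each entity's history, is all that's needed to populate a traditional table with the current state before switching the service over.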

To cluster, or not to cluster

Many of Lagom's functionalities rely on the services being deployed as part of a cluster. These include scalability and resiliency, where multiple copies of a service are deployed on many nodes, and service discovery routes requests appropriately. Most importantly, clustering is required for persistence (when using event sourcing), as well as for pub-sub.

If you don't use event sourcing, but have only used Lagom's clustering for scalability and resilience, you might want to reconsider whether you need this additional clustering layer. If you deploy to service orchestrators such as Kubernetes, they have their own clustering layer providing similar functionality, such as cluster singletons or load balancing of incoming requests. Each clustering component that you can take out of your deployment is a win, as it simplifies operations.
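
As an illustration of orchestrator-provided scaling, a plain Kubernetes Deployment already runs multiple replicas behind a Service that load-balances requests, with no application-level cluster involved. The names and image below are hypothetical:

```yaml
# Hypothetical Kubernetes manifest: the orchestrator, not an application-level
# cluster, provides replication and load balancing for the migrated service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
spec:
  replicas: 3   # resilience & scalability without an Akka/Pekko cluster
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: orders-service
          image: example.com/orders-service:1.0.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: orders-service
spec:
  selector:
    app: orders-service
  ports:
    - port: 80
      targetPort: 8080
```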

In fact, one of the few alternatives to Akka/Pekko cluster sharding, Shardcake, uses Kubernetes-provided clustering services instead of a custom cluster. This also makes it a candidate to consider when migrating event-sourced services.

Summing up

Migrating a fleet of Lagom services will almost certainly involve a rewrite of the code, save for forking the framework itself. However, you might view such a migration as an opportunity to consolidate or upgrade your tech stack, or better slice the system's functionalities into individual services.

At SoftwareMill, we provide technical expertise at all of the levels mentioned above: starting with the Java and Scala languages and their ecosystems, through architecture and distributed systems consulting, to Kubernetes and cloud deployments. We are well known for the level of attention to detail in each project that we work on, making us an ideal partner to guide organizations through migrations such as this one. Let's talk so that we can provide our no-frills assessment of what migrating away from Lagom might involve in your organization!
