First look at Akka Serverless

Akka Serverless exposes part of the open-source, battle-tested Akka framework as an as-a-service offering. If you've ever wondered what a fully managed version of Akka Cluster+Sharding+Persistence would look like, here's the answer! If you don't know what Akka is at all, don't worry — we'll cover all the necessary details.

Akka Serverless is a managed runtime where you can run your (micro)services and expose them to the outside world. Similarly to other serverless offerings, you don't have to worry about scaling, concurrency, provisioning servers, etc. However, the main feature of the system is bringing together code and state.

It's a different model compared to what we're dealing with using "traditional", stateless serverless. There, we've got separate application and database tiers with explicit communication between the two. In such a setup, whenever your function starts up, you do need to worry e.g. about establishing database connections. Furthermore, the database needs to be explicitly queried to fetch existing or persist new data. This adds both complexity and latency.

That's where Akka Serverless differs. With stateful serverless, we're taking the next step when it comes to managed services. The lifecycle of both the code and the data is now handled externally. As a result, we can design scalable, resilient, and performant services, without the overhead of having to deploy, maintain, and troubleshoot a cluster (which is always painful — whatever the technology at hand). These pains are now the responsibility of the Akka Serverless team. Additionally, we get a simple programming model and the possibility to leverage the benefits of event sourcing.

However, these benefits come at the cost of some constraints in how flexible we are in accessing our data. Some tasks that have been hard become easy, but also some previously almost trivial tasks now require additional work. Let's take a more detailed look! Keep in mind that this is version 1.0 beta of Akka Serverless, so it's expected that some things might break or not yet be polished.

Programming model

The basic unit of deployment in Akka Serverless is a service written in any of the supported languages (there are official SDKs for Java and JavaScript, with more community SDKs available and more official ones coming). At the heart of each service, there's a single type of entity that the service manipulates. If you're familiar with DDD, the entity will most often be an aggregate root. Entities might, for example, be users or products.

The entity is a domain object that contains the data that our service deals with. Each entity is uniquely identified by some sort of an identifier, such as user_id or product_id.

While you could have multiple entity types in a service, the Akka Serverless documentation recommends that there's always a single one, as otherwise, routing requests in an optimal way wouldn't be possible. Since this is stated as a best practice, we won't even attempt to do otherwise.

Each request arriving at our service (or in serverless terms, each function invocation) should refer to a single entity. Hence, it should contain the unique entity identifier. For example, if we are storing user data in the service, the request should correspond to a single user and contain something like a user_id field in its data.

When interacting with individual entities, their state is always loaded into memory. Loading from and storing state in persistent storage is fully handled by Akka Serverless. At any given time, each entity is guaranteed to be present on at most one instance of our service. If an entity is needed but not yet present anywhere, Akka Serverless will first load its data (if any) and create the entity on some instance of our service. Symmetrically, when an entity is no longer needed (e.g. there were no requests for that particular entity instance for a long time), it is removed from memory (but its state is persistently stored).

By leveraging the entity id extracted from the request's data, Akka Serverless appropriately routes requests to entities. Hence when the request processing logic is invoked, we already have the entity's data at hand.

Moreover, the concurrency model guarantees that for a single entity, requests will be processed sequentially. Hence we don't have to worry about persistence, routing, and data sharding, nor about concurrency. That's a lot of concerns catered for behind the scenes!

However, we might also begin to notice some of the constraints mentioned at the beginning. Since each request has to correspond to a single entity, any logic that requires some sort of aggregate processing has to be modelled as a separate service, using separate entities to store the aggregate data. We'll look at this more closely later. This might be seen as an inconvenience, but on the other hand, it might be beneficial for domain modelling, as it forces the separation of individual-entity and aggregate logic into different services.

Technicalities

Before diving deeper into the internal components of an Akka Serverless service, let's take a look at some of the technical choices made, which will also set the stage for examples.

Serialisation

The serialisation and request format of choice in Akka Serverless is Protocol Buffers. By default, you are encouraged to model all of the incoming request data, as well as entity state, as protobuf messages. Moreover, entities and actions are exposed as gRPC services (here the term service is overloaded: a single Akka Serverless service typically contains multiple gRPC service definitions).

This provides high-performance, low-overhead serialisation and transport, as well as an opportunity to work with multiple languages. Theoretically, all that we need is a working Protobuf/gRPC library, which is present for most popular languages out there.

Packaging and language support

When deploying a service, we need to provide a Docker container. The only requirement is that whatever runs in the container speaks the Akka Serverless protocol. That said, dedicated Akka Serverless SDKs currently include the official Java and JavaScript ones. There are also community implementations for Go, Dart, and Kotlin, and I think we can expect more soon.

HTTP & JSON?

We've got protobufs and gRPC, but what about HTTP and JSON? Luckily, all services are by default exposed using both gRPC and HTTP interfaces. Each gRPC service corresponds to an HTTP endpoint which can be invoked using the POST method, with a JSON body corresponding to the protobuf message.

The HTTP endpoint paths can be customised, and some fields from the request body can be made part of the URL path, using gRPC HTTP transcoding annotations.
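For illustration, annotating a hypothetical Register method might look like this; the google.api.http option is the standard gRPC transcoding mechanism, while the path and names below are made up for the example:

rpc Register(RegisterUserCommand) returns (RegisterUserResult) {
  option (google.api.http) = {
    post: "/users/{email}/register"
    body: "*"
  };
}

Fields referenced in the path pattern (here, email) are taken from the request message, and body: "*" maps the remaining fields to the JSON body.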

Persistent storage

The whole point of Akka Serverless is that you don't have to deal with persistence yourself! It's not something you can influence, but for the curious: behind the scenes, Google Spanner is used. This is visible in the deployment logs when creating a new service.

State models

Let's go back to the programming model that Akka Serverless offers. When it comes to storing the state of entities, we have two choices.

We can go with a more traditional, CRUD-like approach, where each entity has an associated state that can be modified by requests. These are value entities: basically a big, scalable key-value store. The keys are entity ids and the values are arbitrary. During processing, each request receives the optional current state (which will be empty only if the entity is brand new). As a result of processing the request, the state might be updated to a new value. Akka Serverless guarantees that before processing subsequent requests, the state will be persistently stored.

Another option is event-sourced entities, where we work with commands and events. Each incoming request is a command. A command handler can read the current in-memory state to validate the request and, as a result of its logic, emit zero, one, or more events. No state should be written in a command handler, as any such state updates will be lost when recreating the entity from events.

Separately, in reaction to the emitted events, event handlers are run — that's where the in-memory state can be written. The event handlers are also invoked when the entity is being reconstructed from persistent storage, before it handles any requests (if an entity was removed from memory, if the service has been redeployed, or if there was some sort of infrastructure failure). Hence, the event handlers shouldn't have any side-effects and deal only with updating the internal state.

As an optimization, state snapshots can be stored so that not all events have to be replayed when reconstructing the entity.

We won't dive into the details of the benefits of event sourcing, but when using this approach, you not only get an audit log for free. You also have the possibility to update the code that creates the internal entity state at any time (which is usually some form of aggregation/projection of all the events), run historical queries, and stream the changes to other services in your system.

Actions

Apart from the entity, which is the main component of a service, there are other components we can use in our implementation.

First, we've got actions which are stateless functions. Actions can be invoked directly, transforming the incoming data and forwarding the request to another component (such as another action or an entity).

Actions can also be invoked when there's a state change of a value entity or a new event in an event-sourced entity's journal. This allows us to perform some side-effects (such as calling an external API, sending an email or pushing data to a queue). Akka Serverless guarantees that an action will be invoked at least once for each state change and for each event.

Note that when an action is invoked by a state change or an event, this will happen asynchronously — after the initial request (which created the event or triggered the state change) completes. This means that we're dealing with eventual consistency. As mentioned when discussing the concurrency model, only the internal state of a single entity is fully consistent.

Effects

Speaking of side effects, these can be run after the processing of an action or entity request completes, either synchronously or asynchronously. However, such effects are not guaranteed to execute. It is possible that a state change will be persisted, but an effect will fail to run (because of a bug in the effect implementation or because of an infrastructure failure). If the effect was synchronous, its failure will be propagated to the caller. Hence the caller might see an error, but the state changes will be persisted anyway.
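For illustration, attaching an effect to a reply might look roughly like the sketch below. This assumes the Java SDK's Effect helper follows its Cloudstate lineage; the Effect.of and addEffects names, as well as the welcomeEmailRef service call reference, are assumptions rather than verified API:

// hedged sketch, inside a command handler; Effect.of / addEffects are
// assumed from the SDK's Cloudstate lineage, and welcomeEmailRef is a
// hypothetical ServiceCallRef looked up earlier
return Reply.message(result)
  .addEffects(Effect.of(
    welcomeEmailRef.createCall(welcomeMsg), // the effect's target call
    false)); // false: run the effect asynchronously, after replying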

If we do need processing guarantees, we have to resort to actions that listen for entity changes, or to message queues, which are discussed below.

Views

Views allow creating projections of an entity's data. The data source for a view might be the state changes of a value entity or the events of an event-sourced entity. An event might be transformed into the view model, given an optional current view state — similarly to when processing requests for value entities.

Note that each view row corresponds to an individual entity. The primary identifier is still the entity id. We cannot run any kind of aggregations on the data. However, views do give us the opportunity to query for data based on other attributes than the id — in fact, that's the whole point of creating a view in the first place.

The queries must be defined upfront using basic SQL syntax. We can specify the columns to return in the SELECT clause as well as basic filters using comparison operators on columns and scalar values in the WHERE clause. The indexes for the view are inferred from the queries that it supports.

A query can return multiple values or a stream of values. In its simplest form, to create a view, you only need the gRPC service definition with query annotations.
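To make this concrete, here's a hedged sketch of what a view query could look like in proto, loosely following the annotation style used elsewhere in this post (treat the view.query annotation and the message names as approximations, and assume a users view table populated by a corresponding update handler):

service UsersByEmailView {
  rpc GetUsers(GetUsersByEmailRequest) returns (stream UserState) {
    option (akkaserverless.method).view.query = {
      query: "SELECT * FROM users WHERE email = :email"
    };
  }
}

The :email parameter is bound to the field of the same name in the request message.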

Views, like actions, are updated asynchronously, hence we are once again dealing with eventual consistency.

If you do need data aggregation, you'll have to create a dedicated aggregation service and send the data to be aggregated between the two services. Let's see what such communication looks like.

Inter-service communication

To communicate between services, which might include both Akka Serverless services and external ones, we can leverage message queues. Currently, that's limited to Google Pub/Sub, as that's where Akka Serverless itself is deployed. We can publish the state changes of a value entity (possibly transformed using an action), or all events emitted by an event-sourced entity. We can also subscribe to messages coming in on a Pub/Sub topic.

The Pub/Sub topic has to be created externally in the GCloud console and appropriate access needs to be given to Akka Serverless to consume and/or publish to the topic.
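For reference, the topic setup might look something like this with the gcloud CLI; the service account member below is a placeholder for the account that Akka Serverless provides:

# create the topic used by the emails service
gcloud pubsub topics create emails-assigned

# let the Akka Serverless service account publish to and consume from it
gcloud pubsub topics add-iam-policy-binding emails-assigned \
  --member="serviceAccount:<akka-serverless-service-account>" \
  --role="roles/pubsub.editor"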

Using asynchronous message queues isn't just the preferred way of communicating between services; apart from making a plain HTTP or gRPC call, it's the only one. There is no dedicated mechanism for calling another service synchronously.

While we do have the fallback method of using gRPC invocations to call other services, and while in general it is preferable to communicate between services asynchronously, in some situations, it would be useful to have an option to synchronously call another service. Use cases might include, for example, checking preconditions when an entity is invoked, such as authenticating a user, verifying a token or role-based authorization.

The topic can be consumed by internal and external services; and the other way round — the data on the topic can be created by an external system.

Case study: user registration

Using the components of Akka Serverless, let's see how we can approach building a user registration system. The source code can be found on GitHub. There are two main requirements that our solution needs to meet:

  1. only a single user with a given email can register
  2. we need to store an audit log of all changes made to a particular user

The first requirement is especially tricky in any eventually-consistent system: how do we ensure the uniqueness of data? If there are two concurrent user registration processes coming in with the same email, one of them has to fail. Since the only islands of full consistency and serialisation in Akka Serverless are individual entities, we'll have to create an entity where the id (entity key) is the email in question (we'll gloss over email normalisation, such as lowercasing, which can be implemented using an action). We'll use an emails value entity for this purpose.

However, a single user might change their email later. Since we want to capture this in our audit log (along with any other changes), we'll need a separate entity, keyed by an artificially generated user id. We'll use a users event-sourced entity for that purpose.

We'll also need to transfer the data from the emails entity to the users one. For this purpose, we'll use a Pub/Sub topic. We'll also deal with niceties such as sending a welcome email using an action.

To implement the example, we'll use the official Java SDK. It comes with a Maven archetype, which creates a build file with auto-generation of protobuf models, as well as generation of skeletons for some service interfaces.
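Bootstrapping a new project looks roughly like this (the archetype coordinates below are what I'd expect given the SDK's group id; double-check them against the current documentation):

mvn archetype:generate \
  -DarchetypeGroupId=com.akkaserverless \
  -DarchetypeArtifactId=akkaserverless-maven-archetype \
  -DarchetypeVersion=LATEST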

Incoming user registrations

The entrypoint to our system will be a RegisterUserCommand, which includes an email and the user's password. Note that we need to explicitly state which field in the command identifies the entity (here it's the email) so that Akka Serverless can route the request appropriately:

message RegisterUserCommand {
  string email = 1 [(akkaserverless.field).entity_key = true];
  string password = 2;
}

This command will be handled by the EmailsService, which implements the logic for the value entity and stores the state as EmailsState. Here are the protobuf definitions:

// entity state
package com.softwaremill.test.domain;

option (akkaserverless.file).value_entity = {
  name: "Emails"
  entity_type: "emails"
  state: "EmailsState"
}; 

message EmailsState {
  string user_id = 1;
  string password_salt = 2;
  string password_hash = 3;
}

// entity service
package com.softwaremill.test;

message RegisterUserResult {
  string user_id = 1;
}

service EmailsService {
  option (akkaserverless.service) = {
    type: SERVICE_TYPE_ENTITY
    component: ".domain.Emails" // references the above message
  };

  rpc Register(RegisterUserCommand) returns (RegisterUserResult);
}

The annotations on the Protobuf messages and gRPC service allow auto-generating a skeleton for the value entity. Here's a somewhat abbreviated implementation of the Emails entity register logic:

@ValueEntity(entityType = "emails")
public class Emails extends AbstractEmails {
  private final String email;

  public Emails(@EntityId String email) {
    this.email = email;
  }

  @Override
  public Reply<EmailsApi.RegisterUserResult> register(
    EmailsApi.RegisterUserCommand cmd, 
    CommandContext<EmailsDomain.EmailsState> ctx) {

    var current = ctx.getState();
    if (current.isPresent() && 
        !current.get().getUserId().isEmpty()) {
      throw ctx.fail("Email is already taken");
    } else {
      // generating the artificial user id
      String userId = UUID.randomUUID().toString();

      // hashing the password
      String passwordSalt = ...;
      String passwordHash = ...;

      // updating the state
      ctx.updateState(EmailsDomain.EmailsState.newBuilder()
        .setUserId(userId)
        .setPasswordSalt(passwordSalt)
        .setPasswordHash(passwordHash)
        .build());

      // returning the generated id
      return Reply.message(
        EmailsApi.RegisterUserResult.newBuilder()
          .setUserId(userId)
          .build());
    }
  }
}

Note how we are reading the current state of the value entity from the call's context and then using the same context to persist the updated state. The result returned to the caller is the id of the new user. When calling through HTTP, this will be serialised as JSON and when calling through gRPC, we'll get back a binary protobuf message.

You'll probably also notice the use of Java annotations, such as @EntityId and @ValueEntity. Coming from frameworks such as Spring or JEE, you might expect classpath scanning or auto-discovery, but that's not what happens. We need to register each entity, action, and view in the generated Main class of our service. The annotations are used only on the explicitly provided classes to read the required metadata (hence, the use of annotations here is not all that bad). Since the registration code for value entities is generated, this is already done for us:

akkaServerless
  .registerValueEntity(
    Emails.class,
    EmailsApi.getDescriptor()
      .findServiceByName("EmailsService"),
    EmailsDomain.getDescriptor()
);

We can then package the service as a docker container, publish it to a public or private registry (in the latter case, we'll need to provide Akka Serverless with the appropriate access rights so that it can pull the image), and deploy. When deploying an Akka Serverless service, we can optionally create a publicly available route to the service (which we want in this case). After a couple of seconds, we'll have two instances running our service, and we can verify that it works using e.g. curl:

curl \
  -H "Content-Type: application/json" \
  -d '{"email": "test@example.com", "password": "01234"}' \
  https://(domain)/com.softwaremill.test.EmailsService/Register

This uses the HTTP endpoint that is generated by default, but the path can be customised if needed.

But that's not all. As a second step, we want to push the data to the event-sourced entity so that any future changes to the user (with the id that we've just generated) are captured as events. That's why we publish any changes made to the value entity to a Pub/Sub topic. We define an action, which is invoked by the stream of value entity changes. Here's the proto definition:

service EmailsPublishingService {
  rpc PublishEmailAssigned(domain.EmailsState) returns (EmailAssignedMessage) {
    option (akkaserverless.method).eventing = {
      in: {
        value_entity: "emails";
      }
      out: {
        topic: "emails-assigned";
      }
    };
  }
}

message EmailAssignedMessage {
  string user_id = 1;
  string email = 2;
  string password_salt = 3;
  string password_hash = 4;
}

And the Java implementation:

@Action
public class PublishingAction {
    @Handler
    public EmailsPublishing.EmailAssignedMessage publishEmailAssigned(
        EmailsDomain.EmailsState ev, ActionContext ctx) {

        String email = ctx.eventSubject().get();
        return EmailsPublishing.EmailAssignedMessage.newBuilder()
                .setUserId(ev.getUserId())
                .setEmail(email)
                .setPasswordSalt(ev.getPasswordSalt())
                .setPasswordHash(ev.getPasswordHash())
                .build();
    }
}

As you can see, the action performs a simple data transformation from messages in the EmailsState format to messages in the EmailAssignedMessage format. Finally, we also need to register the action in the Main class:

akkaServerless
  .registerAction(
    PublishingAction.class,
    EmailsPublishing.getDescriptor()
      .findServiceByName("EmailsPublishingService")
  );

Storing user events

The second service that we'll write will be centered around an event-sourced users entity keyed by the artificial user id, which is being generated by the previous step. We'll have a single event for now, but we could easily add other events to support other use cases (such as a UserEmailChanged event):

message UserCreated {
  string email = 1;
  string password_salt = 2;
  string password_hash = 3;
}

We'll also have two commands: one to create a user, and another one to authenticate a user (check the user's password). Here's the service definition:

message CreateUserCommand {
  string user_id = 1 [(akkaserverless.field).entity_key = true];
  string email = 2;
  string password_salt = 3;
  string password_hash = 4;
}

message AuthenticateUserCommand {
  string user_id = 1 [(akkaserverless.field).entity_key = true];
  string password = 2;
}

service UsersService {
  rpc Create(CreateUserCommand) returns (google.protobuf.Empty);
  rpc Authenticate(AuthenticateUserCommand) returns (google.protobuf.Empty);
}

Unlike for value entities, there's no code generation for event-sourced entities, so we'll have to handle this ourselves. The entity class needs to contain:

  • the internal state, as mutable instance variables
  • command handlers that can read but cannot write the state
  • event handlers that can update the state but shouldn't have side effects

The internal state here consists of the user id (the entity key), the email, hashed password, and salt. The create command handler emits a single event, while the authenticate command handler only returns a response. Finally, the userCreated event handler updates the internal state:

@EventSourcedEntity(entityType = "users")
public class UsersEntity {
  private final String userId;
  private String email;
  private byte[] passwordSalt;
  private byte[] passwordHash;

  public UsersEntity(@EntityId String userId) {
    this.userId = userId;
  }

  @CommandHandler
  public Empty create(UsersApi.CreateUserCommand cmd, 
    CommandContext ctx) {

    // emitting the event
    ctx.emit(UsersDomain.UserCreated.newBuilder()
      .setEmail(cmd.getEmail())
      .setPasswordSalt(cmd.getPasswordSalt())
      .setPasswordHash(cmd.getPasswordHash())
      .build());
    return Empty.getDefaultInstance();
  }

  @EventHandler
  public void userCreated(UsersDomain.UserCreated ev) {
    email = ev.getEmail();
    var decoder = Base64.getDecoder();
    passwordSalt = decoder.decode(ev.getPasswordSalt());
    passwordHash = decoder.decode(ev.getPasswordHash());
  }

  @CommandHandler
  public Empty authenticate(
    UsersApi.AuthenticateUserCommand cmd, CommandContext ctx) {

    // hashing the incoming password
    byte[] incomingPasswordHash = ... // using cmd.getPassword()

    if (MessageDigest.isEqual(passwordHash, incomingPasswordHash)) {
      return Empty.getDefaultInstance();
    } else {
      throw ctx.fail("Incorrect password");
    }
  }
}

Such a design of event-sourced entities requires discipline when dealing with the internal state, so that the command handlers never mutate it. The API in stand-alone Akka Persistence is better in this regard, as there we provide two functions (simplified): (State, Command) => List[Event] and (State, Event) => State. This removes mutable state and any possibility of state misuse. Hopefully, future versions of the SDK will offer APIs in this fashion.
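In Java terms, the wished-for shape would be something like the following pair of interfaces (purely hypothetical, just to spell out the signatures above):

import java.util.List;

// hypothetical: a command handler turns the current state and a command
// into a list of events, without mutating anything
interface CommandHandler<State, Command, Event> {
  List<Event> handle(State state, Command command);
}

// hypothetical: an event handler folds an event into a new, immutable state
interface EventHandler<State, Event> {
  State apply(State state, Event event);
}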

We also need an action that will invoke the create user command whenever there's an incoming message on the Pub/Sub topic (which is populated by the emails service). The action will adjust the format of the data and forward the call. First, the protobuf definition, with the same EmailAssignedMessage as before, as it is the structure of the data on the topic:

service UsersSubscribeService {
  rpc WhenEmailAssigned(EmailAssignedMessage) returns (google.protobuf.Empty) {
    option (akkaserverless.method).eventing.in = {
      topic: "emails-assigned"
    };
  }
}

message EmailAssignedMessage {
  string user_id = 1;
  string email = 2;
  string password_salt = 3;
  string password_hash = 4;
}

And the service implementation:

@Action
public class SubscribeAction {
  private final ServiceCallRef<UsersApi.CreateUserCommand> 
    createUserCommandRef;

  public SubscribeAction(Context ctx) {
    createUserCommandRef = ctx.serviceCallFactory()
      .lookup("com.softwaremill.test.UsersService",
        "Create", UsersApi.CreateUserCommand.class);
  }

  @Handler
  public Reply<Empty> whenEmailAssigned(
    UsersSubscribe.EmailAssignedMessage ev, ActionContext ctx) {

    return Reply.forward(createUserCommandRef.createCall(
      UsersApi.CreateUserCommand.newBuilder()
        .setUserId(ev.getUserId())
        .setEmail(ev.getEmail())
        .setPasswordSalt(ev.getPasswordSalt())
        .setPasswordHash(ev.getPasswordHash())
        .build()
      ));
  }
}

When the action is constructed, we cache the reference to the service call and, upon an incoming message, forward the call.

Unfortunately, we have to rely on string-based identifiers of the UsersService component to look up the appropriate gRPC service call. It would be much better to somehow reference this in code, especially since we are within a single service.

Finally, we can implement the action that will send welcome emails. This action is triggered by the events of the users entity. Once again, we use the (akkaserverless.method).eventing.in annotation to specify the stream of data that triggers a given component:

service UsersWelcomeEmailService {
  rpc Send(domain.UserCreated) returns (google.protobuf.Empty) {
    option (akkaserverless.method).eventing.in = {
      event_sourced_entity: "users"
    };
  }
}

The implementation of the action would have to somehow schedule an email for delivery. Very often, this means calling some API over HTTP. Akka Serverless doesn't provide any facilities for this; in fact, the IoT example from the official docs simply uses an HttpsURLConnection.

Depending on the volume of HTTP calls, this might be a performance bottleneck. It's not entirely clear how many instances of an action are created per service instance and what would be the guidelines for creating stateful resources, such as proper HTTP clients that would do thread and connection pooling.

This brings us to another point: all of the component implementations in the Java SDK are synchronous, in a thread-blocking sense. While this makes the SDK Loom-ready, it's a bit surprising that e.g. the actions don't allow returning even a CompletableFuture<Reply<T>>, especially as this comes from Lightbend, the “non-blocking” company. But that's a shortcoming of the SDK, not of the platform itself. I'm sure that the (hopefully!) upcoming Scala SDK will offer non-blocking, asynchronous APIs.

We've glossed over registering the components in the Main function, but this is done similarly to the previous example. Please refer to the source code for full details. The good news is that this is all done explicitly; there is no auto-discovery magic.

This concludes our example. We can deploy both services after publishing their images to a Docker repository. In my case, I've opted to go with a private Google Cloud Container Registry, as it integrates nicely with Akka Serverless. Everything can be done through the command line, using the akkasls utility.
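Deployment itself then boils down to a single command per service, along these lines (the subcommand syntax is from memory, so verify it against akkasls --help; the image coordinates are placeholders):

akkasls services deploy emails \
  gcr.io/<project-id>/emails-service:latest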

Testing

While we haven't included this in the example or discussed it at length, Akka Serverless gives you a couple of utilities that help with testing. You can run a single service locally, communicating with it using an Akka Serverless proxy, which can be started with an auto-generated docker-compose configuration. This is great for development and making sure you've got the basics right.
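Once the service and the proxy are up, you can exercise the service just like the deployed version, only against localhost (port 9000 is the proxy default I've seen in the samples; adjust if your configuration differs):

curl \
  -H "Content-Type: application/json" \
  -d '{"email": "test@example.com", "password": "01234"}' \
  http://localhost:9000/com.softwaremill.test.EmailsService/Register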

Such single-service integration tests can be automated using the provided testkit, which in turn relies on Testcontainers. In the tests, we can use the automatically generated gRPC clients to interact with our service.

Going forward, it would also be very useful to be able to run multiple services locally, both manually and in automated tests. The principles of microservices do state that they should be developed and tested in isolation. However, given the very fine-grained nature of the services, and the fact that a service houses exactly one entity type, it's quite probable that multiple Akka Serverless services together make up a single “microservice”. You could also easily imagine developing a normal application using Akka Serverless services where you just want to run all of them at once. For such a “stateful serverless monolith”, the testing utilities will have to be expanded.

What's missing and can be improved

Since we are dealing with a beta version of Akka Serverless 1.0, it's natural that some things are missing. These are also natural candidates for the future development of the platform. For example, while you can specify environment variables when deploying a service, there's no secret management of any kind. Another useful feature would be support for development/staging/production environments, so that services can be gradually promoted as development continues.

Security is another huge area for development. All created routes are currently exposed publicly — there's no way to restrict who can call a given service. This can, of course, be checked as a precondition in a command handler or action, but given the limited capabilities of synchronous inter-service communication, I'm not sure how practical this would be. Hence authentication and authorization, or at least the ability to deploy the service in some sort of protected environment (such as a VPC), would be great. For now, you might end up putting Akka Serverless services behind an API gateway or similar, which would handle some of the security concerns.

The documentation is generally well written and comprehensive; however, I was missing some information on the guarantees given by the platform. A dedicated section summarising this information would be great. This also applies to adding new components to an existing service, and how they are primed with data. For example, from what I've observed, when a new action triggered by the events of an event-sourced entity is added, it will initially receive all past events. This is sensible, but could use a mention in the docs. On the other hand, when we add a new consumer to a Pub/Sub topic (e.g. an action), it will only receive new messages; it won't consume any of the previously published ones. Again, probably a sensible choice, but it would be good to state this explicitly.

Speaking of implementing new actions and entities: this is always a multi-step process — we need to define the protobuf model and the gRPC service, run code generation, and finally implement the logic. And you'll end up creating a lot of entities, actions, and topics, as the programming model strongly encourages you to create microservices rather than midi- or macro-sized ones. Consider the task of updating multiple entities with a single command — the only reliable way of performing it is creating an action that publishes to a topic and having multiple services subscribe to that topic, transforming the data through actions and updating the entities appropriately.

I understand the universality of protobuf definitions; however, you often don't need that degree of flexibility and versatility. Moreover, there's a significant number of string-based references, which is just asking for stupid bugs. I'm probably biased, as I prefer to express as much as possible in code, preferably without repeating any information, but I would for sure welcome a faster and terser way of defining a service.

Finally, it's surprisingly easy to delete a service with all its data! I know this is a beta, but still, a dialog box where you have to write “I know what I'm doing” could be useful :).

Summing up

Akka Serverless gives you a couple of basic building blocks: value entities, event-sourced entities, actions, views, effects, and Pub/Sub topics, from which you can build scalable, resilient, data-centric applications. You don't have to worry about routing, sharding, caching, or clustering, and, to some degree, failure handling. Many operational concerns are taken over by the Akka Serverless runtime.

However, the building blocks we have at our disposal are somewhat constrained. There's a class of problems where Akka Serverless shines, but there's a class of problems where using the offered data and computation models will be more of a hindrance than help.

While a complete application probably couldn't be created using only Akka Serverless (yet), it's a great complement to our toolbox when creating a microservice-based application. If you're targeting Google's cloud, you will be able to leverage Akka Serverless soon, when it's out of beta (which should happen this year).

The Akka team has done a great job simplifying the implementation of services which co-locate data and code. Not only that: if you'd like to leverage the numerous modelling and performance benefits of event sourcing, for example, Akka Serverless might be the tool you've been looking for. The offering will in many cases remove the need for a custom Akka Cluster setup. You can, of course, still use the open-source, self-hosted version, but if all you need are persistent entities and event sourcing, Akka Serverless will surely be a more cost-effective solution.
