
5 New features in Akka (Streams) 2.5.4 you may have missed

UPDATED 2017-08-28: Corrections re AffinityPool in response to Zahari Dichev’s remarks.

Introduction

Akka, being by now a stable and mature platform, does not enjoy as much hype as it originally did, so a patch release can easily slip past developers’ attention.

However, the important thing to note is that Akka does not strictly follow semantic versioning, so new patch versions (e.g. from 2.5.0 to 2.5.1) may include new functionality.

This is particularly true of the recently released version 2.5.4. Despite being labeled a "security patch", it adds a considerable number of nice-to-haves, which I will highlight below.

5. A MergePrioritized Akka Streams Stage

As the description of the relevant pull request explains, the problem with the stage usually reached for when prioritizing inputs, MergePreferred, is that it’s strictly deterministic - if your primary input keeps producing elements, items from the other inlets will never join the merged stream. Another issue is the relative lack of configurability - you have a single preferred source, the secondary sources, and that’s it.

In contrast, MergePrioritized not only lets you assign a relative priority to each inlet, it also selects elements pseudo-randomly, weighted by those priorities, which prevents starvation.

There’s also a bonus feature, owing to how MergePreferred is implemented - it has a non-standard shape with a special .preferred inlet, and because of that it does not lend itself well to usage via Source.combine [1]. MergePrioritized, in contrast, keeps all of its inlets in the standard in sequence, allowing e.g.:

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{MergePrioritized, Sink, Source}

implicit val system: ActorSystem = ActorSystem("merge-prioritized")
implicit val materializer: ActorMaterializer = ActorMaterializer()

val PriorityStep = 100

Source.combine(
  Source(0 to 4),
  Source(5 to 9)
)(numInputs =>
  MergePrioritized(Range(1, numInputs * PriorityStep, PriorityStep))) (1)
  .runWith(Sink.foreach(print)) (2)

  1. Assigns priorities [1, 101, …].

  2. Usually will print out 0567891234.

4. Backoff Stages for Akka Streams

A while ago, I had the opportunity to work a bit with Monix. One of the things I found to be missing in Akka Streams, comparatively, was an easy way to add backoff to element sources. This is supremely helpful when working with e.g. glitchy HTTP APIs, as it prevents unnecessary consumption of thread pool time.

Luckily, 2.5.4 now comes with a RestartSource. Not only that, but there’s also a RestartFlow and a RestartSink!

This new feature is well documented, so I’ll simply point you to the relevant documentation.
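Still, to give a flavour of the API, here is a minimal sketch of wrapping a flaky call in a source that restarts with exponential backoff; callFlakyApi is a hypothetical stand-in for e.g. an HTTP request, not something provided by Akka:

import scala.concurrent.Future
import scala.concurrent.duration._
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{RestartSource, Sink, Source}

implicit val system: ActorSystem = ActorSystem("restart-example")
implicit val materializer: ActorMaterializer = ActorMaterializer()

// Hypothetical stand-in for a glitchy external call.
def callFlakyApi(): Future[String] = Future.successful("response")

RestartSource.withBackoff(
  minBackoff = 1.second,    // wait at least this long before restarting
  maxBackoff = 30.seconds,  // cap the exponential growth
  randomFactor = 0.2        // add jitter so restarts don't align
) { () =>
  // The factory is invoked anew on every start and restart; the wrapped
  // source is restarted both on failure and on completion.
  Source.fromFuture(callFlakyApi())
}.runWith(Sink.foreach(println))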

3. Support for timers in actors

Having an actor send a message to itself at regular intervals is a common pattern[2]. However, the usual way to do that, via the system’s Scheduler, has always looked somewhat clunky.

Well, now the akka.actor.Timers API lets you implement this pattern in a cleaner, more self-contained way.

The major advantage is that timers created this way follow the lifecycle of the owning actor, which prevents the abstraction leaks introduced by reaching out to the system-level scheduler.
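A minimal sketch of the trait in action (the key and message names are illustrative, not part of the API):

import scala.concurrent.duration._
import akka.actor.{Actor, Timers}

object SelfPinger {
  private case object TickKey  // identifies the timer within this actor
  case object Tick             // the periodic self-message
}

class SelfPinger extends Actor with Timers {
  import SelfPinger._

  // Started when the actor starts and cancelled automatically when it
  // stops - no Cancellable bookkeeping or postStop cleanup required.
  timers.startPeriodicTimer(TickKey, Tick, 1.second)

  def receive: Receive = {
    case Tick => // react to the periodic tick
  }
}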

2. PartitionHub for Akka Streams

This new feature is an extension of the MergeHub/BroadcastHub concept of dynamically attaching producers and consumers to a running stream. Somewhat analogously to MergePrioritized, it gives you more control over which consumer receives a given element than a "plain" BroadcastHub does.

As with 4., the documentation is pretty exhaustive here.
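Still, as a quick taste, here is a minimal sketch of routing elements to one of several dynamically attached consumers (the names and the partitioning function are illustrative):

import akka.NotUsed
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Keep, PartitionHub, Sink, Source}

implicit val system: ActorSystem = ActorSystem("partition-hub-example")
implicit val materializer: ActorMaterializer = ActorMaterializer()

// Materializing the sink yields a Source that every new consumer attaches to.
val fromProducer: Source[Int, NotUsed] =
  Source(1 to 100)
    .toMat(PartitionHub.sink(
      // Route each element to consumer number (elem % number of consumers).
      (numConsumers, elem) => elem % numConsumers,
      startAfterNrOfConsumers = 2))(Keep.right)
    .run()

fromProducer.runForeach(i => println(s"consumer1: $i"))
fromProducer.runForeach(i => println(s"consumer2: $i"))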

1. Affinity-aware actor pool

Finally, the most interesting feature in my opinion is the new dispatcher executor.

The important thing to note here: it’s a different beast from the PinnedDispatcher you may be familiar with - it’s actually an extension of that concept.

To be specific, the new AffinityPool, settable via executor = "affinity-pool-executor" in the dispatcher section, merely attempts to "pin" each actor to a thread - the threads and actors are not bound pairwise permanently. While it sounds weaker than the original PinnedDispatcher, such a pinning mechanism is actually more powerful, as it prevents an uncontrolled growth of resource usage.
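As a sketch, assuming a dispatcher named affinity-dispatcher is defined in application.conf (the name is illustrative; only the executor setting itself comes from this release), actors are assigned to it the same way as to any other dispatcher:

// In application.conf:
//
//   affinity-dispatcher {
//     type = Dispatcher
//     executor = "affinity-pool-executor"
//   }

import akka.actor.{Actor, ActorSystem, Props}

class Worker extends Actor {
  def receive: Receive = {
    case msg => sender() ! msg  // echo back; the actual work is beside the point
  }
}

val system = ActorSystem("affinity-example")

// Each Worker will tend to be scheduled on the same pool thread over and over.
val workers = (1 to 4).map { i =>
  system.actorOf(Props[Worker].withDispatcher("affinity-dispatcher"), s"worker-$i")
}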

The AffinityPool itself does not currently attempt to pin its threads to CPU cores, leaving this task to the underlying operating system (since no meaningful gains were found during testing for the current release). However, even the current setup still lends towards reduced context switching - and thus increased efficiency - in many scenarios.

For more information, see:

And that’s all folks!


1. Meaning you’re effectively forced to always use the GraphDSL with it.
2. One use case would be creating a custom backoff stage…
