Announcing jox: Fast and Scalable Channels in Java

Adam Warski

19 Dec 2023 · 5 minutes read


The jox library implements an efficient Channel data structure in Java, which is designed to be used with virtual threads (introduced in Java 21). Specifically, channels rely on blocking operations to perform the necessary synchronisation between interested parties.

The implementation follows the "Fast and Scalable Channels in Kotlin Coroutines" paper, and the Kotlin implementation. There, coroutines and suspendable functions are used (which are a Kotlin-specific feature). The rationale and use cases for coroutines are similar to those for which virtual threads were designed. Hence, the algorithm might work well using both runtimes!

Similarly to a queue, data can be sent to a channel and received from a channel. Moreover, channels can be closed because the source is done with sending values or because there's an error while the sink processes the received values.

Currently, a preview 0.0.2 version is available, with the above basic functionality implemented. However, that's just the start—look at the closing remarks for our plans!

Channels in action

Here's a short demo of channels in action; we send two values and then close the channel by stating that no more values will be produced. The receiver works in lockstep with the sender, receiving subsequent values and finally, a value representing the fact that the channel is done:

import jox.Channel;

class Demo {
    public static void main(String[] args) throws InterruptedException {
        // creates a rendezvous channel (buffer of size 0: a sender &
        // receiver must meet to pass a value)
        var ch = new Channel<Integer>();

        Thread.ofVirtual().start(() -> {
            try {
                // send() will block, until there's a matching receive()
                ch.send(1);
                System.out.println("Sent 1");
                ch.send(2);
                System.out.println("Sent 2");
                // mark the channel as done: no more values will be produced
                ch.done();
                System.out.println("Done");
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        });

        System.out.println("Received: " + ch.receive());
        System.out.println("Received: " + ch.receive());
        // we're using the "safe" variant which returns a value when the
        // channel is closed, instead of throwing a ChannelDoneException
        System.out.println("Received: " + ch.receiveSafe());
    }
}

The above example uses a rendezvous channel with no buffer: a sender and a receiver must synchronise for a value to be passed. Buffered channels can also be created by supplying a constructor argument (e.g. new Channel<String>(10)).

Channels are designed to be used concurrently by many producers and many consumers.
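To illustrate the many-producers/one-consumer shape with a bounded buffer, here's a minimal sketch using the built-in ArrayBlockingQueue (one of the structures jox is benchmarked against later in this post) as a stand-in for a buffered channel; the class name and the element counts are illustrative, not part of jox:

```java
import java.util.concurrent.ArrayBlockingQueue;

class ManyToOne {
    // Three producers feed one consumer through a bounded buffer;
    // returns the sum of everything received, to check nothing is lost.
    static int run() throws InterruptedException {
        var queue = new ArrayBlockingQueue<Integer>(10); // buffer of size 10

        // several virtual-thread producers writing to the same queue
        for (int p = 0; p < 3; p++) {
            Thread.ofVirtual().start(() -> {
                try {
                    for (int i = 1; i <= 5; i++) {
                        queue.put(i); // blocks only when the buffer is full
                    }
                } catch (InterruptedException e) {
                    throw new RuntimeException(e);
                }
            });
        }

        // a single consumer draining exactly what the producers sent
        int sum = 0;
        for (int i = 0; i < 15; i++) {
            sum += queue.take();
        }
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("sum = " + run()); // 3 producers × (1+...+5) = 45
    }
}
```

With a jox Channel the structure would be the same, with send/receive replacing put/take, plus the ability to close the channel when the producers are done.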


Performance

The focal point of the implementation is performance. The bottom line is that Kotlin with coroutines is still faster, but Java with virtual threads is not far behind!

The starting point for the performance work was an investigation into what the performance limits of virtual threads might be. This led to a couple of interesting conclusions, such as finding the most efficient way to wait for a condition when two threads of execution are actively working, or seeing how the implementations of the asynchronous runtimes behind Kotlin's coroutines and Java's virtual threads differ.
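As an illustration of the "waiting for a condition between two active threads" problem, one common pattern on the JVM is a park/unpark hand-off via java.util.concurrent.locks.LockSupport; this is a generic sketch under that assumption, not necessarily the strategy jox settled on:

```java
import java.util.concurrent.locks.LockSupport;

class ParkHandOff {
    // The "cell" through which one value is handed over; volatile so the
    // write is visible to the waiting thread.
    static volatile Integer cell = null;

    // The receiver parks until the sender publishes a value and unparks it.
    static int run() {
        Thread receiver = Thread.currentThread();
        Thread.ofVirtual().start(() -> {
            cell = 42;                    // publish the value
            LockSupport.unpark(receiver); // wake the waiting thread
        });
        while (cell == null) {
            // park() may also return spuriously, hence the re-check loop
            LockSupport.park();
        }
        return cell;
    }

    public static void main(String[] args) {
        System.out.println("received: " + run());
    }
}
```

Note that unpark() may run before park(); in that case the permit is stored and the subsequent park() returns immediately, so the hand-off works regardless of scheduling order.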

To keep an eye on performance, jox includes a suite of jmh benchmarks, run after each commit / PR merge, allowing us to spot any performance regressions quickly.

We're currently running tests with two concurrently running (virtual) threads of execution, one sending values and the other receiving values on a single channel. The test runs include rendezvous and buffered channels with buffers of sizes 1, 10 and 100. For comparison, we also run the same tests using Java's built-in data structures: SynchronousQueue, ArrayBlockingQueue and Exchanger, as well as Kotlin's Channels (which uses coroutines and Dispatchers.Default).
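As a rough sketch of the rendezvous scenario, here's what such a two-thread send/receive loop looks like using the built-in SynchronousQueue and virtual threads; the class name and iteration counts are illustrative, and this is not the actual jmh harness:

```java
import java.util.concurrent.SynchronousQueue;

class RendezvousBenchSketch {
    // Runs `iterations` rendezvous hand-offs between two virtual threads
    // and returns the sum of the received values, so the exchange can be
    // verified.
    static long run(int iterations) throws InterruptedException {
        var queue = new SynchronousQueue<Integer>();
        Thread sender = Thread.ofVirtual().start(() -> {
            try {
                for (int i = 1; i <= iterations; i++) {
                    queue.put(i); // blocks until the receiver takes the value
                }
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        });
        long sum = 0;
        for (int i = 0; i < iterations; i++) {
            sum += queue.take(); // blocks until the sender offers a value
        }
        sender.join();
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        int n = 100_000;
        long start = System.nanoTime();
        run(n);
        long perOp = (System.nanoTime() - start) / n;
        System.out.println("~" + perOp + " ns per send/receive pair");
    }
}
```

Swapping in an ArrayBlockingQueue (or a buffered jox Channel) instead of the SynchronousQueue gives the buffered variants of the same test.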

And here are the numbers! First, jox's rendezvous Channels vs Kotlin's rendezvous channels (done on a M1 Max MacBook Pro):


In Kotlin, a single send/receive operation takes about 108 ns, while in jox, 176 ns. However (and this isn't visible on the graph), the error in the case of Kotlin is ±0.5 ns/op, while in the case of jox it is ±14.9 ns/op. This indicates that jox's performance varies quite significantly between runs. While there's room for improvement compared to Kotlin, I think jox's results are still very good!

The jox benchmarks are run twice. First, directly, where jmh manages each invocation of send or receive. Second, iteratively, where each jmh-managed method invocation starts two virtual threads and runs 1 million iterations of send-receive operations. While the first approach is how jmh is intended to be used, the second mirrors how Kotlin is benchmarked, as jmh doesn't support suspendable functions (being a Java, not a Kotlin, library).

The next comparison pits jox's rendezvous Channels ("direct" run) against built-in Java constructs (keep in mind that Exchanger offers different functionality - exchanging values between two threads - but is interesting to compare from a performance point of view):


Again, it is not visible on the graph, but the errors are very high, especially for Exchanger (±152 ns/op). The variability is probably due to the different scheduling decisions made by the virtual threads executor: how many platform threads to use, and on which of them the virtual threads are scheduled (that is, the degree of parallelism).

And finally, buffered channels in jox ("iterative" run) and Kotlin, compared to Java:


As you can see, especially in the buffered case, Kotlin comes out much faster. I suspect that's due to how the runtime schedules virtual threads (in Java) and coroutines (in Kotlin). Kotlin makes different decisions on when to use parallelism (multiple platform threads) and on how many operations a thread/coroutine performs before a context switch: clearly, this makes a difference, at least in this test.

Finally, remember that this is only one benchmark, which might not (and probably doesn't) represent (future) real-world usage of jox Channels and Kotlin Channels! The plan is to add more scenarios, e.g., passing values through a chain of channels, to compare the implementations and the runtimes in more detail.

Further work

Next, the plan is to work on Go-like selects, which allow selecting exactly one channel to receive from or send to (depending on availability). Moreover, there may still be opportunities to optimise the code that is already there: jmh and async-profiler are waiting. Once this is ready, jox will become the default channel implementation in ox, our library for safe concurrency and resiliency in Scala, allowing for more real-world tests.

If you'd like to get involved or leave feedback, please open an issue or start a discussion on our community forum. jox is available on GitHub under the Apache 2.0 licence.
