TigerBeetle vs PostgreSQL Performance: Benchmark Setup, Local Tests

Some time ago, we covered what's interesting in TigerBeetle: a fixed-schema, performance-oriented, replicated, highly available financial database. TigerBeetle makes a number of interesting design choices, all aimed at building a system that offers "1000x performance" and can power the next generation of financial workloads.

Let's put the design and these claims to the test! We'll benchmark TigerBeetle against PostgreSQL, the "go-to" relational database. In this article, we'll cover the test design and provide the results of initial, single-node, local tests. In the next installment, we'll test a full cluster with data replication, running in isolation on dedicated servers.

As usual, the benchmark code is available on GitHub if you'd prefer to explore it that way.

Designing the test: PostgreSQL schema

TigerBeetle has a fixed schema, supporting (only) double-entry bookkeeping. There are three main entities: ledgers, accounts, and transfers. A transfer always involves two accounts: one is credited, while the other is debited. Creating a transfer is the main database operation, and it will also be the one we use in this benchmark.

To provide a fair comparison, in PostgreSQL, we'll implement a schema similar to the TigerBeetle one, though a bit simplified. It only includes the fields that will be used in the tests (skipping ledgers, user data, timeout, codes, flags & linked transfers):

CREATE TABLE IF NOT EXISTS accounts (
    id BIGINT PRIMARY KEY,
    balance BIGINT NOT NULL DEFAULT 0,
    CONSTRAINT balance_non_negative CHECK (balance >= 0)
);

CREATE TABLE IF NOT EXISTS transfers (
    id BIGSERIAL PRIMARY KEY,
    source_id BIGINT NOT NULL REFERENCES accounts(id),
    dest_id BIGINT NOT NULL REFERENCES accounts(id),
    amount BIGINT NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    CONSTRAINT amount_positive CHECK (amount > 0),
    CONSTRAINT different_accounts CHECK (source_id != dest_id)
);

CREATE INDEX IF NOT EXISTS idx_transfers_source ON transfers(source_id);
CREATE INDEX IF NOT EXISTS idx_transfers_dest ON transfers(dest_id);
CREATE INDEX IF NOT EXISTS idx_transfers_created_at ON transfers(created_at);

Note that we always require that the balance is non-negative; this is optional in TigerBeetle, but the transfers we'll create will have that check turned on (the debits_must_not_exceed_credits flag is set during account creation).

Transfers in PostgreSQL

Once we have the schema, we can perform some transfers! In TigerBeetle, we just need to send a create_transfer request, but in PostgreSQL, we must implement the transfer logic in SQL.

There are a couple of strategies we can take, with different uses of PostgreSQL's locking, to ensure correctness; whichever strategy we choose, we must ensure that at no point is money fabricated. Each transfer in PostgreSQL consists of three operations:

  • Decreasing the balance on the source account
  • Increasing the balance on the destination account
  • Inserting a row into the transfers table

We'll implement transfers as a stored procedure to reduce network overhead & round-trips. That way, a transfer will be a single network call to the database, performing the three steps on the database side.

The first approach is to use explicit locking. First, we do a SELECT … FOR UPDATE to lock and read the balances of the source & destination accounts (always in the same order, e.g. by ascending id, to avoid deadlocks), then we UPDATE the accounts with the new values, and finally INSERT a new transfer. This is implemented as a transfer function.
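To make this concrete, here's a minimal PL/pgSQL sketch of such a function; the function and parameter names are illustrative, and the actual transfer function in the repository may differ in its details:

CREATE OR REPLACE FUNCTION transfer(p_source BIGINT, p_dest BIGINT, p_amount BIGINT)
RETURNS VOID AS $$
DECLARE
    v_source_balance BIGINT;
BEGIN
    -- Lock both accounts in a consistent order (ascending id) to avoid deadlocks
    PERFORM id FROM accounts
        WHERE id IN (p_source, p_dest)
        ORDER BY id
        FOR UPDATE;

    SELECT balance INTO v_source_balance FROM accounts WHERE id = p_source;
    IF v_source_balance IS NULL OR v_source_balance < p_amount THEN
        RAISE EXCEPTION 'insufficient balance on account %', p_source;
    END IF;

    UPDATE accounts SET balance = balance - p_amount WHERE id = p_source;
    UPDATE accounts SET balance = balance + p_amount WHERE id = p_dest;

    INSERT INTO transfers (source_id, dest_id, amount)
    VALUES (p_source, p_dest, p_amount);
END;
$$ LANGUAGE plpgsql;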

The second approach is to use implicit locking: we UPDATE … SET … WHERE and check how many rows were affected (that is, whether the transfer succeeded). If so, we INSERT a new transfer. This is implemented in the transfer_atomic function.
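A corresponding sketch of the implicit-locking variant (again, the names are illustrative and the repository's transfer_atomic may be structured differently) could look like this:

CREATE OR REPLACE FUNCTION transfer_atomic(p_source BIGINT, p_dest BIGINT, p_amount BIGINT)
RETURNS VOID AS $$
DECLARE
    v_rows INT;
BEGIN
    -- The WHERE clause locks the source row implicitly and checks the balance at the same time
    UPDATE accounts SET balance = balance - p_amount
        WHERE id = p_source AND balance >= p_amount;
    GET DIAGNOSTICS v_rows = ROW_COUNT;
    IF v_rows = 0 THEN
        RAISE EXCEPTION 'insufficient balance on account %', p_source;
    END IF;

    UPDATE accounts SET balance = balance + p_amount WHERE id = p_dest;

    INSERT INTO transfers (source_id, dest_id, amount)
    VALUES (p_source, p_dest, p_amount);
END;
$$ LANGUAGE plpgsql;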

Of course, in both cases, the stored procedure is run within a transaction block. That way, we ensure that either a full transfer is recorded or that a transfer is rejected (never partially). We're using the relatively weak READ COMMITTED transaction isolation level, which is sufficient for this workload. (This is different in general from TigerBeetle's strict serializability, but we don't need such high guarantees for the PostgreSQL implementation.)
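For illustration, creating a transfer from a client then boils down to a single statement inside a transaction; the account ids and amount below are made up:

-- READ COMMITTED is PostgreSQL's default level, so setting it explicitly is optional
BEGIN ISOLATION LEVEL READ COMMITTED;
SELECT transfer(1, 2, 100);
COMMIT;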

Batching transfers in PostgreSQL

The above two strategies allow us to implement load tests where, both in TigerBeetle and PostgreSQL, the client performs a similar operation to create a new transfer: a single call to the appropriate function.

However, comparing the two databases using such a straightforward approach might be misleading, as TigerBeetle clients work differently from SQL ones.

Typically, when using a SQL database, you have a pool of pre-established connections. When a client wants to send a request, it either fetches an open connection (if one is available) or waits until one becomes available, performs the operation, and returns the connection to the pool. That's also what a load test with many concurrently running clients, trying to call the transfer stored procedures, might do.

However, TigerBeetle takes a different approach. For each client, there's exactly one connection to the database, with at most one in-flight request at any time. While a request is in flight, subsequent requests are collected into a batch; once the previous request completes, the batch is sent. Such aggressive batching is one of TigerBeetle's strategies to achieve high performance, and is done automatically by the client implementation.

But! We can implement something similar in PostgreSQL. That is, we can create a batched client, which maintains a single connection to the database. While the previous request is serviced, the next transfers are collected into a batch. Finally, we call a stored procedure which handles the requests as a batch - using a single network call, for many transfers.

Note that this technique is specialized for the transfer use-case - it wouldn't work for arbitrary queries. But here, we can create a batch_transfers function which takes three arrays: source accounts, destination accounts, and amounts. Before each transfer, a savepoint is created, allowing the "big" batch transaction to partially roll back if a transfer isn't successful.
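A rough sketch of such a function, reusing the transfer function sketched above (the repository's batch_transfers may be structured differently): in PL/pgSQL, a nested block with an EXCEPTION clause creates an implicit savepoint, so a failed transfer rolls back on its own without aborting the rest of the batch.

CREATE OR REPLACE FUNCTION batch_transfers(
    p_sources BIGINT[],
    p_dests BIGINT[],
    p_amounts BIGINT[]
) RETURNS INT AS $$
DECLARE
    v_ok INT := 0;
    i INT;
BEGIN
    FOR i IN 1 .. array_length(p_sources, 1) LOOP
        BEGIN
            PERFORM transfer(p_sources[i], p_dests[i], p_amounts[i]);
            v_ok := v_ok + 1;
        EXCEPTION WHEN OTHERS THEN
            -- this transfer is rolled back to the implicit savepoint; continue with the next one
            NULL;
        END;
    END LOOP;
    RETURN v_ok;  -- number of successful transfers in the batch
END;
$$ LANGUAGE plpgsql;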

The client code

Both TigerBeetle and PostgreSQL clients are implemented using asynchronous Rust with the Tokio runtime. This allows hundreds or thousands of executors to run concurrently using a small thread pool.

The test clients can run in two modes: max-throughput and fixed-rate. The former is used to determine the maximum number of transfers per second a database can handle at a given client concurrency level (which determines how many executors attempt to create transfers in a loop). The latter is used to determine the transfer latency at a given level of transfers per second and client concurrency.

For our initial single-node local tests, we'll focus solely on max-throughput testing.

Each client works in two phases: warmup and measurement, with configurable durations. Each measurement is reported to Prometheus via an OpenTelemetry metrics collector.

That's where the role of the client code ends. In max-throughput mode, its goal is to hammer the database, for the duration of the test, with randomly generated transfers between randomly chosen accounts. Accounts are picked using a Zipfian distribution, which reflects the fact that there's usually a small number of "hot" accounts receiving a disproportionate share of transfers.

The client code has three PostgreSQL executors (explicit locking, implicit locking, batched) and one TigerBeetle executor.

The coordinator

The coordinator code handles spawning the client(s), performing multiple test runs (wiping the database in-between runs), and collecting and reporting the data.

When running locally, Grafana, Prometheus & the OpenTelemetry collector are always set up using Docker. When running on macOS, TigerBeetle must be run directly on the host because it uses io_uring, which is unavailable in Docker. PostgreSQL is always run through a Docker container.

Local test results

Finally, let's look at the local test results! The tests were run on an Apple MacBook Pro with the M1 Max chip and 64 GB of RAM.

In the first phase, we ran a number of "quick" tests, with 3 test iterations, each consisting of a 30-second warmup and a 60-second measurement phase. These quick tests were used to determine the optimal concurrency setting for each tested transfer executor.

The best-performing concurrency settings for each executor are:

Executor                     Concurrency    Connection pool size
TigerBeetle                  24 576         n/a (one connection per client)
Postgres explicit locking    64             16
Postgres implicit locking    64             16
Postgres batched             384            n/a (single connection per client)

Then, for the concurrency setting that yielded the best results (different for each executor), we ran a longer test, with 3 test iterations, a 2-minute warmup, and a 5-minute measurement. The results are:

[Chart: transfers per second achieved by each executor in the long-running test]

TigerBeetle is 2.8 times faster than the best competing PostgreSQL implementation, achieving 42k TPS, compared to Postgres-batched at 15k TPS, and 6.6x faster than the Postgres-explicit-locking approach. It's a significant difference (although not 1000x)!

Note that TigerBeetle achieves this throughput with a much higher concurrency setting, which ensures that batches are larger. With lower concurrency settings, the throughput is lower as well:

[Chart: TigerBeetle throughput at different concurrency settings in the "quick" tests]

This plots the results of the "quick" tests, where TigerBeetle achieved even higher throughput (60k TPS) than in the long-running tests.

Test limitations

Keep in mind that we are testing a specific usage scenario - where there are many concurrently running clients (corresponding, e.g., to incoming HTTP requests), and where each client creates one transfer. The performance results might be totally different if batching occurred at a different level and if the transfers arrived pre-batched (e.g., up to the 8192 batch size, which is the maximum for TigerBeetle).

Hence, the above test might give you some idea of how TigerBeetle performs when compared to PostgreSQL, but before making any decisions, make sure to run benchmarks on your workload - they might turn out rather differently!

Next steps

Local, single-node tests don't tell the whole story - rather, they just give an indication as to how performance might turn out in a "real" production setup.

That's why in the next installment, we'll take a look at the performance of replicated clusters - the way TigerBeetle is intended to be used - running on dedicated servers. Stay tuned!
