Welcome back to this series about logical replication from Postgres 16 stand-by servers, in which we’ll discuss how to use this feature with Debezium—a popular open-source platform for Change Data Capture (CDC) for a wide range of databases—as well as how to manage logical replication in case of failover scenarios, i.e. a situation where your primary Postgres server becomes unavailable and a stand-by server needs to take over.
If you want to learn more about logical replication in general, and why and how to use it with stand-by servers in Postgres 16, then I suggest heading over to part one of this blog series before continuing here.
Stand-By Logical Replication With Debezium
The beauty of Postgres logical replication is that not only can other Postgres instances serve as consumers of a replication stream, but other clients can also subscribe to a replication slot for ingesting real-time change event feeds from Postgres. Debezium, offering CDC support for many databases, including Postgres, makes use of this for exposing change event streams via Apache Kafka, but also—via its Debezium Server component—to other kinds of messaging and streaming platforms such as AWS Kinesis, Apache Pulsar, NATS, and others. So let’s quickly test how to stream changes from a Postgres stand-by with Debezium.
Note that we’ll need to use the latest Debezium release—2.5.0.Beta1, released last week—in order to stream changes from a stand-by server. When I first tested this, things didn’t quite work, as the connector made use of the function <span class="inline-code">pg_current_wal_lsn()</span> in order to obtain the current WAL position. This function is only available on primary servers, though. So I took the opportunity to make my first little Debezium contribution in quite a while, changing the connector to invoke <span class="inline-code">pg_last_wal_receive_lsn()</span> instead when connecting to a stand-by. Thanks a lot to the team for the quick merge and inclusion in the Beta1 release!
As a playground for this experiment, I’ve created a simple Docker Compose file which launches Kafka, and Kafka Connect as the runtime environment for Debezium. Here’s an overview of all the involved components:
Fun fact: this uses Kafka in KRaft mode, i.e. no ZooKeeper process is needed. Good times! If you want to follow along, make sure you have Docker installed and you have a Postgres primary and stand-by node set up on Amazon RDS as described in part one. Then clone the decodable/examples repository and launch the demo environment like so:
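A minimal sketch of those steps could look like this; note that the directory name within the repository is an assumption and may differ in the actual decodable/examples repo:

```shell
# Clone the examples repository (directory name below is an assumption)
git clone https://github.com/decodable/examples.git
cd examples/postgres-logical-replication-standby

# Start Kafka (KRaft mode) and Kafka Connect with Debezium
docker compose up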
In order to use Debezium with Postgres on RDS, it is recommended to use the <span class="inline-code">pgoutput</span> logical decoding plug-in. It is the standard decoding plug-in, also used for logical replication to other Postgres instances. This plug-in requires a publication to be set up, which configures which kinds of changes should be published for which tables. Usually, Debezium will set up the publication—similar to the logical replication slot—automatically. Unfortunately, this is not supported when ingesting changes from a stand-by server, as publications (unlike replication slots) can only be created on a primary server. Debezium doesn’t know about the primary, so you’ll need to create that publication manually before setting up the connector:
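Run a statement along these lines against the primary server; the publication name <span class="inline-code">dbz_publication</span> matches Debezium's default but is an assumption here, and must match the name you later configure for the connector:

```sql
-- Run on the PRIMARY server; publications cannot be created on a stand-by
CREATE PUBLICATION dbz_publication FOR ALL TABLES;
```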
Having to create publications for stand-by replication slots on the primary seems somewhat inconsistent and it’s also not ideal in terms of operations, but there may be a good reason for that requirement.
<div class="side-note">Note that instead of the <span class="inline-code">ALL TABLES</span> publication, you could also narrow this down further and only expose the changes for specific tables, omit certain columns (e.g. with PII data) or rows (e.g. logically deleted rows), and more. See the docs for the <span class="inline-code">CREATE PUBLICATION</span> command for more details.</div>
Let’s take a look at the connector configuration then. This is done via a JSON-based configuration file looking like this:
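Here is a sketch of such a configuration file; the host name, credentials, database name, and connector name are placeholders you'd adapt to your environment, and the slot and publication names are assumptions:

```json
{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "<your-standby-host>.rds.amazonaws.com",
    "database.port": "5432",
    "database.user": "postgres",
    "database.password": "<password>",
    "database.dbname": "inventorydb",
    "topic.prefix": "inventorydb",
    "plugin.name": "pgoutput",
    "slot.name": "debezium_slot",
    "publication.name": "dbz_publication",
    "publication.autocreate.mode": "disabled"
  }
}
```

Setting <span class="inline-code">publication.autocreate.mode</span> to <span class="inline-code">disabled</span> tells the connector to use the manually created publication rather than trying to create one itself, which would fail on a stand-by.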
Adjust database host, user name, password, and database name as needed when applying this to your own environment. To apply this configuration, the REST API of Kafka Connect can be invoked. But if you’re like me and tend to forget all the exact endpoint URLs, then take a look at kcctl 🧸 (yes, the teddy bear emoji is part of the name), which I am going to use in the following. It is a command line client for Kafka Connect, which makes it very easy to create connectors, restart and stop them, etc. Following the semantics of kubectl, a configuration file is applied like this:
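Assuming the configuration was saved as <span class="inline-code">postgres-connector.json</span> (the file name is an assumption), applying it with kcctl looks like this:

```shell
kcctl apply -f postgres-connector.json
```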
Let’s take a look at the connector and its status:
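With kcctl, this could look like the following; the connector name is an assumption matching whatever you chose in the configuration file:

```shell
# List all connectors and their state
kcctl get connectors

# Show detailed status for the Postgres connector
kcctl describe connector inventory-connector
```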
Having confirmed that the connector is running, let’s do a quick update in the primary database and examine the corresponding change events in Kafka, ingested from the stand-by instance:
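One way to inspect the topic is with the console consumer inside the Kafka container; the service name and the topic name (following Debezium's <span class="inline-code">&lt;topic.prefix&gt;.&lt;schema&gt;.&lt;table&gt;</span> pattern) are assumptions based on the example set-up:

```shell
docker compose exec kafka /bin/kafka-console-consumer \
  --bootstrap-server kafka:9092 \
  --topic inventorydb.public.customers \
  --from-beginning
```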
These are the snapshot events emitted by the connector when starting up. Let’s do an update to one of the records on the primary:
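For instance, an update along these lines, executed against the primary (table, column, and key values are illustrative):

```sql
-- Run on the PRIMARY; the change will be replicated to the stand-by
-- and picked up by the connector from there
UPDATE customers SET first_name = 'Sarah' WHERE id = 1001;
```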
And shortly thereafter, the corresponding change event should show up in the Kafka topic:
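An update event emitted by the Debezium Postgres connector has the general shape sketched below (payload only, heavily abridged; all field values are illustrative):

```json
{
  "before": { "id": 1001, "first_name": "Anne", "last_name": "Kretchmar" },
  "after":  { "id": 1001, "first_name": "Sarah", "last_name": "Kretchmar" },
  "source": {
    "connector": "postgresql",
    "db": "inventorydb",
    "schema": "public",
    "table": "customers"
  },
  "op": "u",
  "ts_ms": 1701167200000
}
```

The <span class="inline-code">op</span> field indicates the change type (<span class="inline-code">u</span> for update), while <span class="inline-code">before</span> and <span class="inline-code">after</span> carry the row state before and after the change.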
At this point, you could hook up that change event stream with Apache Flink, or the Decodable Kafka source connector for feeding it into a real-time stream processing pipeline, but I’ll leave that for another day 🙂.
Towards Fail-Over Slots
Postgres’ support for logical replication has been built out significantly over the last few years. One thing is still missing, though: failover slots. Logical replication slots on the primary are not propagated to stand-bys. This means that when the primary unexpectedly goes down, any slots must be recreated on the new primary after promotion. Unfortunately, this can cause gaps in the change event stream, as any data change occurring before the new slot has been created would be missed. Clients would be forced to backfill the entire data set (i.e. take a snapshot, in Debezium terminology) to be sure that no data is missing.
Discussions around adding support for fail-over slots go back to Postgres versions as old as 9.6. More recently, Patroni added their own solution to the problem, and EnterpriseDB released pg_failover_slots, a Postgres extension for slot failover. It remains to be seen when Postgres itself adds this feature (as hinted at in this presentation, it may happen with Postgres 17). Until then, in cases where the pg_failover_slots extension isn’t available—such as on Amazon RDS—logical replication slots on stand-bys let you build your own version of failover slots. The idea is to create two corresponding slots on primary and stand-by, and use the <span class="inline-code">pg_replication_slot_advance()</span> function (added in Postgres 11) to keep the two in sync. The replication consumer would connect to the slot on the primary at first. After a fail-over, when the stand-by server has been promoted to primary, it would reconnect to that slot.
For this to work, it is critical to periodically move the stand-by slot forward by calling <span class="inline-code">pg_replication_slot_advance()</span> with the confirmed flush LSN from the primary, i.e. the latest position in the WAL which has been processed and acknowledged by the consumer of the primary slot. Otherwise, the stand-by slot would retain ever larger amounts of WAL, and the consumer would also receive lots of duplicated events after a fail-over.
This could for instance be implemented using a cron job, or, when running on AWS, with a scheduled Lambda function. This job would periodically retrieve the confirmed flush LSN for the slot on the primary via the <span class="inline-code">pg_replication_slots</span> view:
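A query along these lines would do; the slot name is an assumption:

```sql
-- Run on the PRIMARY: obtain the latest WAL position
-- acknowledged by the consumer of the primary slot
SELECT confirmed_flush_lsn
FROM pg_replication_slots
WHERE slot_name = 'debezium_slot';
```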
The slot on the stand-by would then be advanced to that LSN:
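Using the LSN retrieved from the primary (the value below is a placeholder), the stand-by slot can be moved forward like so:

```sql
-- Run on the STAND-BY: advance the slot to the LSN
-- confirmed on the primary (example value shown)
SELECT pg_replication_slot_advance('debezium_slot', '0/2D503A8');
```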
How often you should advance the stand-by slot depends on the amount of duplication you are willing to accept after a failover: the closer the stand-by slot follows the primary slot, the fewer duplicates there will be when switching from one slot to the other. Note that the stand-by slot must never be advanced beyond the confirmed LSN of the primary slot. Otherwise, events would be lost when reading from the stand-by slot after a failover. Specifically, when first setting up the stand-by slot, it will in all likelihood be at a newer LSN than what has been confirmed by the primary, so it is vital to synchronize the two slots initially. To do so, wait for the next LSN to be confirmed by the primary slot, make sure this LSN has been replicated to the stand-by, and then advance the stand-by slot to that LSN.
Logical replication from Postgres stand-by servers has been a long-awaited functionality, and it finally shipped with Postgres 16. Not only does it allow you to build chains of Postgres replicas (one stand-by server subscribing to another), but non-Postgres clients such as Debezium are also no longer limited to connecting solely to primary Postgres instances. This can be very useful for load distribution or in situations where you prefer a CDC tool not to connect directly to your primary database.
The last missing piece of the puzzle here is full support for fail-over slots, for which you still need either a separate extension (pg_failover_slots) or to implement your own approach by manually keeping two slots on primary and stand-by in sync. It would be great to see official support for this in a future Postgres release.
Finally, if you’d like to learn more about logical replication from stand-bys, check out these posts from some fine folks in the Postgres community:
- Postgres 16 highlight: Logical decoding on standby, by Bertrand Drouvot
- Logical Replication on Standbys in Postgres 16, by Roberto Mello
- Postgres 16: The exciting and the unnoticed, by Samay Sharma
- PostgreSQL Logical Replication: Advantages, EDB's Contributions and PG 16 Enhancements, by Shaun Thomas
Many thanks to Bertrand Drouvot, Robert Metzger, Robin Moffatt, Gwen Shapira, and Sharon Xie for their feedback while writing this post.