Data To Go: Postgres Logical Replication
Logical replication has been around since Postgres 10 and it's just one of the most useful things out there. Need to get your data to a business analytics engine? Want to create an ETL pipeline from Postgres to your data warehouse? Want to test a new version of Postgres locally with your data? Want to consolidate a few databases into one? Logical replication can do all those things!
It is a replication tool, which means it will copy some or all of your data to a new location and continue updating data from the primary source. This allows you to have accurate copies of your data in different locations in perpetuity.
There are two types of replication in Postgres, so here are some quick notes at a super high level:
Streaming replication is most often used for making an exact copy of a primary database for redundancy or read scaling. Streaming replication is also sometimes called physical replication and is always the entire database, not a subset. Streaming replication only functions with the same major version of Postgres.
Logical replication is known for being good at getting pieces of data moved from place to place, not necessarily the entire database kit and caboodle. Logical replication is generally seen as being better for moving data to a reporting server, using data elsewhere, and not used in the high availability / failover scenarios.
Cue handy reference table for you visual learners:

| | Streaming replication | Logical replication |
|---|---|---|
| Scope | Entire database | Individual tables or subsets |
| Postgres versions | Same major version only | Can differ between publisher and subscriber |
| Typical use | Redundancy, read scaling, HA/failover | Reporting servers, ETL, using data elsewhere |
The architecture of logical replication is super simple. The primary data source is the publisher and the database receiving data is the subscriber. In the initial load, all data is copied from the publisher to the subscriber. Following the initial data copy, any transactions made on the publisher side are sent to the subscriber.
I’m going to give a quick sample setup of logical replication so you can review the steps to enable this on your own system.
Publisher Settings
To set up logical replication you’ll need to change
wal_level = logical
And review
max_wal_senders = 10
max_replication_slots = 10
to make sure you have enough senders and slots for logical replication.
(Crunchy Bridge uses the settings above by default. If you’re self-hosting, these settings require a restart to take effect when changed.)
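A quick way to check and change these on a self-hosted instance is with `SHOW` and `ALTER SYSTEM` (a sketch; remember the restart afterward):

```sql
-- check the current settings
SHOW wal_level;
SHOW max_wal_senders;
SHOW max_replication_slots;

-- persist the change to postgresql.auto.conf, then restart Postgres
ALTER SYSTEM SET wal_level = 'logical';
```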
Next, make sure your subscriber can connect to the publisher. A role with the REPLICATION
attribute must exist for the subscriber to connect with.
CREATE ROLE elizabeth WITH REPLICATION LOGIN PASSWORD 'my_password';
And also, make sure your subscriber machine is allowed through your publisher’s firewall and pg_hba.conf.
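If you’re self-hosting, that usually means a pg_hba.conf entry on the publisher along these lines (the subscriber IP and database name here are made-up examples; logical replication connections match a regular database name, not the `replication` keyword):

```
# allow the replication role to connect from the subscriber machine
host  training  elizabeth  203.0.113.10/32  scram-sha-256
```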
Creating a Publication
Ok, let’s say we have a really simple database with a user_id
table and a forum_posts
table we want to send out, each with primary keys.
CREATE PUBLICATION user_pub FOR TABLE user_id, forum_posts;
Or if you want something that adds and updates data but never deletes a row, that would be:
CREATE PUBLICATION user_pub FOR TABLE user_id, forum_posts WITH (publish = 'insert,update');
You can use the publish
setting to decide which inserts, updates, or deletes you want to send out.
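You can also change your mind later; for example, to start sending deletes too on the publication we created above:

```sql
ALTER PUBLICATION user_pub SET (publish = 'insert,update,delete');
```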
And we’ll also make sure our replication user can see everything.
GRANT SELECT ON TABLE user_id TO elizabeth;
GRANT SELECT ON TABLE forum_posts TO elizabeth;
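To double-check which tables a publication covers, you can query the pg_publication_tables view on the publisher:

```sql
SELECT * FROM pg_publication_tables WHERE pubname = 'user_pub';
```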
Subscriber Settings
On the subscriber end, you’ll create a subscription to the publication with a unique name, a connection string to the publisher (database, host, user, and password), and the publication name.
CREATE SUBSCRIPTION user_login_sub CONNECTION 'dbname=training host=127.0.0.1 user=elizabeth password=my_password' PUBLICATION user_pub;
And that’s it!
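Well, almost. It’s worth confirming that replication is actually flowing. A couple of quick checks:

```sql
-- on the subscriber: apply worker status and last received WAL position
SELECT subname, received_lsn, latest_end_lsn FROM pg_stat_subscription;

-- on the publisher: confirm the subscription's replication slot is active
SELECT slot_name, active FROM pg_replication_slots;
```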
If you mess up or need to change something, you can DROP
your logical replication setup. First, run DROP SUBSCRIPTION user_login_sub;
on the subscribing side, then DROP PUBLICATION user_pub;
on the publishing side.
Under the Hood
All Postgres databases create WAL (write ahead log) as part of normal operations. Logical replication decodes the changes recorded in the WAL into a stream of logical changes. The decoded changes are held in memory initially but can spill to disk depending on their size. This decoded information is sent to and processed by the subscriber. Once the subscriber acknowledges the changes, the publisher can remove the temporary files and advance the slot’s WAL pointer.
Each subscription receives changes via one replication slot. This is a special replication object that exists on the publisher side to allow it to keep track of where the subscriber is in the WAL stream. It also prevents the publisher from prematurely cleaning up its WAL files before all subscribers have acknowledged they've received the data. This means it is critical to monitor replication status and disk space usage to prevent the publisher from running out of disk space.
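A simple way to keep an eye on this is to measure, on the publisher, how much WAL each slot is holding back from cleanup:

```sql
-- on the publisher: WAL retained on behalf of each replication slot
SELECT slot_name,
       active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;
```

An inactive slot whose retained_wal keeps growing is the classic sign of a subscriber that has stopped consuming changes.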
If the schema does change on the publisher side and the subscriber cannot interpret the change, the publisher holds all WAL until the errors are fixed or skipped. As a rule of thumb, if adding columns it's generally best to first add the changes to the subscriber side then apply them to the publisher. Conversely if dropping columns, first remove from the publisher and then the subscribers. This will ensure that no data is sent that includes columns that are not present.
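As a sketch of the add-column ordering (the edited_at column here is a hypothetical example), you’d run the same DDL in both places, subscriber first:

```sql
-- 1. First, on the subscriber:
ALTER TABLE forum_posts ADD COLUMN edited_at timestamptz;

-- 2. Then, on the publisher:
ALTER TABLE forum_posts ADD COLUMN edited_at timestamptz;
```

That way the subscriber is always able to accept rows containing the new column the moment the publisher starts sending them.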
Note that unlike physical/streaming replication, writes are still possible on the subscriber databases - even to tables that are part of a subscription. This means that conflicts can occur if data is written to the subscriber that would cause logical replication to fail. The cause of the conflict will be noted in the subscriber's logs.
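On Postgres 15 and newer, one way to resolve such a conflict is to skip the offending transaction, using the LSN that the subscriber’s logs report for the failed commit (the LSN below is a placeholder, not a value to copy):

```sql
-- on the subscriber: skip the transaction that keeps failing to apply
ALTER SUBSCRIPTION user_login_sub SKIP (lsn = '0/14C0378');
```

On older versions, the usual fix is to delete or correct the conflicting rows on the subscriber so the apply worker can proceed.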
Cone Zone
Since logical replication never sends schema changes, be careful if/when you’re updating database schema elements. If incoming changes can’t be applied on the subscriber, the publisher’s replication slot will retain WAL that never gets sent, and that WAL will eventually fill up your disk. This means that if you’re using an ORM to manage database changes, a migration can change your schema out from under replication, so you’ll need to be extra careful. There are a few migration tools out there that handle the logical replication use case for each ORM, so check into those.
Checklist:
- Be careful with DDL changes or database migrations
- Monitor replication status
- Monitor disk space or make sure your host is doing that for you