Examining Postgres Upgrades with pg_upgrade
Postgres is an amazing database system, but it does come with a five-year life cycle. This means you need to perform a major upgrade of it at least every five years. Luckily, Postgres ships with the pg_upgrade program, which enables a quick and easy migration from one major version of Postgres to another.
Let's work through an example of how to upgrade - in this case, we will go from Postgres 12 to Postgres 16. You should always aim to go to the highest version possible. Check postgresql.org to see what the current version is. If you get stuck, the official documentation has a lot of details.
The pg_upgrade program works by populating an empty Postgres cluster (the new version) based on the data in your existing Postgres cluster. It copies the system catalog information, and then copies or hard links the data files. Even on a large multi-terabyte system, this process can take less than a minute if you use the --link option! Let's look at the basic steps to run pg_upgrade.
Setup
Before starting, test your application in a development environment against the new version. Read the release notes - you will need to examine the "dot zero" release notes for every major version you are crossing. In our example of moving from 12 to 16, we would want to examine the release notes for Postgres 13.0, 14.0, 15.0, 16.0, and the latest version in the 16.x series. This may actually be the longest step in the process!
The next step is to install the new version. We not only need the new executable and libraries, but must create a new cluster as well, which will become our new instance. Installation depends on your OS, but the general idea is to use the same packages you already have, with a higher version. Here's a recipe to help you do that on a system that supports yum:
# yum list installed 'postgresql12*' | sed -e '1d; s/postgresql12/yum install postgresql16/; s/\..*//'
yum install postgresql16
yum install postgresql16-contrib
yum install postgresql16-libs
yum install postgresql16-server
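If you are on a Debian or Ubuntu system using the PGDG apt repository instead, the equivalent step is a sketch along these lines (package names may vary with your repository setup):

## Debian/Ubuntu with the PGDG apt repository (illustrative)
apt install postgresql-16 postgresql-client-16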
When the new package versions are in place, we will need to create a new cluster. For that, we will use the initdb command. On some operating systems, it may not even be in your path. On Red Hat inspired systems, check /usr/pgsql-16/bin/initdb. Once found, run it (as the postgres user) with a --version argument to make sure it is the target version you are going for (in our example, 16). Then run it with your real arguments to create a new data directory. Take care to use the same arguments used to create the data directory for the old version of Postgres. In particular, make sure that you enable page-level checksums in this new cluster if the old cluster did as well. Because newer versions of the various Postgres executables can handle older versions of Postgres, we can add this location to our path:
export PATH=$PATH:/usr/pgsql-16/bin/
initdb --version
initdb --data-checksums -D /var/lib/pgsql/16
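As a quick sanity check (assuming the data directory path above), the new directory should report the expected major version:

cat /var/lib/pgsql/16/PG_VERSION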
There is no need to start up the new cluster - pg_upgrade will do that itself as part of its process. If you have any custom extensions, or things not handled by your packaging system, install them now as needed. Do not worry yet about the configuration files such as postgresql.conf; those come later.
Verification
The next step is to run the pg_upgrade executable with the --check argument. This ensures everything looks sane, without actually performing the upgrade. The pg_upgrade program always needs to know the location of four things: the old executables, the new executables, the old data directory, and the new data directory. While you can enter those as command-line arguments, it is cleaner to use environment variables. Let's set them up now and run the check:
export PGBINOLD=/usr/pgsql-12/bin
export PGBINNEW=/usr/pgsql-16/bin
export PGDATAOLD=/var/lib/pgsql/12
export PGDATANEW=/var/lib/pgsql/16
pg_upgrade --check
It will give a detailed report of any incompatibilities found between the two Postgres clusters. What we want to see is this message at the bottom:
*Clusters are compatible*
If you do not see that message, you will need to fix things up. A common reason for failure is checksums, giving a message like this:
old cluster does not use data checksums but the new one does
If that happens, remove your new data directory, and re-run the initdb command, this time WITHOUT the --data-checksums argument. You can always enable checksums later on.
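For example (a sketch, assuming the same paths as above):

rm -rf /var/lib/pgsql/16
initdb -D /var/lib/pgsql/16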
Backup
This is a great time to take a full backup, so that if something goes wrong, we have a way to get back to the old version. Kick off a full backup via:
pgbackrest backup --stanza=mystanza --type=full
Don't have pgBackRest? It's okay to use another system that does complete, safe, performant backups. If you don't have a backup system, stop your upgrade and get that working first. It is more important than your major upgrade!
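Once the backup finishes, one quick way to confirm it registered (assuming the same stanza name as above) is:

pgbackrest info --stanza=mystanza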
Prepare the replicas
If you are using streaming replication, those replicas need upgrading too. The bad news is that we cannot run pg_upgrade on the replica servers. The good news is that the process to upgrade them, while tricky, is fast. Their upgrade happens after the primary, but this is an excellent time to prepare things. For each replica, you need to install the new Postgres software, as we did above, but do not run initdb. Copy all the configuration files that exist inside the data directory on each replica to a safe place outside of that directory.
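A minimal sketch of that copy step, assuming a Red Hat style layout where the configuration files live inside the data directory:

## On each replica; paths are illustrative
mkdir -p /var/lib/pgsql/replica-conf-backup
cp /var/lib/pgsql/12/*.conf /var/lib/pgsql/replica-conf-backup/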
Upgrade
Once the backup and pg_upgrade --check have completed, we are ready to do the actual upgrade. It is very important to make sure you use the --link argument to pg_upgrade. This prevents having to copy all the actual data files from the old cluster to the new one. Instead, pg_upgrade creates hard links between the old and new data files (not the more common symbolic link). Doing this tells the operating system that each data file has two completely valid paths to reach it, and it is extraordinarily fast compared to copying the file contents. The command looks like this:
## This assumes the ENVs from above were exported
pg_upgrade --link
After you run this, and it starts up the new cluster, there is no easy way to go back to the old version. Hence the backup step above. The good news is that the pg_upgrade program is very battle-tested, so you are unlikely to need to restore your backup. The output of the pg_upgrade command will look similar to this:
Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade.
Once you start the new server, consider running:
/home/greg/pg/16/bin/vacuumdb --all --analyze-in-stages
Running this script will delete the old cluster's data files:
./delete_old_cluster.sh
Note that as of Postgres 14, pg_upgrade no longer creates a shell script to help you rebuild the statistics, but instead suggests doing it via the vacuumdb program as vacuumdb --all --analyze-in-stages. This will make three runs through each database: the first with a statistics target of 1 (very fast, but very rough statistics), then a target of 10 (a little slower, but much better stats), and a final run with the value of default_statistics_target (which is 100 unless you have changed it).
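If you are curious whether --link really did reuse the old data files rather than copying them, you can look for files in the new cluster with more than one hard link; any such file is shared with the old data directory (a sketch, assuming the layout used in this example):

## Hard-linked files have a link count greater than 1
find /var/lib/pgsql/16/base -type f -links +1 | head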
Copy and adjust configuration files
Now that the new cluster is in place, there are a few more steps before we can connect applications to it.
The new cluster will have default postgresql.conf, pg_hba.conf, and pg_ident.conf files, which are not what you want. Copy over the pg_hba.conf and pg_ident.conf files from your old cluster. They will most likely work without any modification, as the format of those files very rarely changes. Next, you will need to combine your old postgresql.conf file with the new one. If you are in the habit of making your changes at the bottom of the postgresql.conf file, you can easily copy and paste them over. Either way, you should walk through and make sure all the settings still apply to the new cluster. Do not forget about postgresql.auto.conf or any files pulled in by the include directives (include, include_if_exists, and include_dir).
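Here is a rough sketch of that copy and comparison, assuming the directory layout used throughout this example:

## Paths are illustrative; review each file rather than copying blindly
cp /var/lib/pgsql/12/pg_hba.conf /var/lib/pgsql/12/pg_ident.conf /var/lib/pgsql/16/
diff /var/lib/pgsql/12/postgresql.conf /var/lib/pgsql/16/postgresql.conf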
Turning on checksums
If your new cluster does not have page-level checksums enabled, this is a great time to enable them. Postgres comes with a program, pg_checksums, to convert your cluster to use page-level checksums. It will be in the same directory as the initdb we used above, and usage is:
pg_checksums -D $PGDATANEW --enable --verbose
This program requires rewriting every file in your cluster, so it may take a while, but it is worth the wait. This rewriting will make the next step (syncing your replicas) take longer as well; you may want to instead recreate the replicas from scratch.
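Keep in mind that pg_checksums can only run while the cluster is shut down. Once it finishes, you can verify the result with pg_controldata (using the same environment variable as before):

pg_controldata -D $PGDATANEW | grep -i checksum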
Syncing your replicas
For this part, I am going to refer you to step 11 ("Upgrade streaming replication and log-shipping standby servers") of the official documentation. Follow that rsync recipe exactly. It is possible a better way will exist in the future, so I'd rather leave a link here than copy that large rsync command and print it here. :)
The rsync process is rapid, as it only copies files that have changed. It compares the new and old directories on the primary, and then against the replica. As before, the speed of this process is determined by the number of files. You can upgrade your replicas concurrently.
Startup and generate statistics
Now that the replicas are ready, you can start up the primary. If you are not using pg_ctl but something like systemd, you may need to adjust some config files or scripts to point to the new data directory. If the cluster starts without error, congratulations! You almost have a working cluster. But Postgres cannot generate good query plans without having good statistics about the tables. As of this writing, Postgres does NOT copy the database statistics from the old cluster to the new one. That will change at some point, but until it does, you will need to generate the statistics yourself. As mentioned in the pg_upgrade output we saw above, you can do this in stages to get usable statistics as quickly as possible with:
vacuumdb --all --analyze-in-stages
If you do not need to do it in stages, you can simply run:
vacuumdb --all --analyze-only
Upgrading extensions
Postgres extensions have versions and may need their own upgrades. The way to do this is to log into the new database and issue:
ALTER EXTENSION foobar UPDATE;
If pg_upgrade is able to detect the extensions, you will see a message like this:
Your installation contains extensions that should be updated
with the ALTER EXTENSION command. The file
update_extensions.sql
when executed by psql by the database superuser will update
these extensions.
That file should contain commands, but you can also generate a list of extensions that may need upgrading by running:
SELECT * FROM pg_available_extensions WHERE default_version <> installed_version;
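If pg_upgrade generated the update_extensions.sql file mentioned above, one way to apply it (a sketch, assuming the file sits in the directory where pg_upgrade was run and you connect as a superuser) is:

psql -X -f update_extensions.sql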
A needed update will complete with no extra output, while an unneeded one will complain:
alter extension hstore update;
ALTER EXTENSION
alter extension hstore update;
NOTICE: version "1.8" of extension "hstore" is already installed
ALTER EXTENSION
Upgrade monitoring and tools
Do not forget to update everything that interacts with your database, and tell it that you are on a new major version. You also may have a new data directory, which certain tools need to know about. Remember that a major upgrade means a new cluster, with a new system identifier, as well as fresh stats.
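For example, if a monitoring tool tracks the cluster by its system identifier, you can fetch the new value with something like:

pg_controldata -D $PGDATANEW | grep 'system identifier'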
Run another backup
This is a great time to run another backup, this time against the new version.
pgbackrest backup --stanza=mystanza --type=full
Remove the old cluster
After everything is running well for a while, you can remove the old cluster. At the end of its run, pg_upgrade creates a small shell script to help with this. This script is very simple and does a "rm -fr $OLDDATADIR". Better to run that script, no matter how simple it seems, than to risk a slip of the fingers and remove the wrong data directory!
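For instance, from the directory where pg_upgrade was run:

## Inspect the script before running it
cat ./delete_old_cluster.sh
./delete_old_cluster.sh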
How long does pg_upgrade take?
Alright, I hear you ask, we know HOW to do it, but HOW LONG will it take? The answer, if using the recommended --link mode, depends on the number of files in your data directory. This, in turn, depends on how big the tables are (larger tables mean multiple files, as Postgres breaks tables on disk into 1GB files), whether each main table requires TOAST tables, and how many indexes there are. But for a rough estimate, the number of tables will suffice. I ran some testing for my Postgres 12 to Postgres 16 upgrade and here are the results:
Number of tables | Total upgrade time (seconds)
---|---
128 | 7
1,000 | 12
10,000 | 64
20,000 | 145 (2:25)
40,000 | 267 (4:27)
80,000 | 507 (8:27)
100,000 | 651 (10:51)
150,000 | 989 (16:43)
200,000 | 1,328 (22:08)
Using some simple math, we find that for every ten thousand tables we add, the upgrade process takes another minute. Your experience will be different depending on your OS and hardware, but it is simple enough to test and get a ballpark number. Here's what I used:
## On the old version:
psql -c 'do $$ begin for x in 1..10000 loop execute format($i$create table a%s()$i$, x); end loop; end; $$;'
psql -c 'do $$ begin for x in 1..10000 loop execute format($i$create table ab%s()$i$, x); end loop; end; $$;'
## Then time the pg_upgrade
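## (illustrative; assumes the PGBIN/PGDATA environment variables from the Verification step are still exported)
time pg_upgrade --link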
Going over 10,000 in a single loop leads to issues, even with max_locks_per_transaction cranked up, so duplicate the line as needed. You can test the upgrade with 10,000 tables, then with 20,000 tables, to generate a rough guide of extra time per ten thousand tables - and can extrapolate from there. You can grab the total number of relations like so:
SELECT count(*) FROM pg_class;
Sum up as needed for all your databases. You can also check the number of files from the command line by counting everything under the base directory in the data directory of your current version:
find $OLDDATADIR/base -type f | wc -l
Prepare for your next upgrade
The final order of business - prepare for your next upgrade! A new major version of Postgres gets released every year. So upgrading your major version each year, or every other year, is a good solution. Roughly every three months, a new minor version gets released. You want to put these in place right away, as they only contain critical bug fixes and are much easier to install than the process above!
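Minor releases do not change the on-disk format, so applying one is usually just a package update and a restart; on a yum-based system that might look like this (a sketch, adjust package and service names to your setup):

yum update 'postgresql16*'
systemctl restart postgresql-16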
Here's a quick refresher on the release timings for Postgres 14 through 17: they came out in the fall of 2021, 2022, 2023, and 2024 respectively, and each major version is supported for roughly five years from its release.
Product notes
- If you're working with Crunchy Postgres for Kubernetes, see the major version upgrade documentation which also uses pg_upgrade.
- For Crunchy Bridge customers, major version upgrades are built in and managed for you.