Vanilla Postgres has native partitioning?
Yes! And it's really good!
We frequently get questions like: Can Postgres handle JSON? Can Postgres handle time series data? How scalable is Postgres? It turns out the answer is usually yes! Postgres, vanilla Postgres, can handle whatever your need is without you having to move to a locked-in, proprietary database. Unless you're really close to the Postgres internals and code releases, you might have missed that Postgres natively has partitioning. Our head of product, Craig, recently talked about the advantages of working with vanilla Postgres versus niche products. With that in mind, I wanted to take a step back and go through the basics of vanilla Postgres partitioning.
Crunchy customers on cloud and Kubernetes often ask about partitioning features and for our general recommendations. For existing data sets, our architects will take a deep dive into the specific use case. Generally speaking though, partitioning is going to be beneficial where data needs to scale - either to keep up with performance demands or to manage the lifecycle of data. Do you need to partition a 200GB database? Probably not. If you're building something new that's likely to scale into the terabytes, or dealing with something already in the multi-terabyte range, it might be time to take a peek at native partitioning and see if it can help.
Partitioning use cases
Data lifecycle & cost management
The main benefit of working with partitions is helping with lifecycle management of data. Big data gets really expensive, so archiving off data you no longer need can be really important to managing costs. Using partitioning to manage your data lifecycle means you'll be rolling off data to an archive frequently, letting you drop or archive tables easily once that data no longer needs to exist in a given database.
Another benefit people often look for from partitioning is query performance, especially when queries filter on the partition key or its indexes. You can really speed up query times by having queries scan individual date ranges or sets of data instead of the entire dataset.
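As a quick sketch of what that looks like (using the iot_thermostat table built later in this post), a query that filters on the partition key lets the planner skip non-matching partitions entirely:

```sql
-- With range partitioning on thetime, a filter on the partition key
-- lets the planner read only the matching daily partition(s).
EXPLAIN SELECT *
FROM iot_thermostat
WHERE thetime >= '2022-07-24' AND thetime < '2022-07-25';
-- The plan should show a scan of just the one daily partition rather
-- than every partition -- this is known as partition pruning.
```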
Types of Partitioning
There are quite a few different kinds of partitioning based on how you want to subdivide your data.
- Range partitioning is probably the most common and typically used with time or integer series data.
- List partitioning is also popular, especially if you have a database that is easily separated by some kind of common field - like location or a specific piece of data across your entire set.
- Hash partitioning is also available, but should only be used when a clearly defined partition pattern cannot be obtained.
- Composite partitioning would be combining one or more of these, like time based and list partitioning in the same dataset.
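For reference, here's a hedged sketch of what the declaration syntax looks like for each type; the table and column names are hypothetical, not from the demo below:

```sql
-- Range: most common, typically time or integer series
CREATE TABLE events (created_at timestamptz, payload text)
    PARTITION BY RANGE (created_at);

-- List: split on a discrete common field such as region
CREATE TABLE orders (region text, total numeric)
    PARTITION BY LIST (region);

-- Hash: spread rows evenly when no natural boundary exists;
-- each partition takes a modulus/remainder pair
CREATE TABLE sessions (session_id bigint, data jsonb)
    PARTITION BY HASH (session_id);
CREATE TABLE sessions_0 PARTITION OF sessions
    FOR VALUES WITH (MODULUS 4, REMAINDER 0);
```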
Sample Partitioning Setup with Native Postgres
I'm going to fake out a sample data set for an IoT thermostat. My sample for this is a table containing these fields: thetime, sensor_id, current_temperature, and thermostat_status.
This is kind of a fun little query that will generate quite a bit of data across a 10 day time period.
CREATE TABLE thermostat AS
WITH time AS (
    SELECT generate_series(now() - interval '10 days', now(), '10 minutes') thetime
),
sensors AS (
    SELECT generate_series(1,5) AS sensor_id
),
temp AS (
    SELECT thetime, sensor_id,
        72 - 10 * cos(2 * pi() * EXTRACT('hours' FROM thetime)/24)
           + random()*10 - 5 AS current_temperature
    FROM time, sensors
)
SELECT thetime, sensor_id,
    current_temperature::numeric(3,1),
    CASE WHEN current_temperature < 70 THEN 'heat'
         WHEN current_temperature > 80 THEN 'cool'
         ELSE 'off'
    END AS thermostat_status
FROM temp;
Now let's create a new table that will be partitioned:
CREATE TABLE iot_thermostat (
    thetime timestamptz,
    sensor_id int,
    current_temperature numeric(3,1),
    thermostat_status text
) PARTITION BY RANGE (thetime);
Next, create an index on thetime:
CREATE INDEX ON iot_thermostat(thetime);
Now create individual partitions
CREATE TABLE iot_thermostat07242022 PARTITION OF iot_thermostat
    FOR VALUES FROM ('2022-07-24 00:00:00') TO ('2022-07-25 00:00:00');
CREATE TABLE iot_thermostat07232022 PARTITION OF iot_thermostat
    FOR VALUES FROM ('2022-07-23 00:00:00') TO ('2022-07-24 00:00:00');
-- and so on
Insert data into your partitions
Now we'll move data from our original thermostat dataset into iot_thermostat, and the data will automatically go into the correct partitions:
INSERT INTO iot_thermostat SELECT * FROM thermostat;
You only need to insert the data once; Postgres takes care of routing each row to the correct partition for you.
Quick check on one of the partitions to make sure you got it all right:
SELECT * FROM iot_thermostat07242022;
Ok, so let's say that we only care about data from the last 10 days. Tomorrow we want to put data from iot_thermostat07142022 in a different table and archive it off. This is done with a detach:
ALTER TABLE iot_thermostat DETACH PARTITION iot_thermostat07142022;
And that's now a standalone table.
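Once detached, the old partition is just an ordinary table, so you can archive it however you like. One hypothetical option (the path is made up, and server-side COPY requires superuser or pg_write_server_files privileges) is to copy it out to a file and drop it:

```sql
-- Export the detached partition to a CSV file, then drop it.
COPY iot_thermostat07142022 TO '/archive/iot_thermostat07142022.csv' CSV HEADER;
DROP TABLE iot_thermostat07142022;
```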
We need to make a new one for tomorrow as well:
CREATE TABLE iot_thermostat07262022 PARTITION OF iot_thermostat
    FOR VALUES FROM ('2022-07-26 00:00:00') TO ('2022-07-27 00:00:00');
Obviously, if you're doing this daily, you'll script these in a cron job somewhere so they happen automatically.
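If you'd rather keep the scheduling logic in the database, one sketch is a small plpgsql helper (the function name is hypothetical) that a nightly cron job can call to create tomorrow's partition, following the naming convention used above:

```sql
-- Creates tomorrow's daily partition of iot_thermostat, named with
-- the MMDDYYYY convention used in this post (e.g. iot_thermostat07262022).
CREATE OR REPLACE FUNCTION create_tomorrow_partition() RETURNS void AS $$
DECLARE
    d date := current_date + 1;
BEGIN
    EXECUTE format(
        'CREATE TABLE IF NOT EXISTS iot_thermostat%s PARTITION OF iot_thermostat
         FOR VALUES FROM (%L) TO (%L)',
        to_char(d, 'MMDDYYYY'), d, d + 1);
END;
$$ LANGUAGE plpgsql;

-- Then cron just runs: SELECT create_tomorrow_partition();
```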
Creating Partitions with pg_partman
If you've been around the Postgres world for very long, you're likely to have come across the pg_partman extension, written by my colleague Keith Fiske. Pg_partman existed before native partitioning was introduced in Postgres 10 and was originally based on the concept of triggers for managing data flow to the correct partitions. Native partitioning doesn't use triggers, and is generally thought to be much more performant. Today pg_partman is mostly used for the management and creation of partitions, or by users on older versions of Postgres. It can also be used on newer versions of Postgres for easier setup of tables and automatic management of partitions. I'm going to show a quick overview of how to set up tables and partitioning like I did in the demo above, using pg_partman.
If you're working with a cloud system, you may just need to run CREATE SCHEMA partman; and CREATE EXTENSION pg_partman SCHEMA partman; and update the shared_preload_libraries setting. If you're working on a self-hosted system, you'll download pg_partman from GitHub and install it.
Create parent partitioned table (this is exactly the same as the first example):
CREATE TABLE iot_thermostat_partman (
    thetime timestamptz,
    sensor_id int,
    current_temperature numeric(3,1),
    thermostat_status text
) PARTITION BY RANGE (thetime);
Create the partitions
This is done by calling the create_parent function via partman. By default this creates 4 partitions ahead; we'll also create 10 days of history, giving a total of 14 partitions.
If you don't create a template table, pg_partman will create one for you at this point. For more information on template tables, see the documentation about child property inheritance.
The cool thing about create_parent is that it's a one-time call of a function, rather than calling a function every day like in my first example. After you've defined the policies, the background worker you set up during installation just takes care of this as part of the underlying Postgres processes. For those in the back, let me say that again: you just call the function once and partitions are automatically created continuously based on the policies you define:
SELECT partman.create_parent(
    'public.iot_thermostat_partman',
    'thetime',
    'native',
    'daily',
    p_start_partition := (now() - interval '10 days')::date::text
);
Since we only want the most recent 10 days of partitions, we can set the retention in this table's configuration like so:
UPDATE partman.part_config SET retention = '10 days' WHERE parent_table = 'public.iot_thermostat_partman';
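Worth noting: retention (and the creation of new future partitions) is applied when pg_partman's maintenance runs. The background worker handles this on a schedule, but you can also trigger it by hand:

```sql
-- Run pg_partman maintenance immediately for all configured tables,
-- applying retention and creating any needed future partitions.
SELECT partman.run_maintenance();
```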
View the created partitions:
postgres=# \d+ iot_thermostat_partman
                                 Partitioned table "public.iot_thermostat_partman"
       Column        |           Type           | Collation | Nullable | Default | Storage  | Stats target | Description
---------------------+--------------------------+-----------+----------+---------+----------+--------------+-------------
 thetime             | timestamp with time zone |           |          |         | plain    |              |
 sensor_id           | integer                  |           |          |         | plain    |              |
 current_temperature | numeric(3,1)             |           |          |         | main     |              |
 thermostat_status   | text                     |           |          |         | extended |              |
Partition key: RANGE (thetime)
Partitions: iot_thermostat_partman_p2022_07_21 FOR VALUES FROM ('2022-07-21 00:00:00+00') TO ('2022-07-22 00:00:00+00'),
            iot_thermostat_partman_p2022_07_22 FOR VALUES FROM ('2022-07-22 00:00:00+00') TO ('2022-07-23 00:00:00+00'),
            iot_thermostat_partman_p2022_07_23 FOR VALUES FROM ('2022-07-23 00:00:00+00') TO ('2022-07-24 00:00:00+00'),
            iot_thermostat_partman_p2022_07_24 FOR VALUES FROM ('2022-07-24 00:00:00+00') TO ('2022-07-25 00:00:00+00'),
            iot_thermostat_partman_p2022_07_25 FOR VALUES FROM ('2022-07-25 00:00:00+00') TO ('2022-07-26 00:00:00+00'),
            iot_thermostat_partman_p2022_07_26 FOR VALUES FROM ('2022-07-26 00:00:00+00') TO ('2022-07-27 00:00:00+00'),
            iot_thermostat_partman_p2022_07_27 FOR VALUES FROM ('2022-07-27 00:00:00+00') TO ('2022-07-28 00:00:00+00'),
            iot_thermostat_partman_p2022_07_28 FOR VALUES FROM ('2022-07-28 00:00:00+00') TO ('2022-07-29 00:00:00+00'),
            iot_thermostat_partman_p2022_07_29 FOR VALUES FROM ('2022-07-29 00:00:00+00') TO ('2022-07-30 00:00:00+00'),
            iot_thermostat_partman_p2022_07_30 FOR VALUES FROM ('2022-07-30 00:00:00+00') TO ('2022-07-31 00:00:00+00'),
            iot_thermostat_partman_p2022_07_31 FOR VALUES FROM ('2022-07-31 00:00:00+00') TO ('2022-08-01 00:00:00+00'),
            iot_thermostat_partman_p2022_08_01 FOR VALUES FROM ('2022-08-01 00:00:00+00') TO ('2022-08-02 00:00:00+00'),
            iot_thermostat_partman_p2022_08_02 FOR VALUES FROM ('2022-08-02 00:00:00+00') TO ('2022-08-03 00:00:00+00'),
            iot_thermostat_partman_p2022_08_03 FOR VALUES FROM ('2022-08-03 00:00:00+00') TO ('2022-08-04 00:00:00+00'),
            iot_thermostat_partman_p2022_08_04 FOR VALUES FROM ('2022-08-04 00:00:00+00') TO ('2022-08-05 00:00:00+00'),
            iot_thermostat_partman_default DEFAULT
Beyond the Basics
Partitioning has a lot of caveats beyond the basic setup I've just shown, so here are some topics and food for thought for future research on your own specific use case:
Subpartitions: For very large data sets, you can actually do nested levels of partitioning with sub-partitioning. This is generally not needed in most cases.
Primary Keys/Unique Indexes: Postgres does not have the concept of an index that covers multiple tables so there can be limitations on primary key/unique index usage in partitioning. In general, the only unique keys you can use are ones that include the partition keys (which could leave out time series and other data types). You may want to look at the template table creation in pg_partman that has some features for handling this.
Null values: Generally, if you're using partitioning, you can't have null values in the field that you're partitioning on, so some thought might need to go into data management and application logic.
Constraints: Pg_partman has some expanded features for handling additional constraints outside the partitioning key.
ORMs: The amazing thing about using native partitioning is that it works out of the box with quite a few ORMs, gems, and other tools. Do some testing and research for your specific application stack.
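As one concrete illustration of the unique-index caveat above, here's a sketch using the iot_thermostat table from earlier (assuming (thetime, sensor_id) is in fact unique in your data):

```sql
-- On a partitioned table, any primary key or unique index must include
-- the partition key. This works because thetime is part of the key:
ALTER TABLE iot_thermostat ADD PRIMARY KEY (thetime, sensor_id);

-- This would fail, since sensor_id alone doesn't cover the partition key:
-- ALTER TABLE iot_thermostat ADD PRIMARY KEY (sensor_id);
```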
Co-authored with Keith Fiske
July 25, 2022