Create PostgreSQL Clusters

Scaling PostgreSQL on FreeBSD systems

Posted on January 27, 2021

Creating a Master-Slave cluster of PostgreSQL servers, with hot standbys, on FreeBSD machines to scale application server demands. This guide assumes you are turning an existing database into a cluster, but it works just as well when that's not the case. It was written with PostgreSQL 9.5 in mind, but versions below 12 should follow the same steps. If you're looking for steps for 12 or higher you should probably look elsewhere: recovery.conf has been removed in those versions, and its settings have moved into postgresql.conf.

Setting up the Master:

We'll start off by setting up the Master server in our cluster. You'll want to start by creating a user with replication privileges:

  1. Connect to the postgres database of your server.
  2. Run the following command: CREATE USER replicator REPLICATION PASSWORD '<your secret password here>';
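In a psql session connected to the postgres database on the Master, the two steps look like this (pick your own password; the one shown is a placeholder):

```
postgres=# CREATE USER replicator REPLICATION PASSWORD 'your-secret-password';
CREATE ROLE
```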

Next we want to edit our postgresql.conf file which is stored in the data folder. On FreeBSD systems, this should be at /usr/local/pgsql/data/. The following edits should be made:

  1. Add your local IP address as a listening address: listen_addresses = 'localhost, [Master-local-IP]'
  2. Edit wal_level to be wal_level = hot_standby
  3. Edit archive_mode to be archive_mode = on
  4. Edit archive_command to be archive_command = 'test ! -f /usr/local/pgsql/data/archivedir/%f && cp %p /usr/local/pgsql/data/archivedir/%f' (assuming you are on a FreeBSD system; if not, change the leading directories to your data folder)
  5. Edit max_wal_senders to be max_wal_senders = 3
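Put together, the replication-related lines of postgresql.conf end up looking something like this (the IP address is a placeholder; also make sure the archivedir folder exists and is writable by the postgres user before restarting, or archive_command will fail):

```
listen_addresses = 'localhost, 192.168.1.10'    # replace with your Master's local IP
wal_level = hot_standby
archive_mode = on
archive_command = 'test ! -f /usr/local/pgsql/data/archivedir/%f && cp %p /usr/local/pgsql/data/archivedir/%f'
max_wal_senders = 3
```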

Next we need to edit our pg_hba.conf file (which is also stored in the same data folder). Here you just want to add a new entry to the # Allow replication connections section:

host replication replicator [Slave-IP]/32 md5

Where 'replicator' (in the third column) is our user that we made earlier with replication privileges.

You can restart the Master server and everything should be alright! Moving on to the Slave server.

Setting up the Slave:

First things first, stop PostgreSQL on the Slave by using service postgresql stop

Next you'll need to run a command called pg_basebackup on the Slave to clone the Master's data directory. [backup_directory] should be an empty directory that will become the Slave's data folder (e.g. /usr/local/pgsql/data); -Fp writes a plain data-directory layout, -Xs streams the WAL generated during the backup, and -P shows progress. Here are the arguments that you need to pass in:

pg_basebackup -h [Master-local-IP] -p 5432 -U replicator -D [backup_directory] -Fp -Xs -P

Edit the postgresql.conf file (in the Slave's data folder) and set hot_standby = on

Create a recovery.conf file in the Slave's data folder; a template (recovery.conf.sample) ships with PostgreSQL. You should edit the following lines:

  • standby_mode = on
  • primary_conninfo = 'host=[Master-local-IP] port=5432 user=replicator password=[password]'
  • trigger_file = '/tmp/pgsql-trigger'
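A filled-in recovery.conf ends up looking something like this; the host and password are placeholders matching the replication user created on the Master:

```
standby_mode = on
primary_conninfo = 'host=192.168.1.10 port=5432 user=replicator password=your-secret-password'
trigger_file = '/tmp/pgsql-trigger'
```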

The trigger_file line tells the standby to watch for /tmp/pgsql-trigger. When you want to fail over, create that file and write smart in it for a smart failover, or fast for a fast failover; don't create it before then, or the standby will promote itself as soon as it starts. You can read about the differences in the PostgreSQL documentation, but a short summary:

Smart Failover: In smart failover, the server is brought up after applying all WAL files available in the archive. This results in zero data loss, even if the standby server has fallen behind, but if there is a lot of unapplied WAL it can be a long time before the standby server becomes ready.

Fast Failover: In fast failover, the server is brought up immediately. Any WAL files in the archive that have not yet been applied will be ignored, and all transactions in those files are lost.
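When the time comes, triggering the failover is a one-liner on the Slave; a sketch, assuming the trigger_file path set above:

```shell
# Promote the standby: create the trigger file named in recovery.conf.
# Write 'smart' to replay all archived WAL first, or 'fast' to come up
# immediately (see the summaries above).
echo "fast" > /tmp/pgsql-trigger
```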

Now you can start PostgreSQL on the Slave again using service postgresql start


Your cluster should be working now! Start your testing by creating a new table on your Master server and checking that it shows up on the Slave with \d.
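A quick end-to-end check might look like this (the table name is just an example; note that a hot standby is read-only, so writes on the Slave will fail):

```
master$ psql -d postgres -c "CREATE TABLE replication_test (id integer);"
slave$  psql -d postgres -c "\d"
```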

If it does, you're set! You can play around with adding some rows on the Master and watching them replicate. Next steps would be to repeat the Slave process for each Slave instance in your cluster and set up the cluster for high availability.

Perhaps the next article? Let's see if I can figure it out.