Managing and migrating PostgreSQL on-premises with Zercurity.

By default the dockerised version of Zercurity provides a self-contained PostgreSQL container as its primary database. Once your deployment starts to grow, the initial configuration of PostgreSQL will start to become a bottleneck. This post goes into detail on configuring PostgreSQL to give your database more resources, as well as on using either an external self-hosted or managed database, including the replication or migration of your data.

Configuring PostgreSQL

Configuring the Zercurity PostgreSQL Docker container can be done in one of two ways: either by updating the environment variables parameter in the docker-compose file like so, or within the production.env file, which will persist across Zercurity updates. Zercurity inherits the bitnami/postgresql container.
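For example, a minimal sketch of the environment section; the service name and values here are illustrative rather than Zercurity’s exact defaults:

    services:
      postgres:
        image: bitnami/postgresql:latest
        environment:
          # Bitnami tuning parameters - size these to your own workload
          - POSTGRESQL_MAX_CONNECTIONS=200
          - POSTGRESQL_SHARED_PRELOAD_LIBRARIES=pg_stat_statements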

Any bitnami parameters can be added directly to the production.env without changing the docker-compose.yml.
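For instance, the same sort of settings expressed in production.env (values are illustrative):

    # /var/lib/zercurity/production.env
    POSTGRESQL_MAX_CONNECTIONS=200
    POSTGRESQL_SHARED_PRELOAD_LIBRARIES=pg_stat_statements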

The second way is to modify, or provide your own, bespoke PostgreSQL configuration file. If you’ve not configured PostgreSQL before, there are a few websites that’ll help you construct a configuration file automatically.

Once you’ve got your new configuration file, we need to let the docker-compose file know about it in order to override the one used by Zercurity.

By default the PostgreSQL container is configured so that any configuration files placed or mounted within the /etc/postgresql/conf.d/ path inside the container override its default settings. This configuration file will be loaded when the container is restarted. You can mount your configuration file like so:
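In the sketch below, the local file name my-postgresql.conf is a placeholder for your own configuration file:

    services:
      postgres:
        volumes:
          # Files mounted into conf.d are picked up on the next restart
          - ./my-postgresql.conf:/etc/postgresql/conf.d/my-postgresql.conf:ro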

This will also override the default zercurity.conf, which also resides within the conf.d directory.

Using managed or external PostgreSQL services

If you need to use an external managed database for Zercurity, you can update the /var/lib/zercurity/production.env file with the details of your new database. Zercurity will automatically initialize the new database when docker-compose restarts. Please see the section below on migration options if you’re moving from an existing database.
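As a rough sketch, the database settings will look something like the following; the variable names and values shown are illustrative, so mirror whichever database settings already exist in your production.env:

    # /var/lib/zercurity/production.env
    # Illustrative names and values - match the existing entries in your file
    POSTGRES_HOST=my-database.example.com
    POSTGRES_PORT=5432
    POSTGRES_DB=zercurity
    POSTGRES_USER=zercurity
    POSTGRES_PASSWORD=super-secret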

Your managed database will require the following extensions to be installed in order for Zercurity to correctly set up the database; otherwise, the migrations container will fail. Its logs can be viewed with the command docker logs -f zercurity_prod_migrations.
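As a quick sanity check, you can list what’s available on the managed server and create anything that’s missing from psql (ip4r is shown as an example; repeat for each required extension):

    -- Run against the Zercurity database on the managed server
    SELECT name, default_version, installed_version
      FROM pg_available_extensions ORDER BY name;
    -- Create a required extension, e.g. ip4r
    CREATE EXTENSION IF NOT EXISTS ip4r;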

AWS RDS supports all the listed extensions. If you’re using Debian, most of the extensions will already be installed. The only one you’ll be missing is the ip4r extension, which you can install like so:
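A sketch for Debian, assuming PostgreSQL 13 and a database named zercurity (adjust both to your install):

    # Install the ip4r packages for your PostgreSQL major version
    sudo apt-get install postgresql-13-ip4r
    # Then enable the extension on the Zercurity database
    sudo -u postgres psql -d zercurity -c "CREATE EXTENSION IF NOT EXISTS ip4r;"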

Migrating Zercurity to a new PostgreSQL database

There are three primary ways to migrate Zercurity to another database: using the pg_dump and pg_restore commands to copy the database from one server to another, or, if you’re performing a live migration, either replicating the database or using a tool like Bucardo.

Migrating using pg_dump

The simplest way to migrate Zercurity to a new database is using pg_dump. This will take a snapshot of the database and allow you to restore it to your new database server. This can take up to several hours to complete depending on the size of your database.

This is also used for major PostgreSQL upgrades. Given the size of the database, the command below makes use of pg_dump’s jobs feature to dump tables in parallel using the directory format (-Fd). The number of jobs is specified by the -j flag. The dump_dir will be the output path for your dump.
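Something along these lines, with the host, user and database name as placeholders:

    # Dump the database in directory format (-Fd) using 4 parallel jobs (-j)
    pg_dump -h localhost -U postgres -Fd -j 4 -f dump_dir zercurity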

Once completed you can then restore it like so:
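Again, the hosts and credentials below are placeholders:

    # Create the target database on the new server if it doesn't exist yet
    createdb -h new-db.example.com -U postgres zercurity
    # Restore the directory-format dump using parallel jobs
    pg_restore -h new-db.example.com -U postgres -d zercurity -j 4 dump_dir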

Once the restore process has completed, simply update the production.env file with the new database information and Zercurity will use the new database server. You can then remove the PostgreSQL container from the docker-compose configuration file.

Grafana and Prometheus should continue to work, as they’ll update themselves using the new environment variables.

Migrating using replication

To avoid hours of downtime you can make use of PostgreSQL’s replication functionality to replicate the data from your primary database server to a secondary one.

Once the two databases are more or less in sync, you can point Zercurity at the new database server, causing new database changes to be sent to the new server rather than being replicated across. To accomplish this there are several configuration changes that need to be made to both our existing server and the new database server.

As in the section above, we need to add some additional configuration parameters to the docker-compose.yml or production.env.
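A sketch of the primary-side replication settings, using the Bitnami variable names (the user and password values are examples):

    # /var/lib/zercurity/production.env (or the postgres service's
    # environment section in docker-compose.yml)
    POSTGRESQL_REPLICATION_MODE=master
    POSTGRESQL_REPLICATION_USER=repl_user
    POSTGRESQL_REPLICATION_PASSWORD=repl_password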

If you’re using another container system you can make use of the bitnami/postgresql:latest container to create a replica:
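For example (the network name, credentials and hostnames are placeholders):

    docker run -d --name postgresql_replica_1 \
      --network zercurity_prod_default \
      -e POSTGRESQL_REPLICATION_MODE=slave \
      -e POSTGRESQL_REPLICATION_USER=repl_user \
      -e POSTGRESQL_REPLICATION_PASSWORD=repl_password \
      -e POSTGRESQL_MASTER_HOST=postgres \
      -e POSTGRESQL_MASTER_PORT_NUMBER=5432 \
      bitnami/postgresql:latest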

If you’re simply looking to create a read replica or a hot standby, make sure the networks parameter is set to provide “local” connectivity. Otherwise, specify the external hostname of the Zercurity container in the MASTER_HOST parameter.

For an external host, go ahead and install PostgreSQL. Once done, stop the service and remove the default configuration files, as we’re going to use pg_basebackup to copy across our Zercurity configuration.
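For example, on Debian (the data directory below assumes PostgreSQL 13; adjust the version to match your install):

    sudo apt-get install postgresql
    sudo systemctl stop postgresql
    # Clear the default cluster - pg_basebackup will repopulate it
    sudo rm -rf /var/lib/postgresql/13/main/*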

Then let’s sync across our database and its configuration to replace the deleted files.
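A sketch using pg_basebackup; the primary’s hostname, the replication user and the data directory are placeholders:

    sudo -u postgres pg_basebackup \
      -h primary.example.com -U repl_user \
      -D /var/lib/postgresql/13/main \
      -Fp -Xs -P -R
    # -Fp plain format, -Xs stream WAL, -P show progress,
    # -R write the standby/recovery configuration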

You can check that the directory we cleared earlier now contains our Zercurity configuration.
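For example:

    # The cleared data directory should now contain the copied cluster
    sudo ls -l /var/lib/postgresql/13/main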

If all looks good, start the replica Postgres instance back up.
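For example, on a systemd host:

    sudo systemctl start postgresql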

Back on our primary Docker instance, let’s check the replication status using the command below. If our replica PostgreSQL server is connecting correctly, you’ll see the hostname of the replica (postgresql_replica_1, as used above) in the output:
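A sketch from the Docker host; the container name and admin user are assumptions, so adjust them to your deployment:

    docker exec -it zercurity_prod_postgres_1 psql -U postgres -c \
      "SELECT client_hostname, client_addr, state FROM pg_stat_replication;"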

You can also test the status of the replica node with the following command:
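For example, on the replica itself:

    # Returns 't' while the node is running as a standby
    sudo -u postgres psql -c "SELECT pg_is_in_recovery();"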

As the replica database continues to replicate data from the primary host, there will come a point, at the end of the day or during a window when fewer requests are being sent to the backend, where you wish to switch over from the primary database. To complete the migration, simply update the production.env with the new database details and remove the postgres service from the docker-compose.yml. Both of these files can be found in /var/lib/zercurity/. You can then restart the Zercurity service.
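A hedged sketch of the switchover steps; note that on PostgreSQL 12+ the replica needs to be promoted before it will accept writes:

    # On the replica: promote it out of standby so it accepts writes
    sudo -u postgres psql -c "SELECT pg_promote();"
    # On the Docker host, after editing production.env and docker-compose.yml
    cd /var/lib/zercurity
    docker-compose down && docker-compose up -d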

Migration using Bucardo

Bucardo is an asynchronous PostgreSQL replication system, allowing for multi-source, multi-target operations. This method supports primary-primary write replication, allowing for a zero-downtime migration. Bucardo is already installed within the Zercurity container. You just need to ensure it’s installed on your server too.
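On Debian/Ubuntu, for instance:

    sudo apt-get install bucardo
    # Bucardo keeps its own state in a small PostgreSQL database;
    # initialise it before adding databases and syncs
    bucardo install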

From the secondary server you need to copy across the database structure of the primary database. This can be done simply by running Zercurity on the secondary host and letting the migrations container initialize the database. Otherwise, pg_dump can be used to copy over the table schema.
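For example, with the hosts, user and database name as placeholders:

    # Copy just the table structure (no data) to the new server
    pg_dump -h primary.example.com -U postgres --schema-only zercurity \
      | psql -h new-db.example.com -U postgres -d zercurity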

Once the table structure has been copied over we can now configure Bucardo. This is being done in a primary-primary configuration.

Firstly, let bucardo know about the existing database server, primary_zercurity (Zercurity’s docker instance), and your new one, your_server.
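Something like the following, where the hosts, credentials and database name are placeholders:

    bucardo add db primary_zercurity dbname=zercurity \
      host=primary.example.com user=zercurity pass=super-secret
    bucardo add db your_server dbname=zercurity \
      host=new-db.example.com user=zercurity pass=super-secret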

Next we’re going to let bucardo know we want to synchronize the following:
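A sketch that adds every table and sequence to one relation group (named zercurity_herd here; older Bucardo releases call this a herd):

    bucardo add all tables db=primary_zercurity relgroup=zercurity_herd
    bucardo add all sequences db=primary_zercurity relgroup=zercurity_herd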

Then we’re going to instantiate the configuration in a primary-primary arrangement, marking both database servers as “sources”.
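For example, grouping the two databases and marking both as writable sources (the group name is a placeholder):

    bucardo add dbgroup zercurity_group primary_zercurity:source your_server:source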

Lastly, create the sync task and start bucardo.
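Roughly as follows, reusing the relation group and database group names from above:

    bucardo add sync zercurity_sync relgroup=zercurity_herd dbs=zercurity_group
    bucardo start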

The status can then be monitored with the command:
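For example:

    bucardo status
    # or, for a single sync
    bucardo status zercurity_sync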

In the event that the servers are under heavy load, some tables that are heavily written to, such as the processes table, can end up with id conflicts, as both databases are actively being written to.

To mitigate this and provide a little more breathing room for these tables, you can alter the sequence id within PostgreSQL to be incremented by 2 instead of 1.
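A sketch for the processes table; the sequence name assumes PostgreSQL’s default <table>_<column>_seq naming, and you’d typically also offset the current value on one server so the two sides generate interleaved ids:

    -- Run on both servers for heavily written tables
    ALTER SEQUENCE processes_id_seq INCREMENT BY 2;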

It’s all over!

We hope you found this helpful. Please feel free to get in touch if you have any questions.

Real-time security and compliance delivered.