
9.2. File system level backup

An alternative backup strategy is to directly copy the files that PostgreSQL uses to store the data in the database. Section 3.2 explains where these files are located, but you have probably found them already if you are interested in this method. You can use whatever method you prefer for making ordinary file system backups, for example

tar -cf backup.tar /usr/local/pgsql/data
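
To restore, the archive is simply unpacked back into place while the server is stopped. A minimal sketch, assuming GNU tar (which by default strips the leading / when the archive is created, so extracting relative to / puts the files back under /usr/local/pgsql/data) and that any old contents of the data directory have been moved out of the way first:

tar -xf backup.tar -C /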

There are two restrictions, however, which make this method impractical, or at least inferior to the pg_dump method:

  1. The database server must be shut down in order to get a usable backup. Half-way measures such as disallowing all connections will not work, because there is always some buffering going on. For this reason it is also not advisable to trust file systems that claim to support "consistent snapshots". Information about stopping the server can be found in Section 3.6. (A short command sequence illustrating this is sketched after this list.)

     Needless to say, you also need to shut down the server before restoring the data.

  2. If you have dug into the details of the file system layout, you may be tempted to try to back up or restore only certain individual tables or databases from their respective files or directories. This will not work, because these files contain only half the truth. The other half is in the commit log files pg_clog/*, which contain the commit status of all transactions. A table file is only usable together with this information. Of course it is also impossible to restore only a table and the associated pg_clog data, because that would render all other tables in the database cluster useless.
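
As a concrete illustration of the first restriction, here is a minimal sketch of a complete backup run. It assumes the default installation layout used above and that the server is managed with pg_ctl; adjust the paths and start/stop commands to your own setup:

pg_ctl stop -D /usr/local/pgsql/data
tar -cf backup.tar /usr/local/pgsql/data
pg_ctl start -D /usr/local/pgsql/data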

Also note that the file system backup will not necessarily be smaller than an SQL dump. On the contrary, it will most likely be larger. (For example, pg_dump does not need to dump the contents of indexes, just the commands to recreate them.)
