OmniPITR - omnipitr-backup-slave
USAGE
/some/path/omnipitr/bin/omnipitr-backup-slave [options]
Options:
- --data-dir (-D)
-
Where PostgreSQL datadir is located (path)
- --database (-d)
-
Which database to connect to when issuing the required SQL queries. Defaults to template1.
This is used only when --call-master is given.
- --host (-h)
-
Which host to connect to when connecting to the database to be backed up. Shouldn't really be changed in 99% of cases. Defaults to an empty string - i.e. use UNIX sockets.
This is used only when --call-master is given.
- --port (-P)
-
Which port to connect to when connecting to database. Defaults to 5432.
This is used only when --call-master is given.
- --username (-U)
-
What username to use when connecting to database. Defaults to postgres.
This is used only when --call-master is given.
- --source (-s)
-
Where the WAL files delivered by omnipitr-archive (and used by omnipitr-restore) are located.
You can also specify compression for source. Check COMPRESSION section of the doc.
- --dst-local (-dl)
-
Where to copy the hot backup files on current server (you can provide many of these).
You can also specify compression per-destination. Check COMPRESSION section of the doc.
- --dst-remote (-dr)
-
Where to copy the hot backup files on remote server. Supported ways to transport files are rsync and rsync over ssh. Please see DESCRIPTION for more information (you can provide many of these)
You can also specify compression per-destination. Check COMPRESSION section of the doc.
- --dst-direct (-dd)
-
Specifies remote location to be used to store backup.
Please check DIRECT-DESTINATION part for more details.
- --dst-pipe (-dp)
-
Specifies path to a program that should receive the whole backup on its stdin.
The program will be called multiple times (once for each file). The name of the file will be given as the first, and only, argument to the program.
If the file is to be compressed (using the standard compression=/path syntax), the given file name will contain the compression extension.
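As an illustration, a minimal sketch of such a program (the /backups directory and the script's behaviour are assumptions, not part of omnipitr):
#!/usr/bin/env bash
# Hypothetical --dst-pipe receiver: each (possibly compressed) backup file
# arrives on stdin, and its intended file name is passed as the only argument.
cat - > "/backups/$1"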
- --temp-dir (-t)
-
Where to create temporary files (defaults to /tmp or $TMPDIR environment variable location)
- --log (-l)
-
Name of the logfile (actually a template, as it supports % strftime(3) markers). Unfortunately, due to the %x usage by PostgreSQL, we cannot use % macros directly. Instead, any occurrence of the ^ character in the log filename will first be changed to %, and then passed to strftime.
Please note that on some systems (Solaris for example) the default shell treats ^ as a special character, which requires you to quote the log filename (if it contains the ^ character). So it is better to write it as:
--log '/var/log/omnipitr-^Y-^m-^d.log'
- --filename-template (-f)
-
Template for naming output files. Check FILENAMES section for details.
- --pid-file
-
Name of the file to use as a pidfile. If it is specified, then only one copy of omnipitr-backup-slave (with this pidfile) can run at the same time.
Trying to run a second copy of omnipitr-backup-slave will result in an error.
- --removal-pause-trigger (-p)
-
Name of the file to be created to pause removal of old/obsolete segments. Should be the same as the configured removal-pause-trigger for omnipitr-restore.
This can be skipped if the source of xlogs (-s) that omnipitr-backup-slave is using is not used by omnipitr-restore running with the -r option.
I.e. when you don't pass the -r option to omnipitr-restore, or you're using a separate walarchive for your backups, you can skip --removal-pause-trigger.
- --call-master (-cm)
-
If this option is given, omnipitr-backup-slave will issue
SELECT pg_start_backup( '...' );
and
SELECT pg_stop_backup()
on the master database.
On PostgreSQL 9.0+, backups made on a slave without --call-master can fail when they are later used.
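For example, a hypothetical invocation (host name and paths are placeholders) could look like:
/some/path/omnipitr/bin/omnipitr-backup-slave -D /mnt/data -s /mnt/walarchive -dl /mnt/backups/ -l /var/log/omnipitr/backup.log --call-master -h master.example.com -P 5432 -U postgres -d template1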
- --parallel-jobs (-PJ)
-
Number of parallel jobs that omnipitr-backup-slave can spawn to deliver archives to remote destinations, or to decompress wal segments from a compressed wal archive.
- --verbose (-v)
-
Log verbosely what is happening.
- --not-nice (-nn)
-
Do not use nice for compressions.
- --digest (-dg)
-
Digest method to use (e.g. MD5 or SHA-1) for checksumming. Can be a comma-separated list to use multiple digest algorithms.
For details please check CHECKSUMMING below.
- --skip-xlogs (-sx)
-
Makes omnipitr-backup-slave skip creation of the xlog tarball - only the data tarball will be created. This is useful if you have a proper walarchive set up.
- --gzip-path (-gp)
-
Full path to gzip program - in case you can't set proper PATH environment variable.
- --bzip2-path (-bp)
-
Full path to bzip2 program - in case you can't set proper PATH environment variable.
- --lzma-path (-lp)
-
Full path to lzma program - in case you can't set proper PATH environment variable.
- --lz4-path (-ll)
-
Full path to lz4 program - in case you can't set proper PATH environment variable.
- --xz-path (-xz)
-
Full path to xz program - in case you can't set proper PATH environment variable.
- --nice-path (-np)
-
Full path to nice program - in case you can't set proper PATH environment variable.
- --tar-path (-tp)
-
Full path to tar program - in case you can't set proper PATH environment variable.
- --tee-path (-ep)
-
Full path to tee program - in case you can't set proper PATH environment variable.
- --psql-path (-sp)
-
Full path to psql program - in case you can't set proper PATH environment variable.
- --rsync-path (-rp)
-
Full path to rsync program - in case you can't set proper PATH environment variable.
- --pgcontroldata-path (-pp)
-
Full path to pg_controldata program - in case you can't set proper PATH environment variable.
- --ssh-path (-ssh)
-
Full path to ssh program - in case you can't set proper PATH environment variable.
- --shell-path (-sh)
-
Full path to shell to be used when calling compression/archiving/checksumming.
It is important because the shell needs to support >( ... ) constructions.
One of the shells that does support it is bash, which is the default value for --shell-path. You can substitute a different shell if you're sure it supports the mentioned construction.
- --remote-cat-path (-rcp)
-
Path to the cat program, to be run on the remote side when using direct destinations.
Defaults to "cat"
It will be used as a command on the remote machine, as:
ssh host 'cat - > /some/path/filename-from-template'
If remote-cat-path is given with ! as the first character, the invocation on the remote machine changes to pass the filename as an argument. For example:
-rcp '!/usr/local/bin/store-backup'
will run:
ssh host '/usr/local/bin/store-backup /some/path/filename-from-template'
- --version (-V)
-
Prints the version of omnipitr-backup-slave, and exits.
- --help (-?)
-
Prints this manual, and exits.
- --config-file (--config / --cfg)
-
Loads options from config file.
Format of the file is very simple - each line is treated as an argument with an optional value.
Examples:
--verbose
--host 127.0.0.1
-h=127.0.0.1
--host=127.0.0.1
Note that you don't need to quote the values - a value always extends to the end of the line (trailing spaces are removed). So if you wanted, for example, to have magic-option set to "/mnt/badly named directory", you'd need to quote it when setting it from the command line:
/some/omnipitr/program --magic-option="/mnt/badly named directory"
but not in config:
--magic-option=/mnt/badly named directory
Empty lines, and comment lines (starting with #) are ignored.
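As an illustration, a hypothetical config file (paths are placeholders) could look like:
# omnipitr-backup-slave settings for this host
--data-dir=/mnt/data
--source gzip=/mnt/walarchive
--dst-local=/mnt/backups/
--log=/var/log/omnipitr/backup-^Y-^m-^d.log
--verbose
and then be loaded with: --config-file=/etc/omnipitr/backup-slave.conf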
DESCRIPTION
Running this program should be done by cronjob, or manually by database administrator.
As a result of running it there are 2 files, usually named HOST-data-YYYY-MM-DD.tar and HOST-xlog-YYYY-MM-DD.tar. These files can optionally be compressed and delivered to many places - either local (on the same server) or remote (via rsync).
Which options should be given depends only on installation, but generally you will need at least:
--data-dir
Backup will process files in this directory.
--log
to make sure that information about backup progress is logged somewhere
one of --dst-local or --dst-remote
to specify where to send the backup files to
Of course you can provide many --dst-local or many --dst-remote or many mix of these.
Generally omnipitr-backup-slave will try to deliver the backup files to all destinations. If a remote destination fails, omnipitr-backup-slave will retry 3 times, with a 5 minute delay between tries.
In case of errors when writing to a local destination, it is skipped and the error is logged.
Backups will be transferred to destinations in this order:
- 1. All local destinations, in order provided in command line
- 2. All remote destinations, in order provided in command line
Remote destination specification
omnipitr-backup-slave delivers backup files to remote destinations using the rsync program. Both direct rsync and rsync-over-ssh are supported (it's better to use direct rsync - it uses fewer resources due to the lack of encryption).
Destination url/location should be in a format that is usable by the rsync program.
For example you can use:
rsync://user@remote_host/module/path/
host:/path/
To allow remote delivery you need to have the rsync program. If you're using rsync over ssh, the ssh program also has to be available.
If your rsync/ssh programs are in custom directories, simply set the $PATH environment variable before starting PostgreSQL.
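For example, a hypothetical invocation (host names and paths are placeholders) delivering to both a direct-rsync and an rsync-over-ssh destination could look like:
/some/path/omnipitr/bin/omnipitr-backup-slave -D /mnt/data -s /mnt/walarchive -l /var/log/omnipitr/backup.log -dr rsync://backup@backuphost/backups/ -dr backuphost:/mnt/backups/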
DIRECT-DESTINATION
In some cases the overhead of creating a local tarball with the backup can be too much of a burden for the system.
To make the backup generation as light on resources as possible, direct destinations have been added.
When using them, omnipitr-backup-slave doesn't create any local tarballs.
Instead, output from tar (after optional compression) is sent directly to remote machine over SSH connection.
Example data flow:
tar cf - . | gzip -c - | ssh user@host 'cat - > /some/file'
If you'd like to use compression, but do it on the remote machine, you can provide a --remote-cat-path starting with !, and point it to a script that does the compression and stores the result in the proper file.
For example, calling omnipitr-backup-slave with the following options:
-dd user@host:/var/backups -rcp '!/usr/local/bin/gzip-and-store'
Will run (a bit simplified example):
tar cf - . | ssh user@host '/usr/local/bin/gzip-and-store /var/backups/filename-from-template'
And then you can write this simplistic script as /usr/local/bin/gzip-and-store:
#!/usr/bin/env bash
gzip -c - > "$1".gz
This will make the tarball on the database server (it has to be made there, since that's where the files are), transfer it over ssh to the remote machine, compress it there, and store it in /var/backups/.
Of course --dst-direct can be compressed locally, using the same syntax as --dst-local or --dst-remote, i.e. by using the "xxx=" prefix, which is described in more detail in the COMPRESSION section - see the example below.
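For example (user, host and path are placeholders):
-dd gzip=user@host:/var/backups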
COMPRESSION
Every destination (and the source of xlogs) can have compression specified. To use it, prefix the destination path/url with the compression type followed by an '=' sign.
Allowed compression types:
gzip
Compresses with gzip program, used file extension is .gz
bzip2
Compresses with bzip2 program, used file extension is .bz2
lzma
Compresses with lzma program, used file extension is .lzma
lz4
Compresses with lz4 program, used file extension is .lz4
xz
Compresses with xz program, used file extension is .xz
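For example, hypothetical destinations (paths and URLs are placeholders) with per-destination compression could look like:
--source gzip=/mnt/walarchive
--dst-local bzip2=/mnt/backups/
--dst-remote gzip=rsync://slave/postgres/backups/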
If you want to pass any extra arguments to the compression program, you can either:
make a wrapper
Write a program/script that is named the same way as your actual compression program, but adds some parameters to the call (see the example script after this list).
use environment variables
All of the supported compression programs use environment variables:
gzip - GZIP
bzip2 - BZIP2
lzma - XZ_OPT
For details, please consult the manual of your chosen compression tool.
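As an illustration of the wrapper approach, a minimal sketch (the path and the extra option are assumptions):
#!/usr/bin/env bash
# Hypothetical gzip wrapper that forces maximum compression; point --gzip-path
# at this script (or place it earlier in $PATH under the name "gzip").
exec /bin/gzip -9 "$@"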
It is strongly suggested to use only one compression method for all destinations.
CHECKSUMMING
OmniPITR can (since version 0.2.0) calculate checksums of created files.
To calculate the checksums, OmniPITR uses the Digest Perl module (part of the standard Perl distribution).
The Digest module currently supports 5 different types of checksums:
MD5 - standard md5 algorithm
SHA-1 - SHA-1 algorithm
SHA-256 - SHA-2 algorithm with hash size of 256 bits
SHA-384 - SHA-2 algorithm with hash size of 384 bits
SHA-512 - SHA-2 algorithm with hash size of 512 bits
If you choose to use checksums, then for every type of checksum (you can specify --digest=MD5,SHA-512) there will be one additional file created, named just like the data and xlog tarballs, but with the __FILETYPE__ part of the filename (details in FILENAMES) changed to the digest name.
So, with filename template being __FILETYPE__.tar__CEXT__, gzip compression and MD5 checksumming, you will get 3 files:
data.tar.gz
xlog.tar.gz
MD5.tar.gz
It is important to understand that the checksum file is plain text, and the parts of its name that suggest tar.gz are just "leftovers" from the filename template.
After creation, such checksum file can be verified with:
md5sum -c MD5.tar.gz
FILENAMES
Naming of files for backups might be important depending on deployment.
Generally, generated files are named using templates, with the default template being:
__HOSTNAME__-__FILETYPE__-^Y-^m-^d.tar__CEXT__
Within template (specified with --filename-template option) you can use following markers:
__HOSTNAME__
Name of the server the backup is made on - as reported by the hostname(1) program.
__FILETYPE__
__FILETYPE__ is actually required - it specifies whether the file contains data (data) or xlog segments (xlog).
__CEXT__
Based on the compression algorithm chosen for the given delivery. Can be empty (no compression), or contain a dot (.) and the literal extension associated with the chosen compression program.
any ^? markers
Like in a strftime(3) call, but ^ will first be changed to %.
The filename template is evaluated at start, so any timestamps (^? markers) will relate to the date/time of the beginning of the backup process.
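For example, with the default template, gzip compression, and a hypothetical host named db1, a backup started on 2013-01-15 would produce files named:
db1-data-2013-01-15.tar.gz
db1-xlog-2013-01-15.tar.gz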
TABLESPACES
If omnipitr-backup-slave detects additional tablespaces, they will also be included in the generated tarball.
Since the full path to the tablespace directory is important and should be preserved, but tar normally doesn't let you store files whose path starts with "/" (as it would be dangerous), omnipitr-backup-slave uses the following approach:
all tablespaces will be stored in the tar archive, and upon extraction they will be put in the directory "tablespaces", with the full path to each tablespace directory recreated under it.
For example:
Assuming PostgreSQL PGDATA is in /var/lib/pgsql/data, and it has 3 extra tablespaces placed in:
/mnt/san/tablespace
/home/whatever/xxx
/media/ssd
generated DATA tarball will contain 2 directories:
data - copy of /var/lib/pgsql/data
tablespaces - which contains full directory structure leading to:
tablespaces/mnt/san/tablespace - copy of /mnt/san/tablespace
tablespaces/home/whatever/xxx - copy of /home/whatever/xxx
tablespaces/media/ssd - copy of /media/ssd
Thanks to this approach, if you create a symlink "tablespaces" pointing to the root directory (ln -s / tablespaces) before extracting the tarball, all tablespace files will be created directly in the correct places, as shown in the sketch below. This is of course not necessary, but will help if you ever need to recover from such a backup.
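A minimal sketch of that recovery procedure (paths and the tarball name are placeholders):
cd /var/lib/pgsql
ln -s / tablespaces
tar xzf /mnt/backups/db1-data-2013-01-15.tar.gz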
EXAMPLES
Minimal setup, with copying file to local directory:
/.../omnipitr-backup-slave -D /mnt/data -l /var/log/omnipitr/backup.log -dl /mnt/backups/ -s /mnt/walarchive
Minimal setup, with compression, and copying file to remote directory over rsync:
/.../omnipitr-backup-slave -D /mnt/data/ -l /var/log/omnipitr/backup.log -s /mnt/walarchive -dr bzip2=rsync://slave/postgres/backups/
2 remote compressed destinations, 1 local destination, with auto-rotated logfile, modified filenames, and a compressed wal archive:
/.../omnipitr-backup-slave -D /mnt/data/ -s gzip=/mnt/walarchive -l /var/log/omnipitr/backup-^Y-^m-^d.log -dr bzip2=rsync://slave/postgres/backups/ -dr gzip=backups:/mnt/hotbackups/ -dl /mnt/backups/ -f "main-__FILETYPE__-^Y^m^d_^H^M^S.tar__CEXT__"
IMPORTANT NOTICES
If you're using a compressed source dir (wal archive), omnipitr-backup-slave has to uncompress all of the xlogs before putting them in the .tar.XX archive. This means that you should have enough free disk space for this purpose in the place where temporary files are created (see --temp-dir). Alternatively, do not use compression for sending WAL segments to the standby server - in that case decompression will not be necessary.
omnipitr-backup-slave compresses the whole source directory - i.e. all files from there. This means that if you're using delayed recovery (the -w option to omnipitr-restore), there will be more files there, and the backup will be larger. This is especially important if you're using a compressed wal archive (please see the note above).
If you're using omnipitr-backup-slave on PostgreSQL 9.0+, you have to use --call-master. Otherwise the created backup might fail when used as a base for another replication slave that is later promoted to standalone.
COPYRIGHT
The OmniPITR project is Copyright (c) 2009-2013 OmniTI. All rights reserved.