Contents
- make_data_archive()
- finish_pgdata_backup()
- make_xlog_archive()
- wait_for_xlog_archive_to_be_ready()
- compress_xlogs()
- _find_interesting_xlogs()
- uncompress_wal_archive_segments()
- make_dot_backup_file()
- wait_for_checkpoint_location_change()
- make_backup_label_temp_file()
- get_backup_label_from_master()
- wait_for_checkpoint_from_backup_label()
- convert_wal_location_and_timeline_to_filename()
- compress_pgdata()
- pause_xlog_removal()
- unpause_xlog_removal()
- DESTROY()
- read_args_specification
- read_args_normalization
- validate_args()
make_data_archive()
Wraps all work necessary to make local .tar files (optionally compressed) with the content of PGDATA.
finish_pgdata_backup()
Calls pg_stop_backup() on the master (if necessary), and waits for the xlogs to be ready.
make_xlog_archive()
Wraps all work necessary to make local .tar files (optionally compressed) with the xlogs required to start PostgreSQL from the backup.
wait_for_xlog_archive_to_be_ready()
Waits until all necessary xlogs are in the archive, or (in case --call-master was not given) for a checkpoint on the slave.
compress_xlogs()
Wrapper function which encapsulates all work required to compress the xlog segments that accumulated during the backup of the data directory.
_find_interesting_xlogs()
Internal function that scans the source path and returns an arrayref of filenames (without paths) that are xlogs within the interesting wal_range.
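The exact implementation is internal to omnipitr-backup-slave; a minimal sketch of the idea, assuming hypothetical variables $source_dir, $first_needed and $last_needed (the first and last segment names of the range), could look like this:

    opendir( my $dir_handle, $source_dir ) or die "Cannot open $source_dir: $!\n";
    my @interesting = sort                                      # segment names sort chronologically within a timeline
        grep { /\A[0-9A-F]{24}\z/ }                             # keep plain xlog segment names only
        grep { ( $_ ge $first_needed ) && ( $_ le $last_needed ) }
        readdir $dir_handle;
    closedir $dir_handle;
    # \@interesting is the arrayref of xlog filenames (without paths) within the range.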
uncompress_wal_archive_segments()
If the WAL archive (--source option) is compressed, omnipitr-backup-slave needs to uncompress the files to a temporary directory before making the archive, so that the resulting archive is easier to use.
This function does that work.
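A minimal sketch of that step, assuming gzip-compressed segments and hypothetical variable names ($source_dir, $temp_dir, @compressed_segments); the real function supports other compression programs and does more thorough error handling:

    for my $segment ( @compressed_segments ) {                  # e.g. '00000001000000000000001A.gz'
        ( my $plain_name = $segment ) =~ s/\.gz\z//;
        system( "gunzip -c '$source_dir/$segment' > '$temp_dir/$plain_name'" ) == 0
            or die "Cannot uncompress $segment\n";
    }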
make_dot_backup_file()
Makes a SEGMENT.OFFSET.backup file that will be included in the xlog archive.
This file contains vital information, like the start and end position of WAL replay, that is required to get a consistent state.
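Such files follow the standard PostgreSQL backup history file format; an illustrative example (all values, including the label, are made up):

    START WAL LOCATION: 0/2000028 (file 000000010000000000000002)
    STOP WAL LOCATION: 0/20000F8 (file 000000010000000000000002)
    CHECKPOINT LOCATION: 0/2000060
    START TIME: 2012-01-01 12:00:00 CET
    LABEL: OmniPITR_Slave_Hot_Backup
    STOP TIME: 2012-01-01 12:05:00 CET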
wait_for_checkpoint_location_change()
As the name suggests, this function periodically (every 5 seconds; hardcoded, as there is little point in making it configurable) checks pg_controldata of PGDATA, and finishes once the value of 'Latest checkpoint location' changes.
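A minimal sketch of that loop, assuming pg_controldata is in the PATH and $pgdata (hypothetical name) holds the data directory path:

    my $get_location = sub {
        my ( $location ) = `pg_controldata $pgdata` =~ m{^Latest checkpoint location:\s+(\S+)}m;
        return $location;
    };
    my $initial = $get_location->();
    while ( 1 ) {
        sleep 5;                                                # hardcoded 5 second interval
        my $current = $get_location->();
        last if defined $current && $current ne $initial;
    }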
make_backup_label_temp_file()
A normal hot backup contains a file named 'backup_label' in the PGDATA archive.
Since this is not a normal hot backup, PostgreSQL will not create this file, so it has to be created separately by omnipitr-backup-slave.
This file is created in a temporary directory (not in PGDATA), and is included in the tar in such a way that, on extraction, it ends up in the unarchived PGDATA.
If --call-master was given, this function runs pg_start_backup() on the master and retrieves the generated backup_label file.
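For illustration, a backup_label file looks roughly like this (values made up); the CHECKPOINT LOCATION line is what wait_for_checkpoint_from_backup_label(), described below, relies on:

    START WAL LOCATION: 0/2000028 (file 000000010000000000000002)
    CHECKPOINT LOCATION: 0/2000060
    START TIME: 2012-01-01 12:00:00 CET
    LABEL: OmniPITR_Slave_Hot_Backup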
get_backup_label_from_master()
Wraps the logic required to call pg_start_backup(), get the response, and fetch the backup_label file content.
wait_for_checkpoint_from_backup_label()
Waits until the slave has performed a checkpoint in at least the same location as the master did when pg_start_backup() was called.
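WAL locations such as '0/2000060' can be compared numerically by splitting them on '/' and treating both halves as hexadecimal numbers; a minimal sketch with a hypothetical helper (not part of the actual OmniPITR API):

    sub wal_location_cmp {
        my ( $left, $right ) = @_;                              # e.g. ( '0/2000060', '0/2000028' )
        my ( $left_hi,  $left_lo )  = map { hex } split m{/}, $left;
        my ( $right_hi, $right_lo ) = map { hex } split m{/}, $right;
        return $left_hi <=> $right_hi || $left_lo <=> $right_lo;
    }
    # The slave has caught up once wal_location_cmp( $slave_checkpoint, $master_checkpoint ) >= 0.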
convert_wal_location_and_timeline_to_filename()
Helper function which converts a WAL location and timeline number into the name of the WAL file that the given location will be in.
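For example, with the default 16 MB segment size, location '0/1A2B3C4D' on timeline 1 falls into segment 00000001000000000000001A. A minimal sketch of such a conversion (assuming the default segment size; not necessarily the exact OmniPITR code):

    sub convert_wal_location_and_timeline_to_filename {
        my ( $location, $timeline ) = @_;                       # e.g. ( '0/1A2B3C4D', 1 )
        my ( $high, $low ) = map { hex } split m{/}, $location;
        my $segment = $low >> 24;                               # 16 MB (0x01000000) per segment
        return sprintf '%08X%08X%08X', $timeline, $high, $segment;
    }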
compress_pgdata()
Wrapper function which encapsulates all work required to compress the data directory.
pause_xlog_removal()
Creates a trigger file that pauses removal of old segments by omnipitr-restore.
unpause_xlog_removal()
Removes the trigger file, effectively unpausing removal of old, obsolete log segments by omnipitr-restore.
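Both operations boil down to creating and then removing a plain file at the trigger path that omnipitr-restore watches; a minimal sketch (the variable name is hypothetical):

    # pause: create the trigger file
    open my $trigger_fh, '>', $removal_pause_trigger or die "Cannot create trigger file: $!\n";
    close $trigger_fh;

    # ... the data directory is archived while omnipitr-restore keeps obsolete segments ...

    # unpause: remove the trigger file
    unlink $removal_pause_trigger or die "Cannot remove trigger file: $!\n";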
DESTROY()
Destructor for the object; removes the pause trigger file it created.
read_args_specification
Defines which options are legal for this program.
read_args_normalization
Function called back from OmniPITR::Program::read_args(), with the parsed args as a hashref.
It is responsible for putting arguments in the correct places, initializing logs, and so on.
validate_args()
Does all necessary validation of the given command line arguments.
One exception is compression program paths: technically, they could be validated here, but the benefit would be pretty limited, and the code to do so relatively complex, since the compression program path may, but does not have to, be an actual file path; it can also be just a program name (without a path), which is the default.
POD ERRORS
Hey! The above document had some coding errors, which are explained below:
- Around line 225:
-
Unknown directive: =head