To activate auditing for an entire database schema, simply run the init
function:
```sql
SELECT pgmemento.init(
  schemaname := 'data',                     -- default is 'public'
  audit_id_column_name := 'audit_trail_id', -- default is 'pgmemento_audit_id'
  log_old_data := TRUE,                     -- default is TRUE
  log_new_data := TRUE,                     -- default is FALSE
  log_state := TRUE,                        -- default is FALSE
  trigger_create_table := TRUE,             -- default is FALSE
  except_tables := ARRAY['table_xyz']       -- default is an empty array
);
```
After initialization, the schema is registered in the audit_schema_log
table. You can also use the interactive INIT.sql
script from a shell environment with the psql client.
```
psql -h localhost -p 5432 -U my_user -d my_database -f INIT.sql
```
You will face the following prompts:

1. Specify the target database schema you want to log.
2. Choose whether to log new values on changes as JSON (y for yes, n for no; default is no).
3. Decide whether to log already existing data as inserted (again y or n; default is no). This is useful to have a proper baseline for the audit trail.
4. Choose whether newly created tables shall be enabled for auditing automatically (again y or n; default is no).
5. Define a set of tables you want to exclude from auditing (comma-separated list).
Note: The interactive script does not let you choose the audit_id_column_name
or turn off logging of old values, because as of version v0.7 it is strongly recommended to stick with the default behavior. For example, the RESTORE API only works on behalf of the old_data
column in the row_log table. Reusing existing ID columns for the audit trail is not yet supported.
Altering the logging behavior
If you have already initialized auditing but want to enable logging of new data, or you decide that a different name for the tracer column than pgmemento_audit_id
would be better, you can call the pgmemento.reinit
endpoint. It takes the same arguments as init
and will simply drop and recreate auditing for the given schema without logging data, while updating the audit_schema_log
table. Note that this process can take a few seconds.
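Since reinit mirrors the init signature, a call enabling new-data logging could look like the following sketch (the exact set of named arguments is assumed to match init as shown above):

```sql
-- Re-initialize auditing for the 'data' schema, this time also
-- logging new values; auditing is dropped and recreated, but no
-- table data is logged during the process
SELECT pgmemento.reinit(
  schemaname := 'data',
  audit_id_column_name := 'audit_trail_id',
  log_old_data := TRUE,
  log_new_data := TRUE,               -- now enabled
  trigger_create_table := TRUE,
  except_tables := ARRAY['table_xyz']
);
```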
Start auditing for single tables
Auditing can also be started manually for single tables - e.g. in case they were excluded on init - using the following function, which adds an additional tracer column to the table and creates triggers that are fired during DML changes.
```sql
SELECT pgmemento.create_table_audit(
  tablename := 'table_A',
  schemaname := 'public',
  audit_id_column_name := 'audit_trail_id',
  log_old_data := TRUE,
  log_new_data := TRUE,
  log_state := TRUE
);
```
With the last argument you define whether existing data is logged or not. For each row in the audited tables, another row will be written to the row_log
table telling the system that it has been 'inserted' at the timestamp the procedure was executed. Depending on the amount of data that has to be marked as INSERTed, this process can take a while. By passing FALSE for the last argument, nothing is logged in the first place. If you change your mind later, you can still call pgmemento.log_table_baseline
(or pgmemento.log_schema_baseline
). But for a clean baseline you should do it before the first queries hit the table, because rows with existing logs will be excluded from the baseline.
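A later baseline call might look like the following sketch; the parameter order here is an assumption, mirroring the arguments of create_table_audit above:

```sql
-- Log all existing rows of public.table_A as 'inserted',
-- using the audit ID column configured for this table
SELECT pgmemento.log_table_baseline(
  'table_A',         -- table name
  'public',          -- schema name
  'audit_trail_id'   -- audit ID column
);
```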
Hint: When setting up a new database, I would recommend starting pgMemento after bulk imports. Otherwise the import will be slower and several different timestamps might appear in the transaction_log
table.
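Following this hint, a possible order when seeding a fresh database could be (the COPY target and file path are placeholders):

```sql
-- 1. Bulk-load data first, before any audit triggers exist
COPY data.table_xyz FROM '/path/to/import.csv' WITH (FORMAT csv, HEADER);

-- 2. Then activate auditing and log the imported rows as one baseline,
--    so they share a single entry point in the transaction_log table
SELECT pgmemento.init(
  schemaname := 'data',
  log_state := TRUE
);
```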
Create pgMemento's event triggers
If the init
function or script has not been used, event triggers can be created by calling the following procedure:
```sql
SELECT pgmemento.create_schema_event_trigger(TRUE);
```
With TRUE
, an additional event trigger for CREATE TABLE
, CREATE TABLE AS
and SELECT INTO
events is created. The logging behavior and the name of the audit_id
column for newly created tables are picked from the audit_schema_log
table.