# pg_kaboom 0.0.1

> Blow things up in interesting and useful^W^W ways

*Where's the kaboom?! There's supposed to be an Earth-shattering kaboom!*
This extension serves to crash PostgreSQL in multiple varied and destructive ways.

## But why?

Failover testing can be hard to drive from SQL alone; some failure scenarios are convenient to expose via SQL functions. This is one of those things.
## Is this safe?

Hell, no. Under no circumstances should you use this extension on a production cluster; this is purely for testing things out in a development environment.

We require you to set the GUC variable `pg_kaboom.disclaimer` to a magic value before any of these functions will do anything. That said, there are often times when simulating different breakage scenarios is useful. In no way are we liable for anything you do with this software. It is provided without warranty and with a complete disclaimer of liability.

**This is your final warning! You will lose data!**
## Installation

```console
$ git clone git@github.com:CrunchyData/pg_kaboom.git
$ cd pg_kaboom
$ make PG_CONFIG=path/to/pg_config && make install PG_CONFIG=path/to/pg_config
$ psql -c 'CREATE EXTENSION pg_kaboom' -U <user> -d <database>
```
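After installing, you can confirm the extension is registered before arming anything; this is a standard catalog query, not part of pg_kaboom itself:

```sql
-- Standard catalog query to confirm the extension is installed:
SELECT extname, extversion
  FROM pg_extension
 WHERE extname = 'pg_kaboom';
```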
## Usage

Once this extension is installed in the database you wish to ~~destroy~~ use, you just need to run the function `pg_kaboom(text)` with your chosen weapon of breakage.

That said, we want to make sure that you are really sure you want to perform these destructive operations. You should never install this extension on a production server. You are required to issue the following per-session statements in order to do anything with this extension:
```sql
SET pg_kaboom.disclaimer = 'I can afford to lose this data and server';
SET pg_kaboom.execute = on; -- required for shell-command-based weapons; an additional safety valve. Not all weapons respect this.
SELECT pg_kaboom('segfault');
-- backend segfaults, exeunt
```
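Since most weapons take down the backend or the whole server, a throwaway one-shot session keeps the blast radius obvious. A minimal sketch, assuming a disposable database named `testdb` (a placeholder); whether a given weapon also requires `pg_kaboom.execute` depends on how it is implemented:

```console
$ psql -d testdb <<'SQL'
SET pg_kaboom.disclaimer = 'I can afford to lose this data and server';
SET pg_kaboom.execute = on;
SELECT pg_kaboom('restart');
SQL
```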
## Available Weapons
Currently defined weapons (more to come) are:
- `break-archive` :: install a broken `archive_command` and force a restart
- `fill-log` :: allocate all of the space inside the logs directory
- `fill-pgdata` :: allocate all of the space inside the `$PGDATA` directory
- `fill-pgwal` :: allocate all of the space inside the `$PGDATA/pg_wal` directory
- `mem` :: allocate some memory
- `restart` :: do an immediate restart of the server
- `rm-pgdata` :: do a `rm -Rf $PGDATA`
- `segfault` :: cause a segfault in the running backend process
- `signal` :: send a `SIGKILL` to the Postmaster process
- `xact-wrap` :: force the database to run an xact-wraparound vacuum
You can also use the following "special" weapons:
- `random` :: choose a random weapon
- `null` :: don't do anything, just go through the normal flow
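Per the description above, the `null` weapon goes through the normal flow without doing anything, so it should make a reasonable smoke test of your setup before you pull a real trigger; a minimal sketch:

```sql
-- Set the magic disclaimer, then fire the no-op weapon:
SET pg_kaboom.disclaimer = 'I can afford to lose this data and server';
SELECT pg_kaboom('null');
```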
Contributions welcome! Let's get creative in testing how PostgreSQL can recover from and respond to various sorts of system meddling!
## Author

David Christensen <david.christensen@crunchydata.com>, <david@pgguru.net>