OK, first: on a well-managed system you would never need to do this. That said, I thought my system was well managed, with 250GB free in the backup storage pool rotating around quite happily, when one of my servers suddenly needed over 500GB of backup space; oops, 100% full backup filesystem. A windy day, and the motion-activated webcam was going wild from a tree waving about.
The normal pruning jobs could not resolve the issue: multiple servers were using the backup volumes, so despite extensive pruning no volumes became free to release space. My only option was to start with a fresh Bacula environment.
From previous experience I knew that simply recreating the SQL tables does not create a working environment, as flat files (plain filesystem files, not database rows) are also used, and failing to clean those up leaves things in a non-working state. I also discovered I had not documented how I resolved this many years ago; so, for my future reference, this is what is needed.
It should be an absolute last resort, as all existing backups are lost. Even if you use multiple storage daemons across multiple servers, all backups are lost: re-initialising the database means you must delete all backup volumes, on every distributed storage server, that are associated with the database used by the affected Bacula director.
This post is primarily for my own use; while I hope I will not need it again, I probably will, if only for moving existing servers to new storage pools.
Anyway, out with the sledgehammer; how to start from scratch.
On your bacula director server
systemctl stop bacula-dir
On all storage servers used by that director
systemctl stop bacula-sd
cd /your/storage/pool/dir
rm *
On your bacula director server
# Just drop and recreate the tables. Do not delete the database;
# leaving the database itself in existence means all user/grants
# for the database remain valid.
cd /usr/libexec/bacula
./drop_mysql_tables -u root -p
mysql -u root -p
use bacula;
show tables;
drop table xxx;    # for each table remaining
\q
./make_mysql_tables -u root -p

# Remove everything from /var/spool/bacula. Note I do not delete absolutely
# everything, as I have a custom script in that directory for catalog backups;
# so this deletes everything apart from that script.
# Failure to do this step results in errors, as backup details are stored
# here, which would conflict with an empty database.
cd /var/spool/bacula
/bin/rm *bsr
/bin/rm *state
/bin/rm *conmsg
/bin/rm log*
On all of your storage servers
systemctl start bacula-sd
On your bacula director server… after editing the exclude lists to exclude whatever directory caused the blowout
systemctl start bacula-dir
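The exclude-list edit mentioned above is done in the relevant FileSet resource in bacula-dir.conf. A minimal sketch, assuming a FileSet named "ServerFiles" and a webcam capture directory at /var/webcam/captures (both names are illustrative, not from my actual config):

```
FileSet {
  Name = "ServerFiles"
  Include {
    Options {
      signature = MD5
      compression = GZIP
    }
    File = /
  }
  # Stop backing up the directory that blew out the storage pool.
  Exclude {
    File = /var/webcam/captures
  }
}
```

Remember to reload or restart the director after editing, or the old FileSet stays in effect.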
At this point all is well, and your scheduled backups will be able to run again. The issue, of course, is that the next time they are scheduled to run, all incremental backups will report something like ‘no full backup found, doing a full backup’. While that is exactly what you want, it means your backups will take a lot longer than expected on their next run. Also, if you had been staggering your full backups across multiple days, bad news: assuming they all have the same retention period, they will now all want to run on the same day in future.
Using the ‘bconsole’ interface over a few days to delete/prune and rerun full backups can get the staggering back, but it is a bit of a pain.
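For reference, the bconsole session for one server looks roughly like this (the job name "server1-fd-job" and the jobid are hypothetical; use "list jobs" to find your own):

```
*list jobs job=server1-fd-job
*delete jobid=123
*run job=server1-fd-job level=Full yes
```

Repeat on a different day for each server, deleting the unwanted full and forcing a new one, until the fulls are spread across the week again.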
It is the damn incremental backups that use all the space; the full backups of all my servers only used 22% of the storage pool.
Ideally, given the space limitations, I should revise my monthly full backup strategy to fortnightly, so I can keep two weeks of incrementals rather than four, and hope I never need to restore a file more than two weeks old. However, in the real world, if a personal physical server dies it may take over two weeks to be repaired and slotted back in; so for now I’ll stick with monthly.
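If I ever do switch, the change is a Schedule resource in bacula-dir.conf along these lines. This is a sketch only; the resource name and times are made up, and you would need to check the week-of-month keywords against your Bacula version:

```
Schedule {
  Name = "FortnightlyCycle"
  # Full backup on the 1st and 3rd Sunday of each month.
  Run = Level=Full 1st sun at 23:05
  Run = Level=Full 3rd sun at 23:05
  # Incrementals every other night.
  Run = Level=Incremental mon-sat at 23:05
}
```

Each client's Job resource would then reference this schedule instead of the monthly one, and the Pool volume retention would be shortened to match the two-week incremental window.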