I’ve finally pinned down my backup automation:

All my services are in podman containers/pods managed by systemd.
All services are PartOf= a custom containers.target.
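A minimal sketch of that grouping (unit names are placeholders, not my actual units): the custom target itself, plus the relevant lines from one container service.

```ini
# containers.target – custom target that groups all container services
[Unit]
Description=All podman container services

[Install]
WantedBy=multi-user.target
```

```ini
# Excerpt from one container service, e.g. container-myapp.service (name made up).
# PartOf= propagates stop/restart of the target down to the service;
# WantedBy=containers.target pulls the service back in when the target starts.
[Unit]
PartOf=containers.target

[Install]
WantedBy=containers.target
```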
All data is stored on btrfs subvolumes.
I created a systemd service that Conflicts=containers.target for creating read-only snapshots of the relevant subvolumes.
That service Wants=borgmatic.service, which creates a borg backup of the snapshots on a removable drive. It also starts containers.target on success or failure, since the containers no longer need to be stopped at that point.
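A rough sketch of what such a snapshot unit can look like (unit names and subvolume paths are placeholders):

```ini
# backup-snapshots.service (placeholder name)
[Unit]
Description=Create read-only btrfs snapshots for backup
# Starting this unit stops containers.target, which (via PartOf=) stops the containers
Conflicts=containers.target
After=containers.target
# Queue the borg backup and make it wait until the snapshots exist
Wants=borgmatic.service
Before=borgmatic.service
# Containers may come back up as soon as the snapshots are taken
OnSuccess=containers.target
OnFailure=containers.target

[Service]
Type=oneshot
# One pair of lines per relevant subvolume; paths are examples.
# The leading "-" tells systemd to tolerate failure (e.g. no old snapshot to delete yet).
ExecStart=-/usr/bin/btrfs subvolume delete /srv/.snapshots/appdata
ExecStart=/usr/bin/btrfs subvolume snapshot -r /srv/appdata /srv/.snapshots/appdata
```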
After the borg backup is done, the repository gets rclone-synced to S3-compatible storage.
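One way to express that step as its own oneshot unit (the rclone remote name, bucket, and repository path here are assumptions):

```ini
# borg-sync.service (placeholder name) – pushes the borg repository to S3
[Unit]
Description=Sync borg repository to S3-compatible storage
After=borgmatic.service

[Service]
Type=oneshot
# "s3remote" is an assumed rclone remote; adjust paths and bucket to your setup
ExecStart=/usr/bin/rclone sync /mnt/backup/borg-repo s3remote:backup-bucket/borg-repo
```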
This happens daily, though I might put the sync to S3 on a different schedule, depending on how much bandwidth subsequent syncs consume.
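A daily schedule like this can be driven by a plain systemd timer, for example:

```ini
# backup-snapshots.timer (placeholder, matching the service sketch above)
[Unit]
Description=Daily backup run

[Timer]
OnCalendar=daily
# Catch up after a missed run (e.g. the machine was powered off)
Persistent=true

[Install]
WantedBy=timers.target
```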
What I’m not super happy about is starting containers.target via the systemd unit’s OnSuccess= mechanism, but I couldn’t find an elegant way of stopping the target while the snapshots were being created and then restarting it through the other dependency mechanisms.
I also realize it’s a bit fragile, since subsequent backup steps are started even if previous steps fail. But in the worst case that should just lead to either no data being written (if the mount is missing) or backing up the same data twice (not a problem due to deduplication).