I have a Btrfs root partition with an @ root subvolume and an @home subvolume and I do auto-snapshots during updates and timeshift scheduled snapshots, both of which are saved on the same drive. This is great, but I want to have extra redundancy in case of a drive failure.

In my last setup on Debian, I used the ext4 file system and put my timeshift rsync backups on an external drive.

How can I do something similar, i.e. backup to an external drive, while still taking snapshots on the root device?

In addition to the system device, which is a 1 TB SSD formatted as Btrfs, I have a 2 TB HDD currently formatted with two NTFS partitions, since I dual boot Windows as well. I would be willing to move that drive completely to a Linux file system, but I don't know how I would handle backing up the root drive. I thought about writing a disk image onto the HDD with dd, but if I did that, I would (a) lose an extra TB of storage, if I understand correctly how dd works, and (b) not know how to restore from the image. Ideally, I would like to have a Btrfs partition on the second drive for backups of the root device only, and a second partition (e.g. ext4 or NTFS) just for overflow data storage.

Essentially, my question is: How can I facilitate a backup of my already "snapshotting" root partition (and also know how to restore from it)?


3 Answers


This is the solution I came up with for this kind of requirement: https://github.com/ceremcem/smith-sync

I prepared my own scripts to backup to a target: https://github.com/ceremcem/smith-sync-new-target

Here is the overall backup system I'm using: https://github.com/ceremcem/erik-sync

This process is quite complex and requires deep knowledge. In short, the script does the following:

  • Defer any automatic suspend actions in order not to interrupt the backup process.
  • Send all snapshots to the target.
  • Create a new rootfs within the target by using the latest snapshots.
  • Modify necessary files such as $target/rootfs/etc/fstab, .../etc/crypttab, etc. accordingly.
  • Mount the boot partition and copy the most recent boot files into it.
  • Update GRUB bootloader in a chroot environment.

Now the target disk (the backup disk) is an exact copy of the current system, without using the dd command. You can create additional partitions on the target disk according to your needs; this will not break the backup process.
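To illustrate the fstab step above: after the snapshots are received, the new rootfs on the target must mount the backup disk rather than the original one. A minimal sketch, assuming hypothetical placeholder UUIDs and the common @/@home subvolume layout (the linked scripts may do this differently):

```
# $target/rootfs/etc/fstab on the backup disk -- UUIDs are placeholders,
# they must be the *backup* disk's UUIDs (see blkid), not the original's
UUID=BACKUP-ROOT-UUID  /      btrfs  subvol=@      0  0
UUID=BACKUP-ROOT-UUID  /home  btrfs  subvol=@home  0  0
```

The same substitution applies to /etc/crypttab if the backup disk is encrypted.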

  • Thank you for your explanation! This setup is probably overkill for my needs so I used btrbk, which just sends the snapshots to a second drive. As far as I understand, this is only useful in cases of software errors, right? In case of a hardware error, could I install the snapshots on a new drive? Commented Mar 7, 2023 at 15:47

Moved from edit-to-question:

The solution I came up with uses btrbk, which is a single Perl script that manages snapshots and automatically sends them to another drive. I documented the setup on my GitHub account here.

btrbk is driven by a single config file that specifies snapshot and backup locations as well as retention policies. I schedule the backup process with crontab. It takes a little work for beginners to set up, but it is much more rewarding as a learning experience than simply using snapper or timeshift.
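For reference, a minimal btrbk.conf along these lines; the mount points are assumptions (/mnt/btr_pool as the top-level mount of the system disk, /mnt/backup on the external drive), and the retention values are just examples:

```
# /etc/btrbk/btrbk.conf -- paths and retention are assumptions, adjust to taste
timestamp_format        long
snapshot_preserve_min   2d
snapshot_preserve       14d
target_preserve_min     latest
target_preserve         20d 10w

volume /mnt/btr_pool
  snapshot_dir          btrbk_snapshots
  target                /mnt/backup/btrbk
  subvolume             @
  subvolume             @home
```

A nightly crontab entry such as `0 3 * * * btrbk -q run` (using whatever path your distribution installs btrbk to) then takes the snapshots and sends the incremental backups in one go.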


I have just changed from my previous LVM/rsync setup to Btrfs, using three small partitions for EFI, boot (ext4), and swap (these are quite stable in size), plus a large 1.7 TB btrfs partition for the system.

A second disk of the same size is in the same enclosure for normal backup; this is bootable so that I can continue working with the state of the last backup. To restore files, the backup drive is just mounted and searched with the usual environment.

The Btrfs file system has two subvolumes (plus snapshots), @root and @home, at the top level. A small script mirrors the root and home file systems to the second disk using btrfs send -p ... ... | btrfs receive .... This is normally done after logon, while the system is still quiet.
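Such a mirror script might look like the sketch below. The mount points and snapshot names are assumptions, not the author's actual layout, and by default it only prints the commands (dry run) rather than executing them:

```shell
#!/bin/sh
# Incremental mirror of @root and @home via btrfs send/receive.
# Paths and snapshot names are hypothetical. DRY_RUN=1 (the default)
# only prints each command; run with DRY_RUN=0 as root to execute.
: "${DRY_RUN=1}"
run() {
    if [ "$DRY_RUN" = 1 ]; then printf '%s\n' "$*"; else eval "$*"; fi
}

POOL=/mnt/pool        # top-level (subvolid=5) mount of the primary disk
BACKUP=/mnt/backup    # top-level mount of the backup disk
TODAY=$(date +%Y-%m-%d)

for SUB in @root @home; do
    # btrfs send requires a read-only snapshot
    run "btrfs subvolume snapshot -r $POOL/$SUB $POOL/$SUB.$TODAY"
    # -p sends only the difference to the parent snapshot, which must
    # already exist on both disks from the previous run
    run "btrfs send -p $POOL/$SUB.prev $POOL/$SUB.$TODAY | btrfs receive $BACKUP"
done
```

The dry-run guard is just a safety choice for the sketch; the real script would also need to rotate the `.prev` snapshots after a successful receive.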

After the backup, booting from the second disk is nearly indistinguishable from the normal state; I need to call lsblk -f to see which drive is actually mounted.

This works fine so far, even though, in theory, snapshotting the running system might lead to errors. These would hopefully be repaired by the next backup.

I will keep only a few snapshots on the primary drive, with the corresponding ones on the secondary for restores. On the backup drive I will keep more snapshots of /home (thinned out over time) for file recovery, possibly moved to an archive drive, perhaps with rsync.

To establish this, I first copied in both directions on a virtual machine, then copied system and home via rsync to the new btrfs backup drive, booted from there, reformatted the primary drive with btrfs, and copied all data from the backup back to the primary. So I copied / between drives several times, and the system always ran smoothly.

Restoring depends on the reason. If the primary disk fails, boot from the backup and work there until the new disk arrives, create the same structure on it, copy all data over, and then switch back to the primary disk. If the current version on the backup is not OK, you need a live disk (e.g. GRML).
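The disk-failure path can be sketched the same way. Device names, mount points, and snapshot names below are hypothetical, and as a precaution the script only prints the commands by default:

```shell
#!/bin/sh
# Rebuilding a replacement primary disk from the backup disk.
# All paths/devices are assumptions. DRY_RUN=1 (the default)
# only prints the commands; run with DRY_RUN=0 as root to execute.
: "${DRY_RUN=1}"
run() {
    if [ "$DRY_RUN" = 1 ]; then printf '%s\n' "$*"; else eval "$*"; fi
}

NEWDEV=/dev/nvme0n1p2   # the replacement disk's btrfs partition
BACKUP=/mnt/backup      # top-level mount of the backup disk

run "mkfs.btrfs $NEWDEV"
run "mount $NEWDEV /mnt/newpool"
for SUB in @root @home; do
    # full (non-incremental) send of the latest snapshot on the backup
    run "btrfs send $BACKUP/$SUB.latest | btrfs receive /mnt/newpool"
    # received subvolumes are read-only; take a writable snapshot to boot from
    run "btrfs subvolume snapshot /mnt/newpool/$SUB.latest /mnt/newpool/$SUB"
done
# remaining steps: adjust fstab/crypttab on the new disk for its UUIDs,
# then reinstall GRUB in a chroot, as described above
```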

From my latest experience, just copying the root file system over works fine. (Most of the effort was fighting with GRUB.)
