LVM backup restoration fails under BRTFS pool #203

Closed
tlaurion opened this issue May 31, 2024 · 7 comments
@tlaurion
Contributor

This is with:

sudo wyng version
Wyng 0.8 beta release 20240528

and wyng-util-qubes 0.9 beta.

A trace extract:

sudo wyng-util-qubes --dest qubes-ssh://wyng-WRT3200ACM_raid5:root@Insurgo-WRT3200ACM/mnt/Backups/nv41 --authmin 1080 -w debug --debug restore qusal -u
	wyng-util-qubes v0.9 beta rel 20240530
	['/usr/sbin/wyng', '--dest=qubes-ssh://wyng-WRT3200ACM_raid5:root@Insurgo-WRT3200ACM/mnt/Backups/nv41', '-u', '--authmin=1080', '--debug', '--json', 'list']
	['/usr/sbin/wyng', '--dest=qubes-ssh://wyng-WRT3200ACM_raid5:root@Insurgo-WRT3200ACM/mnt/Backups/nv41', '-u', '--authmin=1080', '--debug', '-u', '--quiet', '--save-to=/tmp/wuqibythqw8/qmeta.tgz', '--session=20240531-163158', 'receive', 'wyng-qubes-metadata']
	['tar', '-xzf', '/tmp/wuqibythqw8/qmeta.tgz']

	VMs matched in session 20240531-163158:
	 qusal

	Restoring VM data volumes:
	Wyng 0.8 beta release 20240528
	Fetched archive.ini 5847
	Fetched archive.salt 424
	Fetched salt.bak 424
	metadata cipher = xchacha20-poly1305-msr
	[var]
	uuid = ce8a1804-a0f9-408f-94bf-989552462c48
	updated_at = 1717188583.8891625
	format_ver = 3
	chunksize = 131072
	compression = zstd
	compr_level = 7
	hashtype = hmac-sha256
	ci_mode = 35
	dataci_count = 4459606
	mci_count = 93475
alias vm-qusal-private = appvms/qusal/private.img
Local storage is offline: '/var/lib/qubes' is not a subvolume.
**fstype is btrfs
**pooltype rlnk not online
/var/lib/qubes None
Encrypted archive 'qubes-ssh://wyng-WRT3200ACM_raid5:root@Insurgo-WRT3200ACM/mnt/Backups/nv41' 
Last updated 2024-05-31 16:49:43.889163 (-04:00)

 Offline volumes: vm-qusal-private
JSON:
 {"/var/lib/qubes": [["vm-qusal-private", "appvms/qusal/private.img"]]}
['/usr/sbin/wyng', '--dest=qubes-ssh://wyng-WRT3200ACM_raid5:root@Insurgo-WRT3200ACM/mnt/Backups/nv41', '-u', '--authmin=1080', '--debug', '--force', '--sparse-write', '--local-from=/tmp/wuqibythqw8/wuq_vols.lst', '--session=20240531-163158', 'receive']
Traceback (most recent call last):
  File "/usr/sbin/wyng-util-qubes", line 409, in <module>
    handle_wyng_vol_error(p)
  File "/usr/sbin/wyng-util-qubes", line 162, in handle_wyng_vol_error
    errln = [x for x in text.splitlines() if x.startswith("Error on volumes:")][0]
            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^
IndexError: list index out of range
@tasket
Owner

tasket commented May 31, 2024

@tlaurion What this looks like is that /var/lib/qubes was not set up as a subvolume, and the util mis-parsed the error message.
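
A quick way to confirm (a sketch, assuming the reflink pool path is /var/lib/qubes):

    # Prints subvolume details if the path is a subvolume,
    # or errors out ("not a subvolume") if it is just a plain directory.
    sudo btrfs subvolume show /var/lib/qubes

    # Or list all subvolumes on the root filesystem and look for it:
    sudo btrfs subvolume list /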

@tasket
Owner

tasket commented May 31, 2024

@tlaurion You are new to Btrfs, so you may want to look into how many pools the Qubes installer created with qvm-pool. Also check which one is the default with qubes-prefs default_pool (you can use the same command to change the default). Possibly '/var/lib/qubes' is not where you want to be restoring VMs?
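
For reference, a rough dom0 sketch (the pool name 'varlibqubes' is only an assumption here; substitute whatever qvm-pool actually lists):

    # List the storage pools and their drivers:
    qvm-pool

    # Inspect a specific pool, e.g. the file-reflink pool backing /var/lib/qubes:
    qvm-pool info varlibqubes

    # Show the current default pool, or set a different one:
    qubes-prefs default_pool
    qubes-prefs default_pool varlibqubes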

@tasket
Owner

tasket commented Jun 1, 2024

@tlaurion Let me know if you need guidance on converting the Qubes pool into a subvolume. I am going to relax this requirement for receive/restore, but it will still be necessary for send/backup unless someone thinks of an efficient workaround.

@tlaurion
Contributor Author

tlaurion commented Jun 1, 2024

@tasket that testing laptop had bees installed before restoring. Maybe that changes the subvolume setup compared to the installer defaults for btrfs on Q4.2.1? I will be able to test on Tuesday on a non-bees laptop that replicates the bees setup, the only difference being bees deployed, configured and running.

Self-built fedora-37 RPM at QubesOS/qubes-issues#6476 (comment)

Discussion under https://forum.qubes-os.org/t/bees-and-brtfs-deduplication/20526

Qubes-builderv2 PoC (not working yet) at https://github.com/tlaurion/qubes-bees; the spec file there was used to build the RPM manually.

@tlaurion
Contributor Author

tlaurion commented Jun 3, 2024

@tlaurion Let me know if you need guidance on converting the Qubes pool into a subvolume. I am going to relax this requirement for receive/restore, but it will still be necessary for send/backup unless someone thinks of an efficient workaround.

@tasket does wyng require anything to be done on top of the Q4.2.1 default installer options, other than choosing btrfs as the partition scheme?

I will differentiate the bees-modified btrfs subvolume setup from a standard btrfs install, but yeah, I'm new to btrfs.

I just need to understand how to set things up to be compatible with bees, so that on receive/restore ops bees deduplicates the restored volumes on the fly. If I understand correctly, bees only works on subvolumes, so I guess bees modified something and I missed the details.

@tasket
Owner

tasket commented Jun 3, 2024

@tlaurion Yes, whichever path is being used by the Qubes reflink pool (possibly /var/lib/qubes) needs to be converted into its own subvolume (instead of just being a plain dir on a Btrfs filesystem) for send and monitor to work. I think I posted a shell script to do that somewhere but I don't remember now. There is a good script in this Qubes forum post; the main thing I'd do differently is to also stop the 'qubesd' service right after the qvm-shutdown step, then start it again at the end.
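
Roughly, the conversion looks like this (a hedged sketch rather than a tested script; it assumes the reflink pool lives at /var/lib/qubes on a Btrfs root and that all VMs can be shut down first):

    # Shut down all VMs, then stop qubesd so nothing touches the pool while it moves.
    qvm-shutdown --all --wait
    sudo systemctl stop qubesd

    # Create a fresh subvolume next to the existing directory.
    sudo btrfs subvolume create /var/lib/qubes.new

    # Copy the pool contents; --reflink=always keeps this fast and space-efficient on Btrfs.
    sudo cp -a --reflink=always /var/lib/qubes/. /var/lib/qubes.new/

    # Swap the new subvolume into the original path.
    sudo mv /var/lib/qubes /var/lib/qubes.old
    sudo mv /var/lib/qubes.new /var/lib/qubes

    # Restart qubesd and verify VMs start before removing /var/lib/qubes.old.
    sudo systemctl start qubesd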

@tasket
Owner

tasket commented Jun 3, 2024

BTW, receive should no longer have any issue with restoring to a non-subvolume path or a path on a non-CoW storage system (so non-subvol, or Ext4 or whatever is now OK). Maybe I'll even get receive working with non-Linux systems.

@tlaurion tlaurion closed this as completed Jun 7, 2024