Switch default pool from LVM to BTRFS-Reflink #6476
It might be a good idea to compare performance (seq read, rand read, allocation, overwrite, discard) between the three backends. See: #3639
With regard to VM boot time, the LVM storage pool was slightly faster than BTRFS, but this may still be within the margin of error (LVM: 7.43 s versus BTRFS: 8.15 s for starting a debian-10-minimal VM).
Marking as RFC because this is by no means finalized.
@DemiMarie Following that comment, I'm posting deconstructed thoughts here. No problem with QubesOS searching for the best FS to switch to for the 4.1 release, and questioning the partition scheme, but I'm a bit lost on the direction of QubesOS 4.1 and the goals here (stability? performance? backups? portability? security?). I was initially against dom0 having a separate LVM pool because of the space constraints resulting from that change, but agreed and accepted that thin-pool metadata exhaustion is a real, tangible issue that has hit me a lot before, whose resolution is sketchy and still not advertised correctly in the widget for users who simply upgrade and get hit by it. The fix in new installs resolved the issue, and QubesOS decided to split the dom0 pool out of the main pool, so that pool problems become easier for the end user to fix, or nonexistent. I am just not so sure why switching filesystems is on the table now, when LVM thin provisioning seems to fit the goal, but I'm willing to hear more about the advantages. I am interested in the reasoning for such a switch, and the probability of it happening, since I am really interested in pushing wyng-backup further, inside/outside of Heads and inside/outside of QubesOS, and in grant/self-funding the work so that QubesOS metadata would be included in wyng-backup, permitting restore/verification/fresh deployment/revert from a local (OEM recovery VM) or remote source, applying diffs only where required from an SSH remote read-only mountpoint. This filesystem choice seems less relevant than making sure those changes never consume the dom0 LVM, which should be kept separate from the VM pool so that dm-verity can be set up under Heads/Safeboot. But that is irrelevant to this ticket.
The advantages are listed above. In short, a BTRFS pool is more flexible, and it offers possibilities (such as whole-system snapshots) that I do not believe are possible with LVM thin provisioning. BTRFS also offers flexible quotas, and can always recover from out-of-space conditions provided that a small amount of additional storage (such as a spare partition set aside for the purpose) is available. Furthermore, BTRFS checksumming and scrubbing appear to be useful. Finally, new storage can be added to and removed from a BTRFS pool at any time, and the pool can be shrunk as well. BTRFS also has disadvantages: its throughput is worse than LVM's, and there are reports of bad performance on I/O-heavy workloads such as QubesOS. Benchmarks and user feedback will be needed to determine which is better, which is why this is an RFC.
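For illustration only, a rough sketch of the add/remove/shrink operations mentioned above; device paths and sizes are placeholders, not a recommended procedure for a live Qubes system:

```
# Add a spare partition to a nearly-full Btrfs pool mounted at /:
btrfs device add /dev/sdb1 /
# Optionally rebalance so existing block groups can spread onto the new device:
btrfs balance start -dusage=50 /

# Later, remove the device again (its data is migrated off first):
btrfs device remove /dev/sdb1 /

# Shrink the filesystem on the remaining device by 10 GiB:
btrfs filesystem resize -10G /
```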
I believe that |
@DemiMarie There are many questions swirling around advanced storage on Linux, but I think the main ones applicable here are about reliability and performance. Btrfs and Thin LVM appear to offer trade-offs on those qualities, and I don't think it's necessarily a good move to switch the Qubes default to a slower storage scheme at this point; storage speed is critical for Qubes' usability, and large disk image files with random write patterns are Btrfs' weakest point.

Running out of space is probably Thin LVM's weakest point, although this can be pretty easily avoided. For one, dom0 root is moving to a dedicated pool in R4.1, which will keep admin working in most situations. Adding more protections to the domU pool can also be done with some pretty simple userland code. (For those who are skeptical, note that this is the general approach taken by Stratis.)

The above-mentioned Btrfs checksums are a nice-to-have feature against accidental damage, but they unfortunately do not come close to providing authentication. To my knowledge, no CRC mode can do that, even if it's encrypted. Any attacker able to induce some calculated change in an encrypted volume would probably find the malleability of encrypted CRCs to be little or no obstacle. IMHO, the authentication aspect of the proposal is a non-starter. (BTW, it looks like dm-integrity may be able to do this now along with

As for backups, Wyng basically exists because tools like

The storage field also continues to evolve in interesting ways: Red Hat is creating Stratis while hardware manufacturers have implemented NVMe objects and enhanced parallelism. Stratis appears to be based on none other than Thin LVM's main components (dm-thin, etc.) in addition to dm-integrity, with XFS on top; all the layers are tied together to respond cohesively from a single management interface. This is being developed to avoid Btrfs maintenance and performance pitfalls.

I think some examination of Btrfs development culture may also be in order, as it has driven Red Hat to exasperation and a decision to drop Btrfs. I'm not sure just what it is about accepting Btrfs patches that presents a problem, but it makes me concerned that too much trust has been eroded and that Btrfs may become a casualty in 'storage wars' between an IBM / Red Hat camp and what I'd call an Oracle-centric camp.

FWIW, I was one of the first users to show how Qubes could take advantage of Btrfs reflinks for cloning and to request specific reflink support. Back in 2014, it was easy to assume Btrfs shortcomings would be addressed fairly soon, since those issues were so obvious. Yet they are still unresolved today.

My advice at this point is to wait and see – and experiment. There is an unfortunate dearth of comparison tests configured in a way that makes sense; they usually compare Btrfs to bare Ext4, for example, and almost always overlook LVM thin pools. So it's mostly apples vs. oranges. However, what little benchmarking I've seen of thin LVM suggests a performance advantage vs. Btrfs that would be too large to ignore. There are also Btrfs modes of use we should explore, such as any performance gain from disabling CoW on disk images; if this were deemed desirable then the Qubes Btrfs driver would have to be refactored to use subvolume snapshots instead of reflinks. An XFS reflink comparison on Qubes would also be very interesting!
In retrospect, I agree. That said (as you yourself mention below) XFS also supports reflinks and lacks this problem.
Will it be possible to reserve space for use by discards? A user needs to be able to free up space even if they make a mistake and let the pool fill up.
The way XTS works is that any change (by an attacker who does not have the key) will completely scramble a 128-bit block; my understanding is that a CRC32 with a scrambled block will only pass with probability 2⁻³². That said, BTRFS also supports Blake2b and SHA256, which would be better choices.
Good to know, thanks!
My understanding (which admittedly comes from a comment on Y Combinator) is that BTRFS moves too fast to be used in RHEL. RHEL is stuck on one kernel for an entire release, and rebasing BTRFS every release became too difficult, especially since Red Hat has no BTRFS developers.
That it would be, especially when combined with Stratis. The other major problem with LVM2 (and possibly dm-thin) seems to be snapshot and discard speeds; I expect XFS reflinks to mitigate most of those problems.
Ah, new Btrfs feature... Great! I'd consider enabling one of its hashing modes as being able to support authentication. I'd still consider the Stratis concept to be more interesting for now, as Qubes' current volume management is pretty similar but potentially even better and simpler due to having a privileged VM environment.
Agreed. While I am not aware of any way to tamper with a LUKS partition without invalidating a CRC, Blake2b is by far the better choice.
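As a hedged illustration of that checksum choice (the device path is a placeholder; the non-CRC algorithms need btrfs-progs and kernel 5.5 or newer, and the algorithm is fixed at mkfs time):

```
# Create a filesystem that uses BLAKE2b checksums instead of the default crc32c:
mkfs.btrfs --csum blake2 /dev/mapper/luks-example

# Inspect which checksum algorithm an existing filesystem uses:
btrfs inspect-internal dump-super /dev/mapper/luks-example | grep csum_type
```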
I agree, with one caveat: my understanding is that LUKS/AES-XTS-512 + BTRFS/Blake2b-256 is sufficient to protect against even malicious block devices, whereas |
@tasket: what are your thoughts on using loop devices? That’s my biggest worry regarding XFS+reflinks, which seems to otherwise be a very good choice for QubesOS. Other approaches exist, of course; for instance, we could modify |
I really wish the FS's name wasn't a misogynistic slur. That aside, my only experience with it, under 4.0, had my Qubes installation become unbootable, and I found it very difficult to fix relative to a system built on LVM. And that does strike me as relevant to the question of whether Qubes switches, while IMO this is only partly addressable via improving the documentation (since the other part is the software we have to use to restore).
@0spinboson would you mind clarifying which filesystem you are referring to?
Yes, it's simple to allocate some space in a pool using a non-zero thin LV. Just reserve the LV name in the system, make it inactive, and check that it exists on startup. Further, it would be easy to use existing space-monitoring components to also pause any VMs associated with a nearly-full pool and then show an alert dialog to the user.
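A rough sketch of that reservation idea, assuming a VG/pool named qubes_dom0/vm-pool (names and size are placeholders, and this is not the installer's behavior):

```
# Create a small thin volume used purely as an emergency space reserve.
lvcreate -V 2G -T qubes_dom0/vm-pool -n reserve
# Thin volumes allocate on write, so fill it once to actually pin the space in the pool:
dd if=/dev/urandom of=/dev/qubes_dom0/reserve bs=1M count=2048 conv=fsync
# Keep it inactive so nothing touches it; a startup check only needs to confirm it exists.
lvchange -an qubes_dom0/reserve

# If the pool ever fills up, reclaim the reserve to regain breathing room:
#   lvremove qubes_dom0/reserve
```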
I thought the journal mode would prevent that? I don't know it in detail, but something like a hash of the hashes of the last changed blocks, computed with the prior journal entry, would have to be in each journal entry.
I forgot they were a factor... it's been so long since I've used Qubes in a file-backed mode. But this should be the same for Btrfs, I think. FWIW, the XFS reflink suggestion was more speculative, along the lines of "What if we benchmark it for accessing disk images and it's almost as fast as thin LVM?". The regular XFS vs Ext4 benchmarks I'm seeing suggest it might be possible. It's also not aligned with the Stratis concept, as that is closer to thin LVM with XFS just providing the top layer. (Obviously we can't use Stratis itself unless it supports a mode that accounts for the top layer being controlled by domUs.) Also FWIW: XFS historically supported a 'subvolume' feature for accessing disk image files, instead of loopdevs. It requires that certain IO sched conditions are met before it can be enabled.
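For context, the reflink cloning under discussion is just an FICLONE-style copy; a minimal illustration with hypothetical paths (not the actual Qubes pool layout):

```
# Instant, space-sharing copy of a disk image on Btrfs or on XFS with reflink enabled:
cp --reflink=always work-private.img work-private.clone.img

# XFS needs reflink support enabled at mkfs time (the default in recent xfsprogs):
#   mkfs.xfs -m reflink=1 /dev/mapper/example
```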
'Butterface' was intentional, afaik.
No, it was not. The file system is named |
Basic question: If I install R4.1 with BTRFS by selecting custom, and then using Anaconda to automatically create the Qubes partitions with BTRFS, is that sufficient for the default pool to use BTRFS-Reflink? Or do I have to do something extra for the "Reflink" part?
Yes |
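A hedged way to double-check this after installation (the pool name varlibqubes is an assumption; it may differ on your system):

```
# In dom0: show the driver backing the default pool.
qvm-pool info varlibqubes
# Expect "driver  file-reflink" on a Btrfs root; plain "file" would mean the legacy
# loop-file driver without reflink support.

# List all pools and show which one is the current default:
qvm-pool list
qubes-prefs default_pool
```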
Ext4 has metadata checksums enabled since e2fsprogs 1.43, so at least some filesystem integrity checking is happening inside VMs:
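For reference, a quick way to verify this from inside a VM; the device path is an assumption (recent Qubes templates expose the root filesystem as /dev/xvda3):

```
# List the filesystem feature flags; look for metadata_csum in the output.
sudo dumpe2fs -h /dev/xvda3 | grep -i 'features'
```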
Does Qubes have mechanisms to report kernel errors from VMs and dom0 to the user, via toast notifications or so? In Qubes 4.2.1 and 4.2.2, the dom0 systemd journal continuously gets repeated PAM error messages :-/
@DemiMarie tasket/wyng-backup#211 With proper settings, I confirm btrfs to be way better performance-wise than lvm2 with large qubes and clones+specialization (qusal used); my tests of beesd have stopped momentarily for lack of time.
@DemiMarie @marek @tasket unfortunately I don't. Notes are scattered under bees and wyng-backup issues for the moment, which is how I optimized my btrfs setup, and I would never go back to thin LVM ever again, until ZFS is figured out and added to the installer. But before being able to do a proper perf comparison, the defaults for btrfs filesystem creation and the fstab mount options need revisiting, including detection of the block device type being HDD versus SSD/NVMe. Incomplete notes off the top of my head:
Will try to revisit scattered issues and post them here with a further edit when I have a bit more time to invest in this issue. Collaboration needed. Or at least point this comment to those comments.
Very unlikely (unless available in the upstream kernel, which is also very unlikely). We have a CI job that checks whether the ZFS pool works, and quite often we find that it doesn't work with the latest kernel yet. So official ZFS support would hold back kernel updates, which I heard from @DemiMarie is completely unacceptable if doing any sort of GPU acceleration work (which we do want at some point). Currently the said CI job doesn't work, because there is no dkms package for Fedora 41 (which R4.3 is based on) yet. Back to topic:
Is there any impact on resilience for power failure cases?
What do you mean? Can you collect specific options that need to be set (fstab and elsewhere)?
For clarification: I am not sure if Qubes OS will use the latest stable kernel or the latest LTS kernel, and if Qubes OS will skip the first few releases in a stable branch. If Qubes OS will skip the first few releases (as seems likely, since these releases often have easily-found bugs), this might give ZFS sufficient time to catch up. What must be taken regularly for GPU acceleration are weekly updates in the upstream branch that Qubes OS has chosen to follow. I would be highly surprised if those break OpenZFS, though there are obviously no guarantees. |
Not that I'm aware of. DUP is for HDDs (and a general failsafe mechanism on top of what QubesOS volatile/snapshot rotation + reflink already offer), while SSD/NVMe does its own thing in firmware and Btrfs is pretty atomic anyway. Also note that for wyng-backup (my goal), I disabled volumes-to-keep globally as well, so impacts on IO performance are widely non-observable on my side (I rely on single snapshots from the last wyng backup). Otherwise, the volatile volume of root+private, plus volumes to keep and the rotation of snapshots, was the first observable drawback of using Btrfs in the current setup without any optimizations over the installer defaults. Those defaults are good at first, but the performance penalties get heavier the more rootfs clones (templates) have volumes to keep and the more AppVM private volumes are cloned and reflinked, to the point that if one uses qusal, it's just not fun even on newer hardware with NVMe. So I would not advise testing this on Ivy Bridge unless one wants to throw the laptop out the window, with the end user thinking QubesOS is just a crappy OS.
@marmarek @tasket current fstab:
One would advocate that the system profile should be DUP for resilience, but I have observed no impact. Definitely, data should be single; metadata could be DUP (doubled in size) or single; and system should stay DUP for reliability reasons. Todo: detecting the block device type and deviating from the defaults needs to happen; to what, that is the question.
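A minimal detection sketch along those lines; the device name and the suggested defaults are assumptions, not tested installer logic:

```
#!/bin/sh
# Placeholder device; real detection would derive this from the target install disk.
DEV=nvme0n1

if [ "$(cat /sys/block/$DEV/queue/rotational)" = "1" ]; then
    echo "HDD: keep metadata DUP, consider autodefrag, skip discard options"
else
    echo "SSD/NVMe: consider noatime,ssd,discard=async,compress=zstd:1 mount options"
fi
```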
A reminder: those points of research were addressed in my joint grant application plan for wyng-backup under #858 (comment) |
I'm intrigued but skeptical that
For mount options:
Automatically applied if the drive's
Default on modern kernels, but unfortunately it's overridden to
Default |
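For concreteness, a hypothetical fstab line showing the kind of options being discussed; the UUID and the exact option set are illustrative, not the installer defaults, and their suitability depends on kernel, btrfs-progs, and drive type:

```
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  btrfs  defaults,noatime,ssd,discard=async,compress=zstd:1,space_cache=v2  0  0
```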
Mine was dup on default install.
For lack of proper benchmarking, I guess the culprit to fix here is then noautodefrag, and then redo testing.
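A hedged way to check which block-group profiles an installed system actually ended up with, and to convert them if needed:

```
# Show the data/metadata/system profiles currently in use:
sudo btrfs filesystem df /

# Convert data from DUP back to single on a one-disk system (runs online):
#   sudo btrfs balance start -dconvert=single /
```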
It's unfortunate to watch so many cycles being spent to deal with issues that wouldn't even merit mention with ZFS. Inability to use bleeding-edge kernel releases seems a bit disingenuous as a reason to disqualify ZFS given how far back from the edge Qubes (rightfully!) stays. Regarding distributing binaries (in the installer or otherwise), DKMS seems like it would be one good solution.
Odd. Does your filesystem span multiple block devices? Even then I don't see why that would result in
Just not using the |
I tend to agree with that statement. Mixing dGPU pass-through with efficient pool management (online dedup being a big win for my use case, versus not caring at all about graphics acceleration) would resolve most of my issues and the time spent with bees, which will keep doing file dedup and consuming CPU cycles I would prefer not to spend. ZFS > Btrfs on all levels, once again.
No. Single disk, two LUKS volumes, one rootfs where /var/lib/qubes is the btrfs reflink pool.
My bad: not autodefrag but discard=async. Crossref Zygo/bees#283 (comment)
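To see which discard behavior a running system actually uses, a small generic check (not specific to Qubes):

```
# Show the active mount options for the root filesystem:
findmnt -no OPTIONS / | tr ',' '\n' | grep -E 'discard|ssd'
# "discard=async" batches TRIMs in the background; plain "discard" issues them synchronously;
# no discard option means relying on a periodic fstrim.timer instead.
```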
I don't want Qubes users to be in this situation: openzfs/zfs#16590 (comment) (6.11 kernel is in Qubes stable repo for 2+ weeks already). |
To elaborate: Users who turn on GPU acceleration will need to update their kernel weekly to the latest release on their branch of choice. For those who are using |
I completely agree, and I don't want them to either. That said, this was caused by a user adding a dependency outside of the project's control upon which the base system relied. If Qubes included it instead, presumably you'd hold back the kernel until there was a compatible ZFS version, just like you do for other Qubes-specific dependencies. The fact that users are going to such lengths to use ZFS shows there is demand for it (and @Rudd-O has already gone to great effort to lay some of the groundwork). What ZFS brings to the table meshes very well with how Qubes aims to achieve its goals. I still cannot understand why we are dismissing it out of hand. (Its pedigree is also far superior to that of Btrfs.) |
Yet another reason to stay back from the bleeding edge given the security-above-all ethos of Qubes. And if we are holding the kernel back for any other reason, we can also hold it back to ensure compatibility with ZFS. |
Holding back kernel updates is 100% incompatible with GPU acceleration, because Linux does not reliably issue security advisories for GPU driver vulnerabilities and so Qubes OS’s security team does not know which patches need to be backported. Qubes OS will be offering GPU acceleration in the future because there are many users who simply cannot use Qubes OS without it. Therefore, holding back kernel updates is not a sustainable solution. It might be possible to provide a ZFS DKMS package that only supported LTS kernels, which is what Qubes OS ships by default. However, there are users who must use
There is absolutely demand for ZFS, and for very good reason: ZFS is the most reliable filesystem available today, on any operating system. To be clear, if I had a production server using an LTS kernel I would most likely choose ZFS for data volumes (though probably not the root volume unless I was on Ubuntu). Qubes OS, however, is not a server operating system, and cannot rely on LTS kernels because it needs
ZFS is not being dismissed out of hand. There are, however, multiple severe problems with it:
Then why throw the baby out with the bathwater? While licensing, secure boot, etc are figured out, those who need |
People can already use ZFS on Qubes OS, it isn't even that complicated to enable. But due to the above issues, it won't be part of the default installation. |
Is there still a desire to make btrfs the first candidate for QubesOS, and to fix the default fs/fstab options?
The latter part can be done outside of this ticket, because it affects Btrfs users even while Btrfs isn't the default installation layout. (If you find a way to reproduce the installer bizarrely setting up dup data on your single disk setup, please open an issue! Otherwise, the only thing I'm aware of that needs some tweaking in the installer and in qubes-dist-upgrade is not to hardcode |
I confirm that on fresh installation:
Agreed. Will apply this on test laptop (w530, quad core no HT, SSD Samsung 860 1TB) and see if adding |
Could someone launch a perf comparison with that fstab setting changed vs. the default, on large disks and with multiple clones + snapshots, measuring VM startup time? Total boot time to a ready-to-work state? @marmarek @DemiMarie that would help get this ticket moving with a single-change perf diff. See also https://forum.qubes-os.org/t/btrfs-and-qubes-os/6967/54
Can you do that? |
I thought this is what we were waiting for in order to redo the test bench, with a newer kernel version etc. No, I do not have a test bench to produce test results that would have the desired impact. @marmarek I suggest redoing the tests from last time for comparison, whatever they were, without synchronous discards in fstab. Maybe that could be part of the 4.3 feature freeze. My setup is stable with that change; the default install gets it wrong. Numbers are needed to prove it.
At #6476 (comment) last test results @marmarek said
On older hardware (Crucial MX500 SATA SSD) I even mount btrfs rootfs in dom0 with When I was mounting Build times for our product have improved, but I don't have apples-to-apples benchmarks. Seat-of-the-pants feeling is that I have better responsiveness with |
The problem you're addressing (if any)
In R4.0, the default install uses LVM thin pools. However, LVM appears to be optimized for servers, which results in several shortcomings:
Additionally, LVM thin pools do not support checksums. This can be achieved via dm-integrity, but that does not support TRIM.
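For reference, standalone dm-integrity can be layered under a pool via integritysetup; the device path below is a placeholder, and the TRIM limitation mentioned above applied to dm-integrity at the time this issue was written:

```
# Format and open a standalone dm-integrity device (destroys existing data on it):
integritysetup format /dev/sdb2
integritysetup open /dev/sdb2 integrity-example
# /dev/mapper/integrity-example detects silent corruption on read, but cannot repair it.
```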
Describe the solution you'd like
I propose that R4.3 use BTRFS+reflinks by default. This is a proposal ― it is by no means finalized.
Where is the value to a user, and who might that user be?
BTRFS has checksums by default, and has full support for TRIM. It is also possible to shrink a BTRFS pool without a full backup+restore. BTRFS does not slow down system startup and shutdown, and does not corrupt data if metadata space is exhausted.
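In practice the checksums are verified on every read, and a scrub verifies rarely-read data as well; a short dom0 sketch, assuming the pool lives on the root filesystem:

```
# Verify all data and metadata checksums in the background, then check progress:
sudo btrfs scrub start /
sudo btrfs scrub status /

# TRIM works as usual on the whole filesystem:
sudo fstrim -v /
```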
When combined with LUKS, BTRFS checksumming provides authentication: it is not possible to tamper with the on-disk data (except by rolling back to a previous version) without invalidating the checksum. Therefore, this is a first step towards untrusted storage domains. Furthermore, BTRFS is the default in Fedora 33 and openSUSE.
Finally, with BTRFS, VM images are just ordinary disk files, and the storage pool the same as the dom0 filesystem. This means that issues like #6297 are impossible.
Describe alternatives you've considered
None that are currently practical. bcachefs and ZFS are long-term potential alternatives, but the latter would need to be distributed as source and the former is not production-ready yet.
Additional context
I have had to recover manually from LVM thin pool problems (failure to activate, IIRC) on more than one occasion. Additionally, the only supported interface to LVM is the CLI, which is rather clumsy. The LVM pool requires nearly twice the amount of code as the BTRFS pool, for example.
Relevant documentation you've consulted
man lvm
Related, non-duplicate issues
#5053
#6297
#6184
#3244 (really a kernel bug)
#5826
#3230 ― since reflink files are ordinary disk files we could just rename them without needing a copy
#3964
everything in https://github.com/QubesOS/qubes-issues/search?q=lvm+thin+pool&state=open&type=issues
Most recent benchmarks: #6476 (comment)