Capacity control on volume is not supported #198
Comments
Interesting question. First off, how are Swift Accounts expected to be created? A second issue is that quota settings would likely need to be adjustable, right? An associated question then arises: what happens if the new quota setting is already exceeded? Finally, the ProxyFS code currently has very poor handling of the "device full" condition. Presumably, storage personnel are monitoring their Swift cluster to avoid such an exhaustion condition, but it certainly happens in testing scenarios. This aspect of ProxyFS needs to be significantly hardened, and such hardening would of course then apply to quota exhaustion as well.
This is just a rough idea, I know; I don't have any working code yet. The use case I imagine is that users can claim their provisioned volumes, much like Kubernetes' persistent volume claims [1]. The basic idea for now is ... And expanding/shrinking a volume would typically cause hard problems, so it might be OK to just create a volume of a fixed size, with that size being immutable, as a first step.
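A rough sketch of what such a claim might look like on the Kubernetes side, using the standard `k8s.io/api` types (as they were shaped at the time); the `proxyfs` StorageClass name is purely hypothetical, since no ProxyFS provisioner exists:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical StorageClass name; ProxyFS has no registered provisioner yet.
	className := "proxyfs"

	// A fixed-size claim: the requested capacity would map onto the new
	// volume's quota and, as a first step, would be immutable.
	pvc := corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "shared-docs"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteMany},
			StorageClassName: &className,
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("100Gi"),
				},
			},
		},
	}

	out, _ := json.MarshalIndent(pvc, "", "  ")
	fmt.Println(string(out))
}
```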
Good to know. If you have any pointers, information on which API requests I should care about would help with my dev work.

[1]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
Interesting that k8s actually provisions only persistent block storage... applying a file system format... rather than (or perhaps in addition to) providing NAS. Makes life simpler for them, I am sure. ProxyFS fortunately avoids the traditional file system format/reformat issues completely 😁. All we do when formatting is create the .checkpoint container. There is nothing like a "free block/sector map" or a "free inode list" to worry about when the underlying storage changes size. That said, lowering the quota below the currently consumed space would be a challenge to handle.
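A minimal sketch of the kind of check that would be involved, assuming usage is read straight from Swift (the X-Account-Bytes-Used header really is returned on an account HEAD; the endpoint, token, and the read/delete-only reaction are hypothetical):

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
)

// accountBytesUsed asks Swift how much the account backing a volume currently
// consumes, by reading the X-Account-Bytes-Used header from an account HEAD.
func accountBytesUsed(client *http.Client, accountURL, token string) (uint64, error) {
	req, err := http.NewRequest(http.MethodHead, accountURL, nil)
	if err != nil {
		return 0, err
	}
	req.Header.Set("X-Auth-Token", token)
	resp, err := client.Do(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	return strconv.ParseUint(resp.Header.Get("X-Account-Bytes-Used"), 10, 64)
}

func main() {
	// Hypothetical proxy endpoint and token.
	used, err := accountBytesUsed(http.DefaultClient,
		"http://swift-proxy:8080/v1/AUTH_volume01", "AUTH_tk-example")
	if err != nil {
		panic(err)
	}

	newQuota := uint64(10) << 30 // proposed new quota: 10 GiB
	if used > newQuota {
		// The hard case: the volume already exceeds the proposed quota.
		// One option is to accept the setting but switch the volume to a
		// read/delete-only mode until usage drops back below the quota.
		fmt.Printf("quota %d is below current usage %d: would enter read/delete-only mode\n", newQuota, used)
		return
	}
	fmt.Printf("quota %d accepted (current usage %d)\n", newQuota, used)
}
```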
Hmm... I'm not yet a master of k8s PVs. I just thought some external file systems were available, looking at the provisioner list (https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner), but ProxyFS is not in it. I was wondering if I could contribute some code to get it there.
Exactly, it's a hard problem. Thinking of the ProxyFS model, where a volume is tied to a Swift account, it could work such that provisioning a persistent volume claim creates a new ProxyFS volume, couldn't it? That's just a rough idea; obviously I need to catch up on how quotas work on both the k8s and ProxyFS sides, though.
I'd love to see the k8s PV provisioner list (https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner) specifically include ProxyFS :-). While it's not there, I would think NFS would certainly suffice. It would be very cool to see something like Samba VFS / nfs-ganesha FSAL / Swift pfs_middleware -> jrpcclient -> proxyfsd provide even tighter integration. Is this what you are thinking about, @bloodeagle40234? Also, about "auto-provisioning": this is a tedious one for sure. At this point, the steps required to add a new volume are:
Lots of moving parts... but the above steps are kind of it.

I'm really interested to hear more about quota limitation enforcement. Perhaps we could let Swift do it all... but that's not a requirement; we could do something inside ProxyFS itself. One issue we have with ProxyFS currently is that all volumes supported by a ProxyFS instance run in a single process. Today, if any one of those volumes runs into a problem with Swift (e.g. running out of quota), the entire ProxyFS instance will fail. A big TODO is to at least isolate failures in one volume from their impact on other volumes. Additionally, it would be much "friendlier" to not actually fail when, say, running out of Swift quota but, rather, go into a Read/Delete-only mode until the quota requirement is met.

Finally, if you've looked at the SnapShot stuff, there is currently no obvious way to know how much space will be freed up if a SnapShot is deleted. That might be exactly what one wants to do when running out of quota, for instance. Anyway, there is some rather simple work we could do to make SnapShot size entirely trackable: the LogSegment table in the headhunter package today only maps LogSegmentNumber to ContainerName. It could also contain the ObjectSize. Each SnapShot record in the checkpoint enumerates the LogSegmentNumbers that would become deletable if the SnapShot were deleted... hence a simple sum over those records from the LogSegment table in headhunter would answer that question. I'd love it if somebody would pick up that work :-).
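To make that SnapShot-size idea concrete, here is a hedged sketch; the type and field names are made up for illustration, not the actual headhunter API. The point is just that once a LogSegment record also carries ObjectSize, "how much would deleting this SnapShot free?" becomes a simple sum:

```go
package main

import "fmt"

// logSegmentInfo mirrors (hypothetically) what the headhunter LogSegment
// table could record if it kept ObjectSize alongside ContainerName.
type logSegmentInfo struct {
	ContainerName string
	ObjectSize    uint64
}

// snapShot mirrors (hypothetically) a checkpoint SnapShot record, which
// enumerates the LogSegmentNumbers deletable along with the SnapShot.
type snapShot struct {
	Name                       string
	DeletableLogSegmentNumbers []uint64
}

// reclaimableBytes sums the ObjectSize of every LogSegment that would be
// deleted together with the SnapShot, i.e. the space freed by deleting it.
func reclaimableBytes(logSegments map[uint64]logSegmentInfo, s snapShot) (total uint64) {
	for _, logSegmentNumber := range s.DeletableLogSegmentNumbers {
		if info, ok := logSegments[logSegmentNumber]; ok {
			total += info.ObjectSize
		}
	}
	return
}

func main() {
	// Illustrative data only; container names and sizes are made up.
	logSegments := map[uint64]logSegmentInfo{
		101: {ContainerName: "container-0", ObjectSize: 4 << 20},
		102: {ContainerName: "container-0", ObjectSize: 12 << 20},
		103: {ContainerName: "container-1", ObjectSize: 1 << 20},
	}
	s := snapShot{Name: "nightly", DeletableLogSegmentNumbers: []uint64{101, 103}}
	fmt.Printf("deleting SnapShot %q would free %d bytes\n", s.Name, reclaimableBytes(logSegments, s))
}
```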
Perhaps I'm missing something in the config, but it seems like ProxyFS doesn't support capacity control on each volume. To share volumes among working groups, capacity control (called an account quota in Swift terms) would be useful for limiting the total size of the files that users can store in a volume.
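For reference, Swift itself already exposes per-account quotas through its account_quotas middleware: a reseller admin POSTs X-Account-Meta-Quota-Bytes to the account backing a volume. A minimal sketch of setting such a quota (the endpoint and token are placeholders, and whether ProxyFS would honor the resulting errors is exactly the open question in this issue):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Placeholders: the Swift account backing a ProxyFS volume, and a token
	// with reseller-admin rights (required to set account quotas).
	accountURL := "http://swift-proxy:8080/v1/AUTH_volume01"
	token := "AUTH_tk-reseller-admin"

	// Swift's account_quotas middleware enforces X-Account-Meta-Quota-Bytes;
	// here we cap the account (and thus the volume) at 100 GiB.
	req, err := http.NewRequest(http.MethodPost, accountURL, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("X-Auth-Token", token)
	req.Header.Set("X-Account-Meta-Quota-Bytes", fmt.Sprintf("%d", uint64(100)<<30))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("quota update response:", resp.Status)
}
```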