Exporting ZFS pools in preparation for a move to a new cluster #601
Comments
I think you would need to scale down the controller and the csi-node to be able to delete the resources; otherwise they will trigger the corresponding operations. I believe the intention here is that the underlying ZFS volume is kept while the K8s resources are removed, so why not scale down the controller and csi-node, force-delete the resources, export the zpool, and then redo the creation after scaling the controller and csi-node back up? Does that make sense?
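For concreteness, a rough sketch of what that sequence might look like, not an official procedure: the namespace, StatefulSet/DaemonSet names, pool name, and PVC/ZFSVolume names below are assumptions about a default install and should be checked against the actual deployment.

```sh
# Rough sketch of the suggested sequence; all object names are assumptions.
# 1. Pause the LocalPV-ZFS control plane so it cannot react to the deletions.
kubectl -n openebs scale statefulset openebs-zfs-controller --replicas=0
# DaemonSets can't be scaled to zero; pin the node plugin to a label no node has.
kubectl -n openebs patch daemonset openebs-zfs-node \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"paused-for-export":"true"}}}}}'

# 2. Delete the K8s resources while keeping the ZFS data. With the controller
#    down, the ZFSVolume finalizer has to be cleared manually (force delete).
kubectl delete pvc my-exported-pvc                      # hypothetical PVC name
kubectl -n openebs patch zfsvolumes.zfs.openebs.io pvc-1111aaaa \
  --type=merge -p '{"metadata":{"finalizers":null}}'    # hypothetical ZV name
kubectl -n openebs delete zfsvolumes.zfs.openebs.io pvc-1111aaaa

# 3. Export the pool on the node that holds the drives (pool name is an example).
zpool export zfspv-pool

# 4. Bring the control plane back up.
kubectl -n openebs scale statefulset openebs-zfs-controller --replicas=1
kubectl -n openebs patch daemonset openebs-zfs-node --type=json \
  -p '[{"op":"remove","path":"/spec/template/spec/nodeSelector/paused-for-export"}]'
```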
In case it isn't clear, we have existing LocalPV-ZFS volumes that we want to keep in `cluster-a`. Anyway, do I understand correctly that your suggestion would require taking all the LocalPV-ZFS volumes offline temporarily, by scaling down the LocalPV-ZFS controller and csi-nodes, during the export process for the new drives?
Can you please elaborate more on the use case? What I get is that you have a … and your question is how to do this process effectively.
Yes, the intention is not to delete any underlying ZFS volume while doing steps 3 and 4, but just to delete the K8s resources (ZV, PVC, SC), so that once we recreate the SC, PVC, and ZV to import the newer one, it should be just a "use existing ZFS volume" operation rather than a fresh create.
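As an illustration of what that re-creation might look like on the ZV side, here is a rough sketch of a ZFSVolume CR that points at an already-existing dataset rather than provisioning a fresh one; the name, namespace, node, pool, and capacity are placeholders, and the authoritative steps (including the matching PV) are in the import-existing-volume.md document linked later in this issue.

```sh
# Rough sketch only: recreate a ZFSVolume CR that refers to an existing dataset
# (the pool and dataset must already exist); all values below are placeholders.
cat <<'EOF' | kubectl apply -f -
apiVersion: zfs.openebs.io/v1
kind: ZFSVolume
metadata:
  name: pvc-1111aaaa            # should match the existing dataset name in the pool
  namespace: openebs            # namespace where the ZFS driver keeps its CRs
spec:
  capacity: "10737418240"       # bytes, as a string
  fsType: zfs
  ownerNodeID: worker-1         # node that owns the imported pool
  poolName: zfspv-pool
  volumeType: DATASET
EOF
```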
The use case is to make a mirror of the contents of drive(s) |
OK, thanks for clarifying. It would be pretty inconvenient to take all the LocalPV-ZFS volumes offline during the export, though. We may need to look for a different solution.
I might not be grasping it fully, but how can we do an export without taking the volumes offline at some point? cc @tiagolobocastro @avishnu, any thoughts here?
The remaining volumes should stay online, because our components are simply a K8s control plane for the kernel ZFS volumes, so nothing will go offline here. Or am I missing something?
Hi, thanks for this project! We've been using it for over a year, and it's been incredibly stable and performant.
We'd like to do the following:
1. …
2. Create new ZFS pools on the new drives in `cluster-a`.
3. Create a `StorageClass` in the cluster that encompasses the new pools.
4. Create `fstype: zfs` PVCs using the `StorageClass` from step 3.
5. …
6. `zfs export` the ZFS pools created in step 2.
7. Remove the drives from `cluster-a`.
8. Install the drives in `cluster-b`.
9. …

In a nutshell, we want to use LocalPV-ZFS on `cluster-a` to create some new ZFS filesystems, then export the underlying pools so that we can move the drives from `cluster-a` to `cluster-b` and access the existing filesystems on `cluster-b`.

Steps 1-5 and 8-9 are straightforward. We plan to use this helpful document to import the existing drives into `cluster-b`: https://github.com/openebs/zfs-localpv/blob/master/docs/import-existing-volume.md

However, it's not clear how to do step 6. I assume that, for starters, we'll want to create the `StorageClass` in step 3 with `reclaimPolicy: Retain`, so that once the PVCs in step 4 are deleted from `cluster-a`, the underlying ZFS volumes aren't deleted from their pools. But what else do we need to do to make LocalPV-ZFS happy when we eventually export the ZFS pools and remove the drives from `cluster-a`? I assume some additional custom resources associated with the underlying ZFS resources will need to be deleted, but it's not clear how or what.
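For illustration, a minimal sketch of the kind of `StorageClass` described in steps 3-4 together with the `reclaimPolicy: Retain` assumption above; the class name and pool name are placeholders, and whether `Retain` alone is enough for this export workflow is exactly what the issue asks.

```sh
# Minimal sketch only: a LocalPV-ZFS StorageClass with fstype zfs and
# reclaimPolicy Retain, so that deleting a PVC is not supposed to delete the
# underlying dataset. Class name and pool name are placeholders.
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zfs-retain              # placeholder name
provisioner: zfs.csi.openebs.io
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
parameters:
  poolname: zfspv-pool          # placeholder pool from step 2
  fstype: zfs
EOF
```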