## Description

This PR exposes the drift correction functionality of Sveltos via the `ClusterDeployment` and `MultiClusterService` objects by adding:

- `.spec.serviceSpec.syncMode`: setting this to `ContinuousWithDriftDetection` makes Sveltos automatically deploy a drift-detection-manager on the managed cluster.
- `.spec.serviceSpec.driftExclusions`: excludes particular fields of an object from drift detection/correction.
- `.spec.serviceSpec.ignoreDrift`: excludes an object as a whole from drift detection/correction.
## Verification

I created a new managed cluster with a `ClusterDeployment` that sets `syncMode: ContinuousWithDriftDetection` in its `.spec.serviceSpec`.
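A minimal sketch of the relevant part of that `ClusterDeployment` (the template, service names, and values below are placeholders, not the exact ones used):

```yaml
spec:
  # ...cluster template/credential fields omitted...
  serviceSpec:
    syncMode: ContinuousWithDriftDetection
    services:
    - template: ingress-nginx-4-11-0   # placeholder service template name
      name: ingress-nginx
      namespace: ingress-nginx
```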
### Sveltos drift-detection-manager installed on managed cluster

Verified that Sveltos automatically installs the drift-detection-manager on the managed cluster when using `syncMode: ContinuousWithDriftDetection`:

```
➜  ~ kubectl get pod -A
NAMESPACE        NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx    ingress-nginx-controller-86bd747cf9-2gc98   1/1     Running   0          123m
ingress-nginx    ingress-nginx-controller-86bd747cf9-2prlk   1/1     Running   0          123m
ingress-nginx    ingress-nginx-controller-86bd747cf9-bzhb4   1/1     Running   0          123m
kube-system      aws-cloud-controller-manager-cgcnk          1/1     Running   0          124m
kube-system      calico-kube-controllers-6cd7d8cc9f-7vp67    1/1     Running   0          125m
kube-system      calico-node-w22bx                           1/1     Running   0          125m
kube-system      calico-node-z2tsg                           1/1     Running   0          124m
kube-system      coredns-679c655b6f-6mgct                    1/1     Running   0          124m
kube-system      coredns-679c655b6f-qrrc4                    1/1     Running   0          124m
kube-system      ebs-csi-controller-977d5cc56-hztb2          5/5     Running   0          125m
kube-system      ebs-csi-controller-977d5cc56-mj758          5/5     Running   0          125m
kube-system      ebs-csi-node-gj6nj                          3/3     Running   0          124m
kube-system      ebs-csi-node-vqc6n                          3/3     Running   0          125m
kube-system      kube-proxy-kmzcq                            1/1     Running   0          124m
kube-system      kube-proxy-qdkjq                            1/1     Running   0          125m
kube-system      metrics-server-78c4ccbc7f-qxxz4             1/1     Running   0          125m
projectsveltos   drift-detection-manager-6767d5bf67-7dt9x    1/1     Running   0          125m
```
### Verifying that Sveltos corrects drift

I manually edited the `ingress-nginx-controller` Deployment and set `.spec.replicas` to 1 to introduce drift. After a few seconds, Sveltos recognized the drift and corrected it, as can be seen by watching `.spec.replicas`: it starts at 3, drops to 1 when I edit it, and eventually returns to 3 as Sveltos corrects the drift.
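For reference, the drift can be reproduced and observed with commands along these lines (illustrative; not necessarily the exact invocations used):

```sh
# Introduce drift by scaling the Deployment down to 1 replica on the managed cluster
kubectl -n ingress-nginx scale deployment ingress-nginx-controller --replicas=1

# Watch .spec.replicas: it drops to 1 and returns to 3 once Sveltos corrects the drift
kubectl -n ingress-nginx get deployment ingress-nginx-controller -w \
  -o custom-columns=NAME:.metadata.name,REPLICAS:.spec.replicas
```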
We can also see that two of the pods have a younger age, as expected:

```
➜  ~ kubectl -n ingress-nginx get pod
NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-86bd747cf9-2wbqz   1/1     Running   0          8m43s
ingress-nginx-controller-86bd747cf9-bzhb4   1/1     Running   0          152m
ingress-nginx-controller-86bd747cf9-tkhmh   1/1     Running   0          8m43s
```
### Verifying that `driftExclusions` works

I used the following drift exclusion in the `ClusterDeployment` object to exclude `.spec.replicas` from drift correction.
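The exclusion is shaped roughly like this (a sketch based on the Sveltos `DriftExclusion` type; the target selector is an assumption rather than a copy of the actual manifest):

```yaml
spec:
  serviceSpec:
    driftExclusions:
    - paths:
      - /spec/replicas
      target:
        group: apps
        version: v1
        kind: Deployment
        namespace: ingress-nginx
        name: ingress-nginx-controller
```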
Now when I manually edit the replicas down to 1, the replica count is no longer corrected back to 3.

We can also verify that this is the case by observing that the generation of the `ResourceSummary` has progressed and by seeing a patch like the following in its spec.
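On the managed cluster, the exclusion shows up in the `ResourceSummary` roughly as a remove patch (sketch; the field layout follows the Sveltos `Patch` type and is not copied verbatim):

```yaml
# kubectl get resourcesummaries.lib.projectsveltos.io -A -o yaml   (run on the managed cluster)
spec:
  patches:
  - patch: |-
      - op: remove
        path: /spec/replicas
    target:
      group: apps
      version: v1
      kind: Deployment
      namespace: ingress-nginx
      name: ingress-nginx-controller
```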
### Removing the drift exclusion

The drift exclusion can be removed by deleting the `.spec.serviceSpec.driftExclusions` field from the `ClusterDeployment` object and re-triggering the drift correction by editing any field in the `ingress-nginx-controller` Deployment. This forces a drift correction, and since the exclusion has been removed, it restores the Deployment to its original spec.

### Verifying that `ignoreDrift` works

I manually removed the `app.kubernetes.io/managed-by=Helm` label from the Deployment.
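For reference, the label can be removed with something like this (hypothetical invocation):

```sh
# Trailing "-" removes the label from the Deployment
kubectl -n ingress-nginx label deployment ingress-nginx-controller app.kubernetes.io/managed-by-
```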
Sveltos corrected it as expected, as can be seen in the watch below:

```
➜  ~ kubectl -n ingress-nginx get deployments.apps ingress-nginx-controller --show-labels -w
NAME                       READY   UP-TO-DATE   AVAILABLE   AGE    LABELS
ingress-nginx-controller   3/3     3            3           171m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/managed-by=Helm,app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx,app.kubernetes.io/version=1.11.0,helm.sh/chart=ingress-nginx-4.11.0
ingress-nginx-controller   3/3     3            3           172m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx,app.kubernetes.io/version=1.11.0,helm.sh/chart=ingress-nginx-4.11.0
ingress-nginx-controller   3/3     3            3           172m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/managed-by=Helm,app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx,app.kubernetes.io/version=1.11.0,helm.sh/chart=ingress-nginx-4.11.0
```
So now let's tell Sveltos to ignore any changes to the `ingress-nginx-controller` Deployment with the following `ignoreDrift` setting.
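The setting is shaped roughly like this (a sketch; the exact selector fields used to identify the target object are an assumption on my part):

```yaml
spec:
  serviceSpec:
    ignoreDrift:
    - group: apps
      version: v1
      kind: Deployment
      namespace: ingress-nginx
      name: ingress-nginx-controller
```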
Now when I manually remove the `app.kubernetes.io/managed-by=Helm` label again, I can observe that Sveltos does not correct the drift:

```
➜  ~ kubectl -n ingress-nginx get deployments.apps ingress-nginx-controller --show-labels -w
NAME                       READY   UP-TO-DATE   AVAILABLE   AGE     LABELS
ingress-nginx-controller   3/3     3            3           3h58m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/managed-by=Helm,app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx,app.kubernetes.io/version=1.11.0,helm.sh/chart=ingress-nginx-4.11.0
ingress-nginx-controller   3/3     3            3           3h59m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx,app.kubernetes.io/version=1.11.0,helm.sh/chart=ingress-nginx-4.11.0
```
This can also be verified by observing that `ignoreForConfigurationDrift: true` is set for the targeted object in the `ResourceSummary` spec on the managed cluster, and by observing that the `projectsveltos.io/driftDetectionIgnore: ok` annotation has been applied to the targeted object.
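Both checks can be done on the managed cluster with commands along these lines (illustrative, not the exact invocations used):

```sh
# Look for ignoreForConfigurationDrift: true on the targeted object in the ResourceSummary spec
kubectl get resourcesummaries.lib.projectsveltos.io -A -o yaml | grep -B5 ignoreForConfigurationDrift

# Look for the projectsveltos.io/driftDetectionIgnore annotation on the Deployment itself
kubectl -n ingress-nginx get deployment ingress-nginx-controller -o yaml | grep driftDetectionIgnore
```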
### Removing the ignore drift

The ignore-drift setting can be removed by deleting the `.spec.serviceSpec.ignoreDrift` field from the `ClusterDeployment` object; Sveltos will then automatically re-trigger the drift correction.