[enhancement] ignore yaml_incluster field from kubectl provider #54
@ankitcharolia I feel your pain! I'm getting the same issue as well. What I suspect is happening is that most (all?) of the noise in yaml_incluster is caused by the … property. Having said that, I have two tasks in front of me.
Any progress?
Not yet. It turns out that yaml_incluster is just a hash, so an overall change in how the kubectl provider treats differences is required :( It's a bit of work; I will tackle it once I have some bandwidth available.
Been dealing with this for a while, and normally it's fine, as kubectl apply results in no changes, so I haven't added it to the ignore_fields in TF. However, today I had an application that detects whenever kubectl is run against any manifest it depends upon. This means that even if no changes are detected it still restarts applications. It's an annoying app. Adding this to ignore_fields should have no impact, as the provider will still apply when it detects a difference between the Terraform config and what was applied the last time Terraform ran. However, if this is in ignore_fields it won't detect whether the YAML in the cluster has drifted for some reason (manual changes made to the manifests directly). I just wanted to check that my understanding of this field is correct before I add it to ignore_fields?
Sadly, ignore_fields will not work here: yaml_incluster is a hash which the kubectl provider uses to detect any changes in the live object. The issue is that any change (e.g. an annotation, a label, etc.) will trigger such an update. What needs to be done (and will likely be addressed in version 3, due to the possible breaking changes it would bring) is a check between the desired manifest and its counterpart in the live object. There are some challenges, mainly linked to the scenario where we remove some properties (what to do with them: try to delete them remotely, leave them as they are, etc.). So a bit of a minefield really...
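To make the hashing point concrete, here is a minimal, self-contained sketch of the fingerprinting idea (simplified by me; the provider's actual code is quoted in the next comment). Any change in the live value of a compared field yields a completely different hash, and the hash alone does not reveal which field drifted:

```go
package main

import (
    "crypto/sha256"
    "fmt"
    "sort"
    "strings"
)

// fingerprint joins the flattened fields into one "k=v" string and hashes it,
// mirroring the getFingerprint approach quoted below (simplified sketch).
func fingerprint(fields map[string]string) string {
    keys := make([]string, 0, len(fields))
    for k := range fields {
        keys = append(keys, k)
    }
    sort.Strings(keys)
    parts := make([]string, 0, len(keys))
    for _, k := range keys {
        parts = append(parts, fmt.Sprintf("%s=%s", k, fields[k]))
    }
    sum := sha256.Sum256([]byte(strings.Join(parts, ",")))
    return fmt.Sprintf("%x", sum)
}

func main() {
    compared := map[string]string{"spec.replicas": "2", "metadata.labels.app": "demo"}
    fmt.Println(fingerprint(compared))

    // If the live value of any compared field changes, the whole fingerprint
    // changes, so yaml_incluster only ever shows an opaque hash diff.
    compared["spec.replicas"] = "3"
    fmt.Println(fingerprint(compared))
}
```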
So, I have looked a bit more into the issue. My statement from before was partially wrong. This is the code which deals with yaml_incluster:

```go
func getLiveManifestFingerprint(d *schema.ResourceData, userProvided *yaml.Manifest, liveManifest *yaml.Manifest) string {
    var ignoreFields []string = nil
    ignoreFieldsRaw, hasIgnoreFields := d.GetOk("ignore_fields")
    if hasIgnoreFields {
        ignoreFields = expandStringList(ignoreFieldsRaw.([]interface{}))
    }
    fields := getLiveManifestFields_WithIgnoredFields(ignoreFields, userProvided, liveManifest)
    return getFingerprint(fields)
}

func getLiveManifestFields_WithIgnoredFields(ignoredFields []string, userProvided *yaml.Manifest, liveManifest *yaml.Manifest) string {
    // there is a special use case for Secrets.
    // If they are defined as manifests with stringData, they will always produce a non-empty plan,
    // so we apply a small workaround here
    if userProvided.GetKind() == "Secret" && userProvided.GetAPIVersion() == "v1" {
        if stringData, found := userProvided.Raw.Object["stringData"]; found {
            // move all stringData values to data
            for k, v := range stringData.(map[string]interface{}) {
                encodedString := base64.StdEncoding.EncodeToString([]byte(fmt.Sprintf("%v", v)))
                meta_v1_unstruct.SetNestedField(userProvided.Raw.Object, encodedString, "data", k)
            }

            // and unset stringData entirely
            meta_v1_unstruct.RemoveNestedField(userProvided.Raw.Object, "stringData")
        }
    }

    flattenedUser := flatten.Flatten(userProvided.Raw.Object)
    flattenedLive := flatten.Flatten(liveManifest.Raw.Object)

    // remove any fields from the user provided set or control fields that we want to ignore
    fieldsToTrim := append(kubernetesControlFields, ignoredFields...)
    for _, field := range fieldsToTrim {
        delete(flattenedUser, field)

        // check for any nested fields to ignore
        for k := range flattenedUser {
            if strings.HasPrefix(k, field+".") {
                delete(flattenedUser, k)
            }
        }
    }

    // update the user provided flattened string with the live versions of the keys
    // this implicitly excludes anything that the user didn't provide as it was added by the kubernetes runtime (annotations/mutations etc)
    var userKeys []string
    for userKey, userValue := range flattenedUser {
        normalizedUserValue := strings.TrimSpace(userValue)

        // only include the value if it exists in the live version
        // that is, don't add to the userKeys array unless the key still exists in the live manifest
        if _, exists := flattenedLive[userKey]; exists {
            userKeys = append(userKeys, userKey)
            normalizedLiveValue := strings.TrimSpace(flattenedLive[userKey])
            flattenedUser[userKey] = normalizedLiveValue
            if normalizedUserValue != normalizedLiveValue {
                log.Printf("[TRACE] yaml drift detected in %s for %s, was: %s now: %s", userProvided.GetSelfLink(), userKey, normalizedUserValue, normalizedLiveValue)
            }
        } else {
            if normalizedUserValue != "" {
                log.Printf("[TRACE] yaml drift detected in %s for %s, was %s now blank", userProvided.GetSelfLink(), userKey, normalizedUserValue)
            }
        }
    }

    sort.Strings(userKeys)
    returnedValues := []string{}
    for _, k := range userKeys {
        returnedValues = append(returnedValues, fmt.Sprintf("%s=%s", k, flattenedUser[k]))
    }

    return strings.Join(returnedValues, ",")
}

func getFingerprint(s string) string {
    fingerprint := sha256.New()
    fingerprint.Write([]byte(s))
    return fmt.Sprintf("%x", fingerprint.Sum(nil))
}
```

Given the fact that it's unlikely that changes in …, I have thought about storing the real data in the state file and performing changes based on it. But it does have some challenges:
Any thoughts?
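For what it's worth, a rough sketch of the "store the real flattened fields in state and compare them per key" idea could look like the following. This is a hypothetical helper, not the provider's current behaviour, and it assumes the same flattened key/value representation as the code above:

```go
package main

import "fmt"

// fieldDrift compares the flattened fields stored in state with the flattened
// live fields and returns a per-key description of the drift, instead of a
// single opaque hash. Hypothetical sketch only.
func fieldDrift(stored, live map[string]string) map[string]string {
    drift := map[string]string{}
    for key, storedValue := range stored {
        liveValue, exists := live[key]
        switch {
        case !exists:
            drift[key] = fmt.Sprintf("was %q, now removed", storedValue)
        case liveValue != storedValue:
            drift[key] = fmt.Sprintf("was %q, now %q", storedValue, liveValue)
        }
    }
    return drift
}

func main() {
    stored := map[string]string{"spec.replicas": "2", "metadata.labels.app": "demo"}
    live := map[string]string{"spec.replicas": "3", "metadata.labels.app": "demo"}
    for key, change := range fieldDrift(stored, live) {
        fmt.Printf("%s: %s\n", key, change)
    }
}
```

The obvious trade-off is that the state would then contain the real values rather than a fingerprint, which is what the secret concern in the next comments is about.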
I will run my plans with TRACE logging when I get a chance at work. I was doing the upgrade from Kubernetes 1.26 to 1.27, so that may be a factor in one of the fields changing in an odd way. Running it with TRACE should show which field is causing this noise. I have seen it in a few different manifest types as well, so it should be a useful thing to check. On the state secrets:
Being able to see the drift without TRACE would be nice. The only option I could think of is getting the live manifest and "hashing" the data values. Although a hash gives no guarantees, it is better than the raw secrets. You "could" hash the data values and persist the YAML with hashed values instead of the real ones. The overhead of running that on each plan may not be ideal, but that's the best I could think of. Unfortunately, hashing isn't guaranteed to be non-reversible either.
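A minimal sketch of the "hash the data values before persisting" idea might look like this (hypothetical helper; it assumes the Secret's data is available as a plain string map, and as noted above a hash of a low-entropy value is not guaranteed to be irreversible):

```go
package main

import (
    "crypto/sha256"
    "fmt"
)

// redactSecretData replaces every value in a Secret's data map with its
// sha256 digest, so the persisted copy never holds the real values.
// Hypothetical sketch of the idea discussed above.
func redactSecretData(data map[string]string) map[string]string {
    redacted := make(map[string]string, len(data))
    for key, value := range data {
        sum := sha256.Sum256([]byte(value))
        redacted[key] = fmt.Sprintf("sha256:%x", sum)
    }
    return redacted
}

func main() {
    data := map[string]string{"password": "s3cr3t"}
    fmt.Println(redactSecretData(data))
}
```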
Good points. I think I will proceed in the following way (and release a beta version to try things out first):
Hopefully this will make it possible to target and ignore certain repeated entries (and potentially declaring them globally at the provider level might not be a bad idea).
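One way such repeated entries could be matched against the flattened keys is by treating each ignore entry as an exact key, a prefix, or a pattern, with provider-level and resource-level entries merged before trimming. This is only a sketch of the concept with hypothetical helper names, not necessarily how the beta implements it:

```go
package main

import (
    "fmt"
    "regexp"
    "strings"
)

// trimIgnoredFields removes every flattened key that matches one of the
// ignore entries, treating an entry as an exact key, a "field." prefix,
// or a regular expression. Hypothetical helper, not the provider's code.
func trimIgnoredFields(flattened map[string]string, providerIgnored, resourceIgnored []string) {
    entries := append(append([]string{}, providerIgnored...), resourceIgnored...)
    for _, entry := range entries {
        re, err := regexp.Compile(entry)
        for key := range flattened {
            switch {
            case key == entry, strings.HasPrefix(key, entry+"."):
                delete(flattened, key)
            case err == nil && re.MatchString(key):
                delete(flattened, key)
            }
        }
    }
}

func main() {
    fields := map[string]string{
        "metadata.annotations.deployment.kubernetes.io/revision": "3",
        "spec.replicas": "2",
    }
    trimIgnoredFields(fields, []string{`metadata\.annotations\..*`}, nil)
    fmt.Println(fields) // only spec.replicas remains
}
```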
Updated with more examples. Apologies for the double post; I removed the first one for better readability. So I found more causes, and I will look for the others during the work week. I haven't tested the new beta yet, but will check that when I get a chance as well. The memory and CPU resource one is obviously the cause and should be a simple fix on my side.
The other is cluster-wide resources where I set the …
The final error is related to the Istio Helm deployment. I dug into this: the updates are happening because Kubernetes is updating these webhooks with cloud-specific field exclusions that are not part of the core Istio deployment. So I will just have to add these to the ignore_fields.
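If the memory and CPU noise comes from Kubernetes normalising quantity strings (for example "1000m" versus "1", or "1024Mi" versus "1Gi"), which is a common cause of this kind of diff but only an assumption here since the exact field was not shown, then the differing spellings compare as equal once parsed as quantities rather than strings:

```go
package main

import (
    "fmt"

    "k8s.io/apimachinery/pkg/api/resource"
)

func main() {
    // "1000m" CPU and "1" CPU are the same quantity but different strings,
    // so a string-level comparison of manifests reports drift.
    cpuManifest := resource.MustParse("1000m")
    cpuLive := resource.MustParse("1")
    fmt.Println(cpuManifest.Cmp(cpuLive) == 0) // true

    memManifest := resource.MustParse("1024Mi")
    memLive := resource.MustParse("1Gi")
    fmt.Println(memManifest.Cmp(memLive) == 0) // true
}
```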
Updated the above with the cause for Istio. Work has taken a lot of my time this week, so I have not had a chance to test the beta out yet. I spoke it through with some colleagues and we all came to the same conclusion:
Considering this issue (all the cases turned out to be specific to my implementations), I don't think the provider needs any changes at the moment for this.
We are actually getting a lot of noise during terraform plans. It would be better if we could add yaml_incluster to ignore_fields.