
How to investigate what field keeps changing for yaml_body in kubectl_manifests #175

Open
martinjzyang opened this issue May 27, 2022 · 9 comments

Comments

@martinjzyang

martinjzyang commented May 27, 2022

Hi!

We are using kubectl_manifest to directly apply manifests generated by istioctl manifest generate. It seems there is always some yaml_body drift. How do I see which fields are causing the drift? Can I either make yaml_body not a sensitive field, or output the diff?

Currently we are getting ~70 resources changed each time a dependent resource changes, and at apply nothing actually gets applied.

Thanks!

Example output which is less than helpful:

  # module.k8s.module.istio.module.pilot_install.kubectl_manifest.resource_install["v1-ConfigMap-istio-sidecar-injector.yaml"] will be updated in-place
  ~ resource "kubectl_manifest" "resource_install" {
        id                      = "/api/v1/namespaces/istio-system/configmaps/istio-sidecar-injector"
        name                    = "istio-sidecar-injector"
      ~ yaml_body               = (sensitive value)
        # (15 unchanged attributes hidden)
    }

  # module.k8s.module.istio.module.pilot_install.kubectl_manifest.resource_install["v1-ConfigMap-istio.yaml"] will be updated in-place
  ~ resource "kubectl_manifest" "resource_install" {
        id                      = "/api/v1/namespaces/istio-system/configmaps/istio"
        name                    = "istio"
      ~ yaml_body               = (sensitive value)
        # (15 unchanged attributes hidden)
    }

  # module.k8s.module.istio.module.pilot_install.kubectl_manifest.resource_install["v1-Service-istiod.yaml"] will be updated in-place
  ~ resource "kubectl_manifest" "resource_install" {
        id                      = "/api/v1/namespaces/istio-system/services/istiod"
        name                    = "istiod"
      ~ yaml_body               = (sensitive value)
        # (15 unchanged attributes hidden)
    }

Plan: 1 to add, 67 to change, 1 to destroy 
EKS version: 1.22
Terraform provider versions:
terraform {
  required_version = "1.1.5"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.75.1"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.4.1"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.7.0"
    }
  }
}
@martinjzyang
Author

We would also occasionally get these errors, which are difficult to debug even though we have yaml_incluster listed in ignore_fields:

│ Error: Provider produced inconsistent final plan
│ 
│ When expanding the plan for module.k8s.module.istio.module.pilot_install.kubectl_manifest.resource_install["admissionregistration.k8s.io_v1-MutatingWebhookConfiguration-istio-sidecar-injector.yaml"] to include new values learned so far during apply, provider
│ "registry.terraform.io/gavinbunney/kubectl" produced an invalid new value for .yaml_incluster: inconsistent values for sensitive attribute.
│ 
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.

resource definition:

resource "kubectl_manifest" "resource_install" {
  for_each = fileset("${local.install_manifest_path}", "*.yaml")
  yaml_body = replace(file("${local.install_manifest_path}/${each.value}"),
    "benchling.com/terraform_will_replace_this_docker_repo",
  local.istio_docker_repo)

  ignore_fields = ["metadata", "status", "yaml_incluster"]

  depends_on = [kubectl_manifest.dependency_install]
}

@mmerickel

I dug into this a little bit in our app and noticed that there's no actual difference in the content. For example, you can use terraform plan -out tfplan; terraform show -json tfplan | python3 -m json.tool > plan.json to view the JSON and compare; below you'll see that the before and after are identical except that yaml_incluster is unknown.

Using kubectl provider 1.14.0 with terraform 1.2.2.

        {
            "address": "module.argocd_mgmt_app.kubectl_manifest.this[\"manifests/app.yaml\"]",
            "module_address": "module.argocd_mgmt_app",
            "mode": "managed",
            "type": "kubectl_manifest",
            "name": "this",
            "index": "manifests/app.yaml",
            "provider_name": "registry.terraform.io/gavinbunney/kubectl",
            "change": {
                "actions": [
                    "update"
                ],
                "before": {
                    "api_version": "argoproj.io/v1alpha1",
                    "apply_only": false,
                    "force_conflicts": false,
                    "force_new": false,
                    "id": "/apis/argoproj.io/v1alpha1/namespaces/argocd/applications/argocd-admin-mgmt",
                    "ignore_fields": null,
                    "kind": "Application",
                    "live_manifest_incluster": "e63585903c76061b8f50e2ffde57457f67d623f118a1bb3fe3ce43faba212597",
                    "live_uid": "bc9eca54-50bd-499a-844b-aa7eb75da9b9",
                    "name": "argocd-admin-mgmt",
                    "namespace": "argocd",
                    "override_namespace": null,
                    "sensitive_fields": null,
                    "server_side_apply": false,
                    "timeouts": null,
                    "uid": "bc9eca54-50bd-499a-844b-aa7eb75da9b9",
                    "validate_schema": true,
                    "wait": null,
                    "wait_for_rollout": true,
                    "yaml_body": "apiVersion: argoproj.io/v1alpha1\nkind: Application\nmetadata:\n  name: argocd-admin-mgmt\n  namespace: argocd\n  finalizers:\n    - resources-finalizer.argocd.argoproj.io\nspec:\n  project: argocd-admin-mgmt\n\n  destination:\n    server: https://kubernetes.default.svc\n    namespace: argocd\n\n  source:\n    repoURL: [email protected]:foo/argocd-admin-mgmt.git\n    targetRevision: HEAD\n    path: argocd\n\n    directory:\n      recurse: true\n\n  syncPolicy:\n    automated:\n      prune: false\n      selfHeal: true\n",
                    "yaml_body_parsed": "apiVersion: argoproj.io/v1alpha1\nkind: Application\nmetadata:\n  finalizers:\n  - resources-finalizer.argocd.argoproj.io\n  name: argocd-admin-mgmt\n  namespace: argocd\nspec:\n  destination:\n    namespace: argocd\n    server: https://kubernetes.default.svc\n  project: argocd-admin-mgmt\n  source:\n    directory:\n      recurse: true\n    path: argocd\n    repoURL: [email protected]:foo/argocd-admin-mgmt.git\n    targetRevision: HEAD\n  syncPolicy:\n    automated:\n      prune: false\n      selfHeal: true\n",
                    "yaml_incluster": "cb0fb112cfe72c51446e34f53b6f89b9d81f8838bb2832b34873e3749537e043"
                },
                "after": {
                    "api_version": "argoproj.io/v1alpha1",
                    "apply_only": false,
                    "force_conflicts": false,
                    "force_new": false,
                    "id": "/apis/argoproj.io/v1alpha1/namespaces/argocd/applications/argocd-admin-mgmt",
                    "ignore_fields": null,
                    "kind": "Application",
                    "live_manifest_incluster": "e63585903c76061b8f50e2ffde57457f67d623f118a1bb3fe3ce43faba212597",
                    "live_uid": "bc9eca54-50bd-499a-844b-aa7eb75da9b9",
                    "name": "argocd-admin-mgmt",
                    "namespace": "argocd",
                    "override_namespace": null,
                    "sensitive_fields": null,
                    "server_side_apply": false,
                    "timeouts": null,
                    "uid": "bc9eca54-50bd-499a-844b-aa7eb75da9b9",
                    "validate_schema": true,
                    "wait": null,
                    "wait_for_rollout": true,
                    "yaml_body": "apiVersion: argoproj.io/v1alpha1\nkind: Application\nmetadata:\n  name: argocd-admin-mgmt\n  namespace: argocd\n  finalizers:\n    - resources-finalizer.argocd.argoproj.io\nspec:\n  project: argocd-admin-mgmt\n\n  destination:\n    server: https://kubernetes.default.svc\n    namespace: argocd\n\n  source:\n    repoURL: [email protected]:foo/argocd-admin-mgmt.git\n    targetRevision: HEAD\n    path: argocd\n\n    directory:\n      recurse: true\n\n  syncPolicy:\n    automated:\n      prune: false\n      selfHeal: true\n",
                    "yaml_body_parsed": "apiVersion: argoproj.io/v1alpha1\nkind: Application\nmetadata:\n  finalizers:\n  - resources-finalizer.argocd.argoproj.io\n  name: argocd-admin-mgmt\n  namespace: argocd\nspec:\n  destination:\n    namespace: argocd\n    server: https://kubernetes.default.svc\n  project: argocd-admin-mgmt\n  source:\n    directory:\n      recurse: true\n    path: argocd\n    repoURL: [email protected]:foo/argocd-admin-mgmt.git\n    targetRevision: HEAD\n  syncPolicy:\n    automated:\n      prune: false\n      selfHeal: true\n"
                },
                "after_unknown": {
                    "yaml_incluster": true
                },
                "before_sensitive": {
                    "live_manifest_incluster": true,
                    "yaml_body": true,
                    "yaml_incluster": true
                },
                "after_sensitive": {
                    "live_manifest_incluster": true,
                    "yaml_body": true,
                    "yaml_incluster": true
                }
            }
        }
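Extending the comparison above, a short script can do the diff for you instead of eyeballing the JSON. This is a sketch assuming the documented terraform show -json plan schema (resource_changes[].change with before/after/after_unknown); the helper names drifted_fields and report are made up:

```python
import json

def drifted_fields(change):
    """Return (changed, unknown): attribute names whose planned value
    differs from state, and ones only known at apply time."""
    before = change.get("before") or {}
    after = change.get("after") or {}
    unknown = change.get("after_unknown") or {}
    changed = sorted(k for k in before if k in after and after[k] != before[k])
    return changed, sorted(k for k, v in unknown.items() if v)

def report(plan):
    # Print only resources that actually have a concrete or unknown diff.
    for rc in plan.get("resource_changes", []):
        changed, unknown = drifted_fields(rc["change"])
        if changed or unknown:
            print(rc["address"], "changed:", changed, "unknown:", unknown)

# Usage sketch: report(json.load(open("plan.json")))
```

On the plan shown above this would report no changed attributes and yaml_incluster as the only unknown one, which matches the observation that the drift is spurious.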

@Kieran-Bacon

I am experiencing the same behaviour when I run my scripts on another machine. All of the manifest resources show these in-place changes to their yaml_body content despite not being updated; manifests with genuine changes require an in-place change of yaml_body_parsed.

Since moving machine-to-machine is what prompted it, I was going to investigate whether file metadata and git are causing the problem.

@Galileo1

Galileo1 commented Jun 5, 2023

Has anyone been able to find a solution to this? It's still happening for me.
+1

@rohailmalhi-nbs

The issue is still there.

@amine-zembra

Same here, the issue is still there.

@devlifealways

Same here

@man0s

man0s commented Apr 5, 2024

Same as well!

@alekc
Contributor

alekc commented Apr 6, 2024

So, there is a broader discussion in the fork alekc/terraform-provider-kubectl#54

Long story short, you need to set TF_LOG=TRACE to see what's causing the issue.

Most of the time it's going to be a difference where:

  • you set a field to nil and Kubernetes or some operator removes it, causing a diff;
  • you set, for example, a CPU resource to 1000m and it gets normalized to 1, etc.
