
Error: Provider produced inconsistent final plan #87

Open · Fresa opened this issue Mar 24, 2021 · 17 comments

Comments


Fresa commented Mar 24, 2021

Plan showed:

  # kubectl_manifest.argocd_bootstrap will be updated in-place
  ~ resource "kubectl_manifest" "argocd_bootstrap" ***
        id                      = "/apis/argoproj.io/v1alpha1/namespaces/argocd/applications/bootstrap"
        name                    = "bootstrap"
      ~ yaml_body               = (sensitive value)
        # (13 unchanged attributes hidden)
    }

Apply:

Error: Provider produced inconsistent final plan

When expanding the plan for kubectl_manifest.argocd_bootstrap to include new
values learned so far during apply, provider
"registry.terraform.io/gavinbunney/kubectl" produced an invalid new value for
.yaml_body_parsed: was cty.StringVal("apiVersion: argoproj.io/v1alpha1\nkind:
Application\nmetadata:\n  name: bootstrap\n  namespace: argocd\nspec:\n
destination:\n    namespace: default\n    server:
https://kubernetes.default.svc\n  project: default\n  source:\n    helm:\n
valueFiles:\n      - values.yaml\n    path: helm/charts/bootstrap\n
repoURL:
https://github.com/company/project.git\n
targetRevision: main\n  syncPolicy:\n    automated: ***\n"), but now
cty.StringVal("apiVersion: argoproj.io/v1alpha1\nkind:
Application\nmetadata:\n  name: bootstrap\n  namespace: argocd\nspec:\n
destination:\n    namespace: default\n    server:
https://kubernetes.default.svc\n  project: default\n  source:\n    helm:\n
parameters:\n      - name: argocd.namespace\n        value: argocd\n      -
name: ingress_nginx.namespace\n        value: ingress-nginx\n      - name:
ingress_nginx.controller.service.annotations.service\\.beta\\.kubernetes\\.io/azure-dns-label-name\n
value: my-dns\n      - name:
ingress_nginx.controller.service.loadBalancerIP\n        value:
1.2.3.4\n      - name: ingress_nginx.controller.autoscaling.enabled\n
value: \"true\"\n      - name:
ingress_nginx.controller.autoscaling.min_replicas\n        value: \"2\"\n
valueFiles:\n      - values.yaml\n    path: helm/charts/bootstrap\n
repoURL:
https://github.com/company/project.git\n
targetRevision: main\n  syncPolicy:\n    automated: ***\n").

This is a bug in the provider, which should be reported in the provider's own
issue tracker.

The plan is incorrect: the manifest has changes, but they only seem to be detected during apply, which causes the inconsistent final plan error.

Any ideas why this is happening?

I changed one letter in the manifest afterwards and ran it again, and the plan picked up all the changes which were successfully applied.
Plan:

Terraform will perform the following actions:

  # kubectl_manifest.argocd_bootstrap will be updated in-place
  ~ resource "kubectl_manifest" "argocd_bootstrap" ***
        id                      = "/apis/argoproj.io/v1alpha1/namespaces/argocd/applications/bootstrap"
        name                    = "bootstrap"
      ~ yaml_body               = (sensitive value)
      ~ yaml_body_parsed        = <<-EOT
            apiVersion: argoproj.io/v1alpha1
            kind: Application
            metadata:
              name: bootstrap
              namespace: argocd
            spec:
              destination:
                namespace: default
                server: https://kubernetes.default.svc
              project: default
              source:
                helm:
          +       parameters:
          +       - name: argocd.namespace
          +         value: argocd
          +       - name: ingress_nginx.namespace
          +         value: ingress-nginx
          +       - name: ingress_nginx.controller.service.annotations.service\.beta\.kubernetes\.io/azure-dns-label-name
          +         value: my-dns
          +       - name: ingress_nginx.controller.service.loadBalancerIP
          +         value: 1.2.3.4
          +       - name: ingress_nginx.controller.autoscaling.enabled
          +         value: "true"
          +       - name: ingress_nginx.controller.autoscaling.minReplicas
          +         value: "2"
                  valueFiles:
                  - values.yaml
                path: helm/charts/bootstrap
                repoURL: https://github.com/company/project.git
                targetRevision: main
              syncPolicy:
                automated: ***
        EOT
        # (12 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Apply:

kubectl_manifest.argocd_bootstrap: Modifying... [id=/apis/argoproj.io/v1alpha1/namespaces/argocd/applications/bootstrap]
kubectl_manifest.argocd_bootstrap: Modifications complete after 0s [id=/apis/argoproj.io/v1alpha1/namespaces/argocd/applications/bootstrap]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

In the first run I also upgraded the Kubernetes cluster at the same time; not sure if that is related somehow. During plan:

  # azurerm_kubernetes_cluster.cluster1 will be updated in-place
  ~ resource "azurerm_kubernetes_cluster" "cluster1" ***
        id                              = "/subscriptions/abc123/rg1/providers/Microsoft.ContainerService/managedClusters/cluster1"
      ~ kubernetes_version              = "1.18.14" -> "1.19.7"
        name                            = "cluster1"
        tags                            = {
            "environment" = "dev"
        }
        # (15 unchanged attributes hidden)



      ~ default_node_pool {
            name                   = "default"
          ~ orchestrator_version   = "1.18.14" -> "1.19.7"
            tags                   = {}
            # (14 unchanged attributes hidden)
        }



        # (5 unchanged blocks hidden)
    }

gavinbunney/kubectl v1.10.0
hashicorp/azurerm v2.46.1
terraform_version: 0.14.5


yongzhang commented Mar 25, 2021

I have the same issue; I have to run terraform apply a couple of times.


mmerickel commented Mar 31, 2021

Ran into this as well. Re-running apply does get past the error the second time, but of course it comes back if the resource is affected on another run. In my case it was caused by the resource sourcing service info from a data source on the kubernetes provider. Each run refreshes the data source, which makes the plan think anything dependent on it may change. When the data source did change, the apply failed; the next run then succeeded, because it already had the current state of the data source from the last refresh and no longer had to determine it dynamically during apply.
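
For illustration, a minimal sketch of the pattern that bit me (the names and the data source attribute are stand-ins, not my actual config):

data "kubernetes_service" "ingress" {
  metadata {
    name      = "ingress-nginx-controller"
    namespace = "ingress-nginx"
  }
}

resource "kubectl_manifest" "app_config" {
  # The interpolated value comes from a data source that is refreshed on
  # every run, so yaml_body can differ between plan and apply.
  yaml_body = <<-YAML
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: app-config
      namespace: default
    data:
      ingress_cluster_ip: ${data.kubernetes_service.ingress.spec.0.cluster_ip}
  YAML
}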

@ammartins

Having the same issue, but even after running apply 3 times I still have it, because it keeps seeing the data field as a change.


tehlers320 commented May 4, 2021

I've been looking at this for a while now; it's the customdiff for sure. I commented out this line and recompiled. Everything works from there, but I think this then doesn't provide a "changed" count.

In the example code they do not use SetNew at all. Anybody know how to fix this? https://www.terraform.io/docs/extend/resources/customizing-differences.html

@manjunjiao

Also encountered this problem. Retrying several times didn't solve it. Our workaround is to terraform state rm the "inconsistent" resource and run terraform plan/apply again. Since the provider performs the equivalent of kubectl apply, the resource is simply re-applied on the next run.
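
The commands for that workaround look like this (the resource address is just an example; substitute the failing one):

terraform state rm kubectl_manifest.argocd_bootstrap  # drops the state entry only; the cluster object stays
terraform apply                                       # kubectl apply is idempotent, so this re-adopts the object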

@tvblomberg

Having the same issue here. Any update on this?


bcg62 commented Mar 7, 2022

also running into this often

@mandeepgoyat

Same issue for me as well. Any update?

Error: Provider produced inconsistent result after apply
provider "provider["registry.terraform.io/gavinbunney/kubectl"]" produced an
│ unexpected new value: Root resource was present, but now absent.

│ This is a bug in the provider, which should be reported in the provider's own issue tracker.

@sbuvaneshkumar

Same here as well.



│ Error: Provider produced inconsistent final plan

│ When expanding the plan for kubectl_manifest.backend_deploy to include new values learned so far during apply, provider "registry.terraform.io/gavinbunney/kubectl"
│ produced an invalid new value for .yaml_body: inconsistent values for sensitive attribute.

│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
╵

@ppodevlabs

I'm also facing the same issue: using the override_namespace option makes the plan change during apply.

 When expanding the plan for module.kubernetes_apps.kubectl_manifest.namespace[3] to include new values learned so far during apply, provider "registry.terraform.io/gavinbunney/kubectl" produced an invalid new value for .yaml_body_parsed: was
│ cty.StringVal("apiVersion: v1\nkind: ServiceAccount\nmetadata:\n  labels:\n    app.kubernetes.io/component: dex-server\n    app.kubernetes.io/name: argocd-dex-server\n    app.kubernetes.io/part-of: argocd\n  name: argocd-dex-server\n"), but
│ now cty.StringVal("apiVersion: v1\nkind: ServiceAccount\nmetadata:\n  labels:\n    app.kubernetes.io/component: dex-server\n    app.kubernetes.io/name: argocd-dex-server\n    app.kubernetes.io/part-of: argocd\n  name: argocd-dex-server\n
│ namespace: argocd\n").
│

As you can see, the namespace was not loaded during the plan phase.
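
A trimmed-down sketch of the setup that triggers it (the manifest here is cut down from the ArgoCD install manifests):

resource "kubectl_manifest" "namespace" {
  # override_namespace injects metadata.namespace into the rendered manifest;
  # that injection only shows up in yaml_body_parsed at apply time, so the
  # planned and final values disagree.
  override_namespace = "argocd"

  yaml_body = <<-YAML
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        app.kubernetes.io/name: argocd-dex-server
      name: argocd-dex-server
  YAML
}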


euqen commented Jul 5, 2022

We've also faced the same inconsistent-plan issue. The reason was that the provider does not like references to some Terraform resources (as opposed to variables):

This will not work:

resource "kubectl_manifest" "test" {
    yaml_body = <<YAML
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    azure/frontdoor: enabled
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: "Prefix"
        backend:
          service:
            name: ${terraform_resource.resource_name.id}
            port:
              number: 80
YAML
}

In particular, we used the built-in random_id resource (https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/id), and references to such resources in the YAML were producing an inconsistent plan. References to plain variables or local values, however, worked absolutely fine.
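
By contrast, the same manifest works when the interpolated value comes from a plain local that is known at plan time (the value is illustrative):

locals {
  backend_service_name = "test-service"
}

resource "kubectl_manifest" "test_ok" {
  yaml_body = <<-YAML
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: test-ingress
      annotations:
        nginx.ingress.kubernetes.io/rewrite-target: /
    spec:
      rules:
      - http:
          paths:
          - path: /testpath
            pathType: "Prefix"
            backend:
              service:
                name: ${local.backend_service_name}
                port:
                  number: 80
  YAML
}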


ghost commented Sep 25, 2022

Issues #162 and #175 seem related to this one as well. The error is very telling, despite being messy.

Apologies in advance, the info below will be lacking. I'm just making notes so I can come back later to debug, fix, and PR. Reproducing it was fairly easy when I was testing today.

What Happened

Chronologically, this is what happened (a sketch of the configuration follows the list):

  • kubectl_manifest.fluxcd-bootstrap-kustomization already owned an existing cluster resource
  • module.cert-manager-role was added, using terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks
  • references to module.cert-manager-role.iam_role_arn and module.cert-manager-role.iam_role_name were added to kubectl_manifest.fluxcd-bootstrap-kustomization.yaml_body
  • terraform plan worked without issue
  • terraform apply --auto-approve failed with Error: Provider produced inconsistent final plan
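
A hypothetical sketch of that configuration (the module source and its outputs are the real ones; the Kustomization body is illustrative):

module "cert-manager-role" {
  source    = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
  role_name = "cert-manager"
}

resource "kubectl_manifest" "fluxcd-bootstrap-kustomization" {
  # The module outputs are unknown at plan time on the run that creates the
  # module, so yaml_body changes between plan and apply.
  yaml_body = <<-YAML
    apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
    kind: Kustomization
    metadata:
      name: flux-system
      namespace: flux-system
    spec:
      interval: 10m
      path: ./clusters/production
      prune: true
      sourceRef:
        kind: GitRepository
        name: flux-system
      postBuild:
        substitute:
          cert_manager_role_arn: ${module.cert-manager-role.iam_role_arn}
          cert_manager_role_name: ${module.cert-manager-role.iam_role_name}
  YAML
}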

Error Output

Below is a stripped-down copy of the error. It's pretty straightforward.

Error: Provider produced inconsistent final plan

When expanding the plan for kubectl_manifest.fluxcd-bootstrap-kustomization to include new values learned so far during apply, provider "registry.terraform.io/gavinbunney/kubectl" produced an invalid new value for .yaml_body_parsed: was cty.StringVal("manifest-data-1 was here"), but now cty.StringVal("manifest-data-2 was here").

This is a bug in the provider, which should be reported in the provider's own issue tracker.

Terraform thought that kubectl_manifest was done and accepted kubectl_manifest.fluxcd-bootstrap-kustomization.yaml_body before module.cert-manager-role.iam_role_arn and module.cert-manager-role.iam_role_name were resolved.

After module.cert-manager-role.iam_role_arn and module.cert-manager-role.iam_role_name were resolved, kubectl_manifest.fluxcd-bootstrap-kustomization.yaml_body proceeded to continue making changes.

Terraform noticed that the yaml_body change resolved out of order: the value actually being applied was different from the value it had when it concluded the work for yaml_body was done.

Other Observations

I manually expanded manifest-data-1 and manifest-data-2 from the error output to see what they looked like:

  • manifest-data-1 didn't have my newly created role values filled in
  • manifest-data-2 did have my newly created role values filled in

Where to Begin?

I need to track down how Terraform and providers communicate that a value should no longer mutate, then confirm this kubectl provider actually adheres to that.


*cough* *cough*... wish me luck - 🥲


ghost commented Oct 12, 2022

Edit: I retract this statement... it was a terrible idea that didn't work as desired.


mmerickel commented Oct 12, 2022

To my knowledge the kubernetes provider doesn't have a way to ingest a multi-document YAML file; yamldecode() doesn't even support reading them. Yes, I could split them out into a folder myself before adding them to Terraform, but handling them directly is a nice feature of the kubectl provider.
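
For reference, the kubectl provider pattern I mean looks roughly like this (the file name is illustrative; kubectl_file_documents splits a multi-document file into individual manifests):

data "kubectl_file_documents" "manifests" {
  content = file("${path.module}/manifests.yaml")
}

resource "kubectl_manifest" "all" {
  # One kubectl_manifest per document in the multi-document file.
  for_each  = data.kubectl_file_documents.manifests.manifests
  yaml_body = each.value
}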


ghost commented Oct 12, 2022

Edited my prior comment. After testing it, I found two problems:

  1. The kubernetes provider is not going to be viable if you have CRs whose kinds won't exist until a dependency is applied.
  2. It just complains when used inside a module with yamldecode if the contents contain values that still need to be resolved, which leads to converting the entire thing to HCL.

If there is ever an input to skip validating that the kind exists, the kubernetes provider will become viable.

I can't really speak to needing to feed multiple manifests in at once. My use case is coordinating roughly 5 resources to get info exposed in the cluster and bootstrapping Flux v2 for GitOps.

@AmitKatyal-Sophos

Creating the resource using templatefile seems to work fine:

resource "kubectl_manifest" "argocd_apps" {
for_each = XXXXX
yaml_body = templatefile("${path.module}/manifests/apps.yaml", {
})
wait = "true"
}

@darkxeno

For me, using templatefile doesn't solve the issue; my use case includes a very dynamic YAML used to generate Helm values.

Comparing the previous and new values, I can't get any indication of why it's failing, as the changes observed in the YAML are correct.
