Error: Provider produced inconsistent final plan #87
Comments
I have the same issue; I have to run apply again.
Ran into this as well. Re-running apply does get past the error the second time, but of course it comes back again whenever the resource is affected on another run. In my case it was caused by the resource sourcing service info from a data source on the kubernetes provider: each run refreshes the data source, which makes the plan think everything dependent on it needs to change. When the data source actually did change, the apply fails; the next run then succeeds because the state already holds the refreshed data source values from the last refresh, so the plan no longer has to determine them dynamically.
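For what it's worth, a minimal sketch of the shape that triggers this for us (all names here are hypothetical; the point is that yaml_body interpolates a value read from a kubernetes data source, which is refreshed on every run):

# Hypothetical illustration of the failure mode described above: the
# data source is refreshed on every run, so values derived from it are
# unknown at plan time and may differ by apply time.
data "kubernetes_service" "backend" {
  metadata {
    name      = "backend"
    namespace = "default"
  }
}

resource "kubectl_manifest" "endpoint_config" {
  yaml_body = <<YAML
apiVersion: v1
kind: ConfigMap
metadata:
  name: backend-endpoint
data:
  clusterIP: ${data.kubernetes_service.backend.spec[0].cluster_ip}
YAML
}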
Having the same issue, but even after running apply three times I still have it, because it keeps treating the data field as a change.
I've been looking at this for a while now, and it's the customdiff for sure. I commented out this line and recompiled; everything works from there, but I think the provider then no longer reports a "changed" count. In the example code they do not use "SetNew" at all. Does anybody know how to fix this? https://www.terraform.io/docs/extend/resources/customizing-differences.html
Also encountered this problem. Retrying several times didn't solve it. Our workaround is to
Having the same issue here. Any update on this?
Also running into this often.
Same issue for me also. Any update? Error: Provider produced inconsistent result after apply
Same here as well:

╷
│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for kubectl_manifest.backend_deploy to include new values learned so far during apply, provider "registry.terraform.io/gavinbunney/kubectl"
│ produced an invalid new value for .yaml_body: inconsistent values for sensitive attribute.
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
╵
I'm also facing the same issue when using the
As you can see, the namespace was not loaded during the plan phase.
We've also faced the same inconsistent plan issue. The reason was that the provider does not like references to some Terraform resources (as opposed to variables) inside yaml_body. This will not work:

resource "kubectl_manifest" "test" {
  yaml_body = <<YAML
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    azure/frontdoor: enabled
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: "Prefix"
        backend:
          service:
            name: ${terraform_resource.resource_name.id}
            port:
              number: 80
YAML
}

In particular, we used the built-in random_id resource (https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/id), and references to such resources in the YAML were producing an inconsistent plan. However, references to simple vars or local values were working absolutely fine.
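For contrast, a minimal sketch of the shape that did work for us (the variable name is hypothetical; the point is that the interpolated value is already known at plan time):

# Hypothetical working variant: yaml_body only interpolates a plain
# input variable, which is known when the plan is created.
variable "backend_service_name" {
  type    = string
  default = "test-service"
}

resource "kubectl_manifest" "test_ok" {
  yaml_body = <<YAML
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: "Prefix"
        backend:
          service:
            name: ${var.backend_service_name}
            port:
              number: 80
YAML
}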
Issues #162 and #175 seem related to this one as well. The error is very telling despite being messy. Sorry in advance, the info below will be lacking; I'm just making notes so I can come back later to debug, fix, and PR. Reproducing it was fairly easy when I was testing today.

What Happened

Chronologically, this is what happened.

Error Output

Below is a stripped-down copy of the error. It's pretty straightforward.

Terraform thought that
After that, Terraform noticed that the yaml_body change resolved out of order, because what was being applied was different from what it was when Terraform was triggered to assume the work for yaml_body was done.

Other Observations

I manually expanded manifest-data-1 and manifest-data-2 from the error output to see what they looked like. What I saw was:

Where to Begin?

Need to track down how Terraform and providers communicate when a value should no longer mutate, then confirm this kubectl provider actually adheres to that. *cough* *cough*... wish me luck 🥲
Edit: I retract this statement... it was a terrible idea that didn't work as desired.
To my knowledge the kubernetes provider doesn't have a way to ingest a multi-document YAML file,
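That said, this kubectl provider does document a pattern for splitting multi-document YAML via its kubectl_file_documents data source; a minimal sketch (the file path here is hypothetical):

# Split a multi-document YAML file and apply each document as its own
# kubectl_manifest resource.
data "kubectl_file_documents" "docs" {
  content = file("${path.module}/manifests.yaml")
}

resource "kubectl_manifest" "docs" {
  count     = length(data.kubectl_file_documents.docs.documents)
  yaml_body = element(data.kubectl_file_documents.docs.documents, count.index)
}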
Edited my prior comment. After testing it... I found two problems
If there is ever an input to skip validating that the kind exists, the kubernetes provider is going to be viable. I can't really speak to having a need to feed multiple manifests in at once. My use case is coordinating roughly five resources to get info exposed in the cluster and bootstrapping Flux v2 for GitOps.
Creating the resource using templatefile seems to be working fine.

resource "kubectl_manifest" "argocd_apps" {
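A minimal sketch of what that templatefile pattern looks like (template path and variables here are hypothetical):

# Render the manifest with templatefile() so every value is resolved
# into a plain string before it reaches yaml_body.
resource "kubectl_manifest" "argocd_apps" {
  yaml_body = templatefile("${path.module}/templates/argocd-apps.yaml.tpl", {
    namespace = "argocd"
    repo_url  = "https://github.com/example/apps.git"
  })
}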
For me, using templatefile doesn't solve the issue; my use case involves a very dynamic YAML used to generate Helm values. Comparing the previous and the new values, I can't get any indication of why it's failing, as the changes observed in the YAML are correct.
Plan showed:
Apply:
The plan is incorrect: the manifest has changes that appear to be detected only during apply, which is what causes the inconsistent final plan error.
Any ideas why this is happening?
I changed one letter in the manifest afterwards and ran it again; this time the plan picked up all the changes, which were then successfully applied.
Plan:
Apply:
In the first run I also upgraded the Kubernetes cluster at the same time; not sure if that is related somehow. During plan:
gavinbunney/kubectl v1.10.0
hashicorp/azurerm v2.46.1
terraform_version: 0.14.5