Karpenter CRD creation reported complete when deletion hasn't finished yet #164
For starters, thanks Alekc for keeping this project maintained.

I've been deploying the Karpenter CRDs using the latest version of the kubectl provider. The Karpenter EC2NodeClasses take a few minutes to be deleted, since they carry a finalizer that waits for the termination of the related EC2 instances.

Due to the recent Karpenter version upgrade, which changed the API version of the NodePool and EC2NodeClass CRDs, the resources needed to be replaced. Weirdly enough, Terraform reports the deletion and creation of the CRDs as successful in only a few seconds; however, the EC2NodeClasses, for example, are still in the process of being deleted.

I tried using the wait = true flag, but it gave the same result. I also looked into the wait_for attribute, but I don't see anything in the CRD status that would be useful. Is there any other attribute that I'm missing for this to work as expected?
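For context, the setup described above looks roughly like the following. This is a minimal sketch, assuming the alekc/kubectl provider; the resource name, the EC2NodeClass manifest, and the field path in the commented-out wait_for block are illustrative, not taken from the reporter's actual configuration.

```hcl
# Minimal, illustrative sketch of the setup described in the issue.
resource "kubectl_manifest" "karpenter_ec2nodeclass" {
  # Abbreviated EC2NodeClass manifest; the real spec is longer.
  yaml_body = <<-YAML
    apiVersion: karpenter.k8s.aws/v1
    kind: EC2NodeClass
    metadata:
      name: default
    spec:
      role: KarpenterNodeRole # illustrative IAM role name
  YAML

  # The flag mentioned above; the reporter says the replace still completes
  # in seconds even with it set, while the old object is still terminating.
  wait = true

  # The wait_for block mentioned above polls for specific fields, e.g.
  # (illustrative field path only):
  # wait_for {
  #   field {
  #     key   = "status.conditions.0.status"
  #     value = "True"
  #   }
  # }
}
```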
Not sure I am following, could you paste the related tf plan?
I will try to recreate the behavior and put it here, but the tl;dr is that Terraform says the kubectl_manifest resources have been destroyed and created successfully, when in reality the destruction hasn't even finished.
I encounter a similar issue when I try to do terraform destroy. It will try to delete the Karpenter node class and node pools but doesn't wait until they are actually deleted. Maybe we can backport this feature to this repo? kubernetes/kubernetes#64034
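Not something suggested in this thread, but one possible interim workaround for the destroy case is a destroy-time provisioner that blocks until the object is actually gone. This is only a sketch under assumptions: the guard resource name, the "default" EC2NodeClass name, and the timeout are illustrative, and it assumes kubectl is installed and authenticated wherever Terraform runs.

```hcl
# Hypothetical workaround sketch (not from this thread): make `terraform destroy`
# wait until the EC2NodeClass is really gone, despite its finalizer.
resource "null_resource" "ec2nodeclass_deletion_guard" {
  triggers = {
    # Destroy-time provisioners can only reference `self`, so the object name
    # is captured in a trigger.
    name = "default" # illustrative EC2NodeClass name
  }

  provisioner "local-exec" {
    when = destroy
    # Returns once the object no longer exists (or after the timeout);
    # `|| true` keeps destroy from failing if it is already gone.
    command = "kubectl wait --for=delete ec2nodeclass/${self.triggers.name} --timeout=10m || true"
  }
}

# The kubectl_manifest resource would additionally declare
#   depends_on = [null_resource.ec2nodeclass_deletion_guard]
# so that, on destroy, Terraform deletes the manifest first and only then runs
# the guard's wait command.
```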
@NicoForce you need to use the
@stevehipwell I did use
@NicoForce which provider version are you on?
@stevehipwell 2.0.4
@NicoForce I think you might have to provide your config and the TF plan to figure out what's happening.
Just FYI, a beta with the latest merged PR has been released, you might want to try that.
Closing this off since 2.1.1 has been released. Please reopen if the issue still persists.