[bug] Setting wait on a kubectl_manifest resource changes the delete cascade mode #156

Closed
stevehipwell opened this issue Aug 12, 2024 · 7 comments · Fixed by #165
Labels
enhancement New feature or request

Comments

@stevehipwell
Contributor

When the wait argument is set on a kubectl_manifest resource, the delete is modified to use meta_v1.DeletePropagationForeground instead of meta_v1.DeletePropagationBackground, which is the default when deleting with the kubectl binary directly. This behaviour looks to have been introduced a while ago, but I can't see any justification for why the change was required. Diverging from the default kubectl behaviour causes issues and is very hard to debug.

```go
// From the provider's delete path: enabling wait also flips the
// propagation policy away from kubectl's Background default.
if waitForDelete {
	propagationPolicy = meta_v1.DeletePropagationForeground
}
```

I'd like to see the default changed back to meta_v1.DeletePropagationBackground so this provider functions the same as kubectl. The cascade option could then be exposed as an argument (e.g. delete_cascade) to support setting meta_v1.DeletePropagationForeground; this could also be used to implement #145.
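
Something along these lines is what I have in mind (an untested sketch; the `delete_cascade` argument name and its value set are just a suggestion, not existing provider API):

```go
import (
	meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Sketch only: map a proposed delete_cascade argument onto a propagation
// policy, defaulting to Background to match plain `kubectl delete`.
func propagationFor(cascade string) meta_v1.DeletionPropagation {
	switch cascade {
	case "foreground":
		return meta_v1.DeletePropagationForeground
	case "orphan":
		return meta_v1.DeletePropagationOrphan
	default: // unset or "background"
		return meta_v1.DeletePropagationBackground
	}
}
```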

@alekc
Owner

alekc commented Aug 14, 2024

The answer is simple: legacy :)

PR gavinbunney/terraform-provider-kubectl#153 in the original provider was needed to fix gavinbunney/terraform-provider-kubectl#109 (where you participated as well :p).

Without it, the provider would fire the delete command but not wait for the actual result, which caused a chain reaction of missing-dependency errors.

In theory it would be possible to amend the functionality to bring it closer to the original kubectl, but that would be a breaking change and would need to wait for the 3.0 release.

@alekc alekc added the enhancement New feature or request label Aug 14, 2024
@stevehipwell
Contributor Author

@alekc my question was more about why the original implementation couldn't mirror the kubectl behaviour, where background propagation and wait work correctly together. I'm aware that this decision was a legacy one, made in the provider this was forked from, but do you or anyone else have any context on why the implementation needed to be done this way?

Exposing the delete cascade option and fixing wait when using the background option shouldn't need a v3, but I agree that changing the default would. I'm happy to take a look at this, but if you've already investigated I'd appreciate any context you've gained.
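
For reference, the kubectl-like behaviour I mean is roughly the following: delete with background propagation, then wait for the object to actually disappear. This is a hedged sketch against the dynamic client, not the provider's actual code:

```go
import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/dynamic"
)

// waitForDelete issues a background-propagation delete (kubectl's default)
// and then polls until the object is gone, mirroring `kubectl delete --wait`.
func waitForDelete(ctx context.Context, ri dynamic.ResourceInterface, name string) error {
	policy := meta_v1.DeletePropagationBackground
	err := ri.Delete(ctx, name, meta_v1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil && !apierrors.IsNotFound(err) {
		return err
	}
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := ri.Get(ctx, name, meta_v1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return true, nil // object fully removed
			}
			return false, err
		})
}
```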

@alekc
Owner

alekc commented Aug 19, 2024

It could (tbh I would prefer it to have the same behaviour as kubectl), but I can only put it in a 3.x release to avoid breaking pipelines for other people.

@stevehipwell
Contributor Author

@alekc I've opened a PR to allow the delete cascade to be specified without changing the current defaults.

@stevehipwell
Contributor Author

@alekc do you have a rough ETA for when this will be released?

@alekc
Owner

alekc commented Aug 22, 2024

I need to review (and amend if needed, given the latest merge) #157, then a round of manual testing, and it can go live. I would expect the end of the week at the latest (sadly life doesn't leave a lot of extra time).

@stevehipwell
Contributor Author

@alekc I've added #166 to switch the logic for wait_for_rollout to use a watch. I was planning on giving wait_for the same treatment, but I think #157 needs to land first.
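
For context, the watch-based approach is conceptually along these lines (an illustrative sketch, not the actual code in #166):

```go
import (
	"context"
	"fmt"

	meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/dynamic"
)

// watchUntil watches a single named object and returns once cond is
// satisfied, instead of polling its status on a fixed interval.
func watchUntil(ctx context.Context, ri dynamic.ResourceInterface, name string,
	cond func(watch.Event) bool) error {
	w, err := ri.Watch(ctx, meta_v1.ListOptions{
		FieldSelector: fields.OneTermEqualSelector("metadata.name", name).String(),
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case ev, ok := <-w.ResultChan():
			if !ok {
				// Production code would re-establish the watch here.
				return fmt.Errorf("watch closed before condition was met")
			}
			if cond(ev) {
				return nil
			}
		}
	}
}
```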
