[PR #1994/5248e3ff backport][stable-7] eks_nodegroup - wait for deletion of both node groups #1995

Merged
@@ -0,0 +1,4 @@
+trivial:
+  - eks_nodegroup - update integration test to wait for both nodegroups to be deleted.
+minor_changes:
+  - eks_nodegroup - ensure wait also waits for deletion to complete when ``wait==True`` (https://github.com/ansible-collections/community.aws/pull/1994).
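In user terms, the minor_changes entry means a deletion task with wait enabled now blocks until the node group is actually gone, and a re-run is a clean no-op. A minimal sketch (the nodegroup and cluster names are placeholders, and the tasks assume the community.aws collection is installed):

- name: delete an EKS managed nodegroup and wait for the deletion to finish
  community.aws.eks_nodegroup:
    name: my-nodegroup            # placeholder
    cluster_name: my-cluster      # placeholder
    state: absent
    wait: true                    # with this change, wait also covers the deletion itself
  register: delete_result

- name: re-run against the already-deleted nodegroup
  community.aws.eks_nodegroup:
    name: my-nodegroup
    cluster_name: my-cluster
    state: absent
    wait: true
  register: delete_rerun

- name: the first run reports a change, the re-run does not
  ansible.builtin.assert:
    that:
      - delete_result is changed
      - delete_rerun is not changed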
27 changes: 19 additions & 8 deletions plugins/modules/eks_nodegroup.py
@@ -607,18 +607,29 @@ def delete_nodegroups(client, module):
     clusterName = module.params["cluster_name"]
     existing = get_nodegroup(client, module, name, clusterName)
     wait = module.params.get("wait")
-    if not existing or existing["status"] == "DELETING":
-        module.exit_json(changed=False, msg="Nodegroup not exists or in DELETING status.")
-    if not module.check_mode:
-        try:
-            client.delete_nodegroup(clusterName=clusterName, nodegroupName=name)
-        except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
-            module.fail_json_aws(e, msg=f"Couldn't delete Nodegroup {name}.")
+
+    if not existing:
+        module.exit_json(changed=False, msg=f"Nodegroup '{name}' does not exist")
+
+    if existing["status"] == "DELETING":
+        if wait:
+            wait_until(client, module, "nodegroup_deleted", name, clusterName)
+            module.exit_json(changed=False, msg=f"Nodegroup '{name}' deletion complete")
+        module.exit_json(changed=False, msg=f"Nodegroup '{name}' already in DELETING state")
+
+    if module.check_mode:
+        module.exit_json(changed=True, msg=f"Nodegroup '{name}' deletion would be started (check mode)")
+
+    try:
+        client.delete_nodegroup(clusterName=clusterName, nodegroupName=name)
+    except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+        module.fail_json_aws(e, msg=f"Couldn't delete Nodegroup '{name}'.")

     if wait:
         wait_until(client, module, "nodegroup_deleted", name, clusterName)
+        module.exit_json(changed=True, msg=f"Nodegroup '{name}' deletion complete")

-    module.exit_json(changed=True)
+    module.exit_json(changed=True, msg=f"Nodegroup '{name}' deletion started")


 def get_nodegroup(client, module, nodegroup_name, cluster_name):
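The reworked delete_nodegroups() above gives separate exit paths for a missing node group, a deletion already in progress, check mode, and a completed wait. A hedged sketch of how the check-mode and wait paths look from a playbook (names are placeholders; the module parameters are the ones shown in the diff):

- name: preview the deletion without calling delete_nodegroup (check mode path)
  community.aws.eks_nodegroup:
    name: my-nodegroup          # placeholder
    cluster_name: my-cluster    # placeholder
    state: absent
  check_mode: true
  register: preview

- name: delete for real and wait until the nodegroup is fully removed (wait path)
  community.aws.eks_nodegroup:
    name: my-nodegroup
    cluster_name: my-cluster
    state: absent
    wait: true
  register: deletion

- name: both runs report a change, but only the second one deleted anything
  ansible.builtin.assert:
    that:
      - preview is changed
      - deletion is changed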
4 changes: 2 additions & 2 deletions tests/integration/targets/eks_nodegroup/tasks/cleanup.yml
@@ -74,10 +74,10 @@
     state: absent
     vpc_id: '{{ setup_vpc.vpc.id}}'
   ignore_errors: 'yes'

 - name: remove setup VPC
   ec2_vpc_net:
     cidr_block: 10.0.0.0/16
     state: absent
     name: '{{ resource_prefix }}_aws_eks'
-  ignore_errors: 'yes'
+  ignore_errors: 'yes'
8 changes: 4 additions & 4 deletions tests/integration/targets/eks_nodegroup/tasks/dependecies.yml
@@ -2,7 +2,7 @@
 # This space was a copy by aws_eks_cluster integration test
 - name: ensure IAM instance role exists
   iam_role:
-    name: ansible-test-eks_cluster_role
+    name: ansible-test-{{ tiny_prefix }}-eks_nodegroup-cluster
     assume_role_policy_document: '{{ lookup(''file'',''eks-trust-policy.json'') }}'
     state: present
     create_instance_profile: 'no'
@@ -44,7 +44,7 @@
   community.aws.ec2_vpc_route_table:
     vpc_id: '{{ setup_vpc.vpc.id }}'
     tags:
-      Name: EKS
+      Name: "EKS-ng-{{ tiny_prefix }}"
     subnets: '{{ setup_subnets.results | map(attribute=''subnet.id'') }}'
     routes:
       - dest: 0.0.0.0/0
@@ -77,9 +77,9 @@
       - eks_create.name == eks_cluster_name

 # Dependecies to eks nodegroup
-- name: create IAM instance role
+- name: create IAM instance role
   iam_role:
-    name: 'ansible-test-eks_nodegroup'
+    name: 'ansible-test-{{ tiny_prefix }}-eks_nodegroup-ng'
     assume_role_policy_document: '{{ lookup(''file'',''eks-nodegroup-trust-policy.json'') }}'
     state: present
     create_instance_profile: no
17 changes: 14 additions & 3 deletions tests/integration/targets/eks_nodegroup/tasks/full_test.yml
@@ -445,7 +445,6 @@
     state: absent
     cluster_name: '{{ eks_cluster_name }}'
   register: eks_nodegroup_result
-  check_mode: True

 - name: check that eks_nodegroup is not changed (idempotency)
   assert:
@@ -578,9 +577,21 @@
     cluster_name: '{{ eks_cluster_name }}'
     wait: True
   register: eks_nodegroup_result
-  check_mode: True

 - name: check that eks_nodegroup is not changed (idempotency)
   assert:
     that:
-      - eks_nodegroup_result is not changed
+      - eks_nodegroup_result is not changed
+
+- name: wait for deletion of name_a nodegroup (idempotency)
+  eks_nodegroup:
+    name: '{{ eks_nodegroup_name_a }}'
+    state: absent
+    cluster_name: '{{ eks_cluster_name }}'
+    wait: True
+  register: eks_nodegroup_result
+
+- name: check that eks_nodegroup is not changed (idempotency)
+  assert:
+    that:
+      - eks_nodegroup_result is not changed