
Export config from module #93

Merged
displague merged 4 commits into equinix:main from meaningful-outputs on Nov 16, 2021

Conversation

keithmattix (Contributor)

The current module outputs aren't great; this PR takes a stab at fixing that by exporting the kubeconfig from the controller module.
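
A minimal sketch of the shape this takes, assuming the module layout seen later in this PR; the data source filename is a placeholder:

# modules/controller_pool (sketch): read back the kubeconfig copied down from
# the primary controller and expose it to callers.
data "local_file" "kubeconfig" {
  filename = "${path.root}/kubeconfig" # placeholder for wherever kubeconfig_copy.sh writes the file
}

output "kubeconfig" {
  description = "Kubeconfig for the newly created cluster"
  value       = data.local_file.kubeconfig
}

# Root module (sketch): re-export the module output so callers and terraform output can use it.
output "kubernetes_kubeconfig" {
  description = "Kubeconfig for the newly created cluster"
  value       = module.controllers.kubeconfig
}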

keithmattix marked this pull request as ready for review on October 13, 2021
keithmattix (Contributor, Author)

Fixes #65

displague self-requested a review on October 13, 2021
keithmattix (Contributor, Author)

@displague bump on this PR; is there anything else you're looking for?

displague self-assigned this on Oct 27, 2021
displague added the enhancement (New feature or request) label on Oct 27, 2021
Review thread on modules/controller_pool/assets/kubeconfig_copy.sh (outdated, resolved)
Review thread on modules/controller_pool/main.tf (outdated, resolved)
keithmattix (Contributor, Author)

@displague good callouts. I took care of them, except for making the kubeconfig export path a variable; path.root isn't valid as a variable default.
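
For context, Terraform only accepts literal values as variable defaults, so a default of path.root is rejected. A common workaround (a sketch with a hypothetical variable name, not necessarily what this module ends up doing) is an empty default resolved through a local:

# Invalid: variable defaults must be literals, so this fails validation.
# variable "kubeconfig_output_path" {
#   default = path.root
# }

# Workaround sketch: accept an empty string and fall back to path.root at evaluation time.
variable "kubeconfig_output_path" {
  type    = string
  default = ""
}

locals {
  kubeconfig_output_path = var.kubeconfig_output_path != "" ? var.kubeconfig_output_path : path.root
}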

displague (Member) commented Nov 10, 2021

I ran into a provisioning problem, @keithmattix:

╷
│ Error: local-exec provisioner error
│
│   with module.controllers.null_resource.kubeconfig,
│   on modules/controller_pool/main.tf line 67, in resource "null_resource" "kubeconfig":
│   67:   provisioner "local-exec" {
│
│ Error running command 'sh modules/controller_pool/assets/kubeconfig_copy.sh': exit status 1. Output: scp: /etc/kubernetes/admin.conf: No such file or directory

Perhaps this is a race condition?

root@metal-multiarch-k8s-controller-primary:~# ls -latr /etc/kubernetes/admin.conf
-rw------- 1 root root 5592 Nov 10 17:06 /etc/kubernetes/admin.conf

keithmattix (Contributor, Author)

@displague I figured it out; there was a race between when Terraform marked the primary node as ready and when the setup script actually finished executing. I added a depends_on and a sleep 360 to make sure the node has enough time to finish kubeadm init.
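
A sketch of that fix; the depends_on target is a placeholder for whatever resource in the module represents the finished primary-controller setup:

resource "null_resource" "kubeconfig" {
  # Wait for the primary controller to exist before trying to copy the kubeconfig;
  # "metal_device.primary" is a placeholder name, not necessarily the one in this module.
  depends_on = [metal_device.primary]

  provisioner "local-exec" {
    # Give kubeadm time to finish writing /etc/kubernetes/admin.conf before
    # kubeconfig_copy.sh scp's it down.
    command = "sleep 360 && sh ${path.module}/assets/kubeconfig_copy.sh"
  }
}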


output "kubernetes_kubeconfig" {
description = "Kubeconfig for the newly created cluster"
value = module.controllers.kubeconfig
displague (Member) suggested a change:

   value       = module.controllers.kubeconfig
+  sensitive   = true
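Note: marking the output sensitive redacts its value in Terraform's plan/apply console output; since the kubeconfig embeds cluster admin credentials, that keeps them out of logs.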


output "kubeconfig" {
description = "Kubeconfig for the newly created cluster"
value = data.local_file.kubeconfig
displague (Member) suggested a change:

-  value       = data.local_file.kubeconfig
+  value       = abspath(data.local_file.kubeconfig)
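Note: abspath() takes a string, so the suggestion presumably resolves to the data source's filename attribute, i.e. abspath(data.local_file.kubeconfig.filename).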

displague merged commit ad8554c into equinix:main on Nov 16, 2021
keithmattix deleted the meaningful-outputs branch on November 16, 2021