# Alternative way to include AWS EKS cluster #22000
*tmisch started this conversation in Show and tell*
We are hosting an Argo CD installation on a Kubernetes cluster (not AWS EKS) inside our company-internal network. The cluster has access to the internet, and we want to add an AWS EKS based cluster to Argo CD, so that Argo CD can manage some infrastructural applications on the EKS cluster that are required to deploy and run our own application, e.g. for demonstration purposes outside the company.
The version of Argo CD currently running on our systems is `v2.12.6`. I tried to add an EKS cluster to Argo CD using the declarative setup described in the official documentation, *Using An AWS Profile For Authentication*.
The `aws-auth` ConfigMap is available in the EKS cluster, and the corresponding IAM user already allows us to access the cluster and to successfully deploy applications to it from within our deployment pipelines. I have added the `volume` and `volumeMount` with the profiles file (i.e. the credentials file) from a secret to the deployments of the `controller` and the `server` components, and verified the mounted file in the two pods.

However, I left out the `aws_session_token` from the credentials file, since session tokens provide only temporary access (at least according to the `AWS_SESSION_TOKEN` entry in the AWS documentation), and I think permanent access is required for Argo CD; I do not want to update my secret credentials file in Kubernetes every hour or day. In other use cases, it is sufficient to specify `aws_access_key_id` and `aws_secret_access_key` in the credentials file.
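For reference, a minimal sketch of the mounted secret as I set it up; the secret name, profile name, and placeholder values are from my environment, not from the Argo CD docs:

```yaml
apiVersion: v1
kind: Secret
metadata:
  # Placeholder name; this secret is mounted into the controller and server pods
  name: aws-credentials
  namespace: argocd
stringData:
  credentials: |
    # aws_session_token intentionally left out (it would only grant temporary access)
    [my-profile]
    aws_access_key_id = <access-key-id>
    aws_secret_access_key = <secret-access-key>
```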
Unfortunately, with this configuration the cluster cannot be successfully integrated into Argo CD. Instead, the following error is shown:
```
error synchronizing cache state : Get "https://<...>.eks.amazonaws.com/version?timeout=32s": getting credentials: exec: executable argocd-k8s-auth failed with exit code 20 (Client.Timeout exceeded while awaiting headers)
```
I was wondering about the value of the `profile` configuration variable in the cluster secret mentioned in the docs, which points to the credentials file. As far as I know from the AWS documentation, the environment variable `AWS_PROFILE` specifies the name of a profile inside the credentials file, not the credentials file itself. By checking the sources of Argo CD and `argocd-k8s-auth`, I came to the conclusion that the meaning of the `profile` config setting could be identical to this environment variable. Based on that, I changed the configuration of the Argo CD Helm chart and the cluster secret, and was able to successfully integrate the cluster and deploy a test application to it using Argo CD.

These are the changes to my configuration:
1. In the cluster secret, place the name of a profile that is listed in the credentials file, instead of the path to the credentials file:
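   A hedged sketch of the cluster secret; the cluster name, server URL, and profile name are placeholders from my setup (the docs put a file path such as `/mount/path/to/my-profile-file` in the `profile` field instead):

   ```yaml
   apiVersion: v1
   kind: Secret
   metadata:
     name: demo-eks-cluster
     namespace: argocd
     labels:
       argocd.argoproj.io/secret-type: cluster
   stringData:
     name: demo-eks
     server: https://<...>.eks.amazonaws.com
     config: |
       {
         "awsAuthConfig": {
           "clusterName": "demo-eks",
           "profile": "my-profile"
         },
         "tlsClientConfig": {
           "caData": "<base64-encoded CA certificate>"
         }
       }
   ```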
2. Define the standard AWS environment variable `AWS_SHARED_CREDENTIALS_FILE` via Helm values for `argocd-server` and `argocd-application-controller`, in addition to the `volume` and `volumeMount` specification already mentioned in the Argo CD documentation (a sketch of these values follows below this list).

In my opinion, this setup is now very close to the official AWS documentation.
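For illustration, here is the sketch of the Helm values referenced in step 2. This is a hedged reconstruction: the exact value keys depend on the argo-cd Helm chart version, and the secret name and mount path are placeholders from my setup:

```yaml
# Sketch of values for the argo-cd Helm chart; the same env/volumes/volumeMounts
# stanza is applied to both the controller and the server component.
controller:
  env:
    - name: AWS_SHARED_CREDENTIALS_FILE
      value: /mount/aws/credentials
  volumes:
    - name: aws-credentials
      secret:
        secretName: aws-credentials
  volumeMounts:
    - name: aws-credentials
      mountPath: /mount/aws
server:
  env:
    - name: AWS_SHARED_CREDENTIALS_FILE
      value: /mount/aws/credentials
  volumes:
    - name: aws-credentials
      secret:
        secretName: aws-credentials
  volumeMounts:
    - name: aws-credentials
      mountPath: /mount/aws
```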
Should the Argo CD documentation be updated to show this way of configuration in the section *Using An AWS Profile For Authentication*?
BTW: I needed to use the YAML keys `volumes` and `volumeMounts` instead of the keys `extraVolumes` and `extraVolumeMounts` stated in the documentation. This could also be updated.