SSO auth mode -- use k8s OIDC directly #9325
Comments
Is this fixed by #7193?
Sorta. We attach RoleBindings and ClusterRoleBindings based on group claims in an OIDC token, so we would need that to be implemented for the SubjectAccessReview mechanism. It also doesn't solve the Kubernetes access log problem, since in that PR, the Argo Workflows server ServiceAccount is still making the changes to Kubernetes. With our Kubernetes config, the OIDC tokens that Argo Workflows could get through SSO are directly usable within Kubernetes without impersonation, so they can be used exactly like the current "client" auth mechanism. We would just need the ability for Argo Workflows to use those tokens directly instead of trying to do some impersonation or delegation mechanism.
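For context, a minimal sketch (names are illustrative, not taken from this thread) of the kind of group-based binding described above; Kubernetes matches the Group subject against the groups claim the API server extracts from the OIDC token:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: platform-team-view          # illustrative name
subjects:
  - kind: Group
    name: "platform-team"           # must match a value in the token's groups claim
    apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view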
@dqsully have you got multi-tenant working? What prevents team2 from writing a YAML that uses team1's namespace/serviceaccount?
Yes, we're using multi-tenant clusters. I'm not quite sure what you're referring to @tooptoop4, but our engineers currently access Kubernetes using our IdP, and their permissions are governed by RoleBindings that apply to one or more groups, mapping up to the groups claim in their OIDC tokens.

We explicitly don't follow patterns like Jenkins X, where everything deployed to a cluster exists in a single repository, because we've dealt with the security nightmare that can create at scale. We also don't create one Kubernetes cluster per team, to avoid excess costs. We have a relatively large standard "runtime" of addons for our Kubernetes clusters, so managing fewer clusters overall means less bootstrapping cost and less management overhead.
@dqsully what I mean is, what prevents a user from going into the Argo Workflows UI and submitting a new workflow YAML that has another team's serviceAccountName and namespace in it?
In the existing "client" auth mode, my understanding is that the Argo Workflows server will use the client's bearer token to communicate with the Kubernetes API directly, e.g. for creating a workflow object. If that token is only authorized to create a workflow in a certain namespace, then the user is only allowed to run the workflow under a ServiceAccount in that same namespace. A workflow in one namespace can't run containers in a separate namespace, right?
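As a rough sketch of the namespace scoping this relies on (names and group are illustrative), a Role/RoleBinding like the following only lets members of team1's group manage Workflows in the team1 namespace, so a bearer token carrying that group is useless in other teams' namespaces:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: workflow-creator
  namespace: team1
rules:
  - apiGroups: ["argoproj.io"]
    resources: ["workflows"]
    verbs: ["create", "get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team1-workflow-creators
  namespace: team1
subjects:
  - kind: Group
    name: "team1-developers"        # illustrative OIDC group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: workflow-creator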
@dqsully I mean just like
Right, so in a cluster install of Argo Workflows, if a user isn't allowed to create a workflow in the same namespace as Argo Workflows itself, then they can't use any of the Argo Workflows ServiceAccounts.
On the Argo CD side, I guess you use AppProject to map which git repos can use which namespaces, i.e. https://github.com/argoproj/argo-cd/blob/v2.4.11/docs/operator-manual/project.yaml#L15-L19
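For reference, a trimmed AppProject sketch along the lines of the linked example (the repo pattern and namespace are placeholders):

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team1
  namespace: argocd
spec:
  sourceRepos:
    - https://github.com/example-org/team1-*    # placeholder repo pattern
  destinations:
    - namespace: team1                          # only this namespace may be targeted
      server: https://kubernetes.default.svc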
Kiali already supports a feature like this, and I had a good experience configuring Kiali's authentication. Usually, OIDC is already configured for authentication in the Kubernetes cluster. Kiali uses the same OIDC app as the cluster's authentication, so it maps to the RoleBindings defined in the cluster. In my case, there are hundreds of users in the cluster, so permissions are managed via the OIDC groups scope, and the RoleBindings are managed in the cluster. You can set this up on kube-apiserver with --oidc-groups-claim. Here is a RoleBinding example:

subjects:
- kind: Group
  name: "frontend-admins"
  apiGroup: rbac.authorization.k8s.io

If Argo Workflows can follow this flow, it will be very helpful.
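As an illustration of the API-server side of this setup, the OIDC flags mentioned above are typically passed to kube-apiserver roughly like this (issuer, client ID, and claim names are placeholders for whatever your IdP uses):

# excerpt from a kube-apiserver static pod manifest
spec:
  containers:
    - name: kube-apiserver
      command:
        - kube-apiserver
        - --oidc-issuer-url=https://idp.example.com   # placeholder issuer
        - --oidc-client-id=kubernetes                 # placeholder client ID
        - --oidc-username-claim=email
        - --oidc-groups-claim=groups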
Plus one on implementing this feature. We already leverage SubjectAccessReview to validate user access to a namespace, and we feel that managing claims in OIDC would be too much, since we do not own our OIDC provider but do own the RoleBindings, ServiceAccounts, and Roles that get deployed to our cluster.
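A minimal sketch of such a check (user, group, and namespace are illustrative), asking whether a given user may create Workflows in a particular namespace:

apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: jane@example.com            # illustrative user
  groups:
    - frontend-admins               # illustrative group
  resourceAttributes:
    group: argoproj.io
    resource: workflows
    verb: create
    namespace: team1
# status.allowed in the API response indicates whether access is granted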
Marking this as superseded by #12049, which has more upvotes (although this issue has some good details too)
Summary
Currently, Argo Workflows' SSO auth mode verifies a user and then delegates their access through a ServiceAccount in the Kubernetes cluster. The user's OIDC token is verified and then used to pick a ServiceAccount (OIDC) token from Kubernetes to use on the user's behalf.
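For comparison, the current server-side delegation is typically driven by RBAC annotations on ServiceAccounts, roughly as below (the account name and expression are illustrative; see the Argo Workflows SSO RBAC docs for the exact semantics):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: team1-sso-user
  namespace: argo
  annotations:
    # expression evaluated against the user's OIDC claims
    workflows.argoproj.io/rbac-rule: "'team1-developers' in groups"
    workflows.argoproj.io/rbac-rule-precedence: "1"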
However, in our organization, the user's OIDC token can be valid for use within Kubernetes directly, with our Kubernetes RBAC already set up to apply according to group memberships on the OIDC token.
I would like for the Argo Workflows UI to support an SSO mode where the user's OIDC id token is used against Kubernetes directly, rather than being exchanged for a ServiceAccount.
(It should be noted that we technically can already use our OIDC tokens with Argo Workflows today via the plain client auth method. However, our OIDC id tokens are short-lived, and aren't trivial to generate/extract and use. Having an SSO button in Argo Workflows that generates its own OIDC tokens and caches them in cookies with automatic token refreshing would be ideal.)
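As a sketch of where such a mode might be toggled, the existing sso block in the workflow-controller ConfigMap looks roughly like this (secret names and URLs are placeholders); the commented field at the end is purely hypothetical and does not exist today:

sso:
  issuer: https://idp.example.com            # same issuer the kube-apiserver trusts (assumption)
  clientId:
    name: argo-server-sso                    # placeholder secret name
    key: client-id
  clientSecret:
    name: argo-server-sso
    key: client-secret
  redirectUrl: https://argo.example.com/oauth2/callback
  scopes:
    - groups
  rbac:
    enabled: true
  # hypothetical option this issue is requesting (not a real field today):
  # useUserTokenDirectly: true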
Use Cases
From the perspective of a manager of multi-tenant clusters, this would simplify and help centralize our access controls, and improve our audit logging as well. Argo Workflows' UI is essentially a fancy way to access certain parts of the Kubernetes API, and our users already use our IdP to access Kubernetes, so if Argo Workflows could piggyback off of that access, we would have one less RBAC system to configure and manage. Also, unless we create a ServiceAccount for each Argo Workflows user, the current SSO (server) auth mode's mechanism "erases" information about which user performed which action in the Kubernetes access log, since the actions are taken by the delegated ServiceAccount instead of the user directly.
Message from the maintainers:
Love this enhancement proposal? Give it a 👍. We prioritise the proposals with the most 👍.