SSO auth mode -- use k8s OIDC directly #9325

Closed
dqsully opened this issue Aug 9, 2022 · 12 comments
Labels
area/sso-rbac solution/superseded This PR or issue has been superseded by another one (slightly different from a duplicate) type/feature Feature request

Comments

dqsully commented Aug 9, 2022

Summary

Currently, Argo Workflows' SSO auth mode verifies a user and then delegates their access through a ServiceAccount in the Kubernetes cluster. The user's OIDC token is verified, and then used to pick a ServiceAccount (OIDC) token from Kubernetes to use on the user's behalf.
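
For context, that delegation is configured through SSO RBAC annotations on ServiceAccounts; a rough sketch (the ServiceAccount name and group expression here are illustrative):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: argo-admin                # illustrative name
  namespace: argo
  annotations:
    # expression evaluated against the user's verified OIDC claims
    workflows.argoproj.io/rbac-rule: "'argo-admins' in groups"
    # when multiple rules match, the highest precedence wins
    workflows.argoproj.io/rbac-rule-precedence: "1"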

However, in our organization, the user's OIDC token can be valid for use within Kubernetes directly, with our Kubernetes RBAC already set up to apply according to group memberships on the OIDC token.
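
For reference, this direct trust comes from the kube-apiserver OIDC flags. A kubeadm-style sketch, with a placeholder issuer URL and our group claim name:

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    oidc-issuer-url: https://idp.example.com   # placeholder IdP
    oidc-client-id: kubernetes
    oidc-username-claim: email
    oidc-groups-claim: k8s_groups              # groups consumed by our RBAC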

I would like the Argo Workflows UI to support an SSO mode where the user's OIDC ID token is used against Kubernetes directly, rather than being exchanged for a ServiceAccount token.

(It should be noted that we technically can already use our OIDC tokens with Argo Workflows today via the plain client auth method. However, our OIDC ID tokens are short-lived and aren't trivial to generate, extract, and use. Having an SSO button in Argo Workflows that obtains its own OIDC tokens, caches them in cookies, and refreshes them automatically would be ideal.)
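
For anyone unfamiliar, the client auth method is selected via the server's auth-mode flag; a sketch of the relevant Deployment fragment, assuming a standard install:

# argo-server container args (fragment)
containers:
- name: argo-server
  args:
  - server
  - --auth-mode=client   # the server passes the user's bearer token through to Kubernetes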

Use Cases

From the perspective of a manager of multi-tenant clusters, this would simplify and centralize our access controls and improve our audit logging. Argo Workflows' UI is essentially a convenient way to access certain parts of the Kubernetes API, and our users already access Kubernetes through our IdP, so if Argo Workflows could piggyback off that access, we would have one less RBAC system to configure and manage. There is also an auditing benefit: unless a ServiceAccount is created for each Argo Workflows user, the current SSO (server) auth mode "erases" which user performed which action in the Kubernetes access log, because actions are taken by the delegated ServiceAccount rather than by the user directly.


Message from the maintainers:

Love this enhancement proposal? Give it a 👍. We prioritise the proposals with the most 👍.

@dqsully dqsully added the type/feature Feature request label Aug 9, 2022
alexec (Contributor) commented Aug 10, 2022

Is this fixed by #7193?

dqsully (Author) commented Aug 10, 2022

Sorta. We attach RoleBindings and ClusterRoleBindings based on group claims in an OIDC token, so group support would need to be implemented in the SubjectAccessReview mechanism. It also doesn't solve the Kubernetes access log problem, since in that PR the Argo Workflows server ServiceAccount is still the one making changes to Kubernetes.
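
For illustration, a SubjectAccessReview that carried our group memberships would need to look something like this (the user, group, and namespace names are hypothetical):

apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: "jane@example.com"      # hypothetical user
  groups:
  - "team1-engineers"           # taken from the OIDC group claims
  resourceAttributes:
    group: argoproj.io
    resource: workflows
    verb: create
    namespace: team1-dev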

With our Kubernetes config, the OIDC tokens that Argo Workflows could get through SSO are directly usable within Kubernetes without impersonation, so they can be used exactly like the current "client" auth mechanism. We would just need the ability for Argo Workflows to use those tokens directly instead of trying to do some impersonation or delegation mechanism.

tooptoop4 (Contributor) commented

@dqsully have you got multi-tenant working? What prevents team2 from writing a YAML that uses team1's namespace/ServiceAccount?

dqsully (Author) commented Aug 10, 2022

Yes, we're using multi-tenant clusters. I'm not quite sure what you're referring to @tooptoop4, but our engineers currently access Kubernetes using our IdP, and their permissions are governed by RoleBindings that apply to one or more groups, which map to the k8s_groups claim in the OIDC token returned from our IdP. At the moment we use one namespace prefix per team, and allow engineers in that team to perform certain actions within their team's namespaces according to their role on that team. Any automated mechanisms that interact with Kubernetes are either global and managed by our infrastructure team, or given access to only a single namespace via a custom automation that creates a ServiceAccount per namespace with deployment permissions for that namespace.
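
As a sketch, one of those per-team bindings looks roughly like this (all names are illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team1-engineers-edit
  namespace: team1-dev                # one of the team's prefixed namespaces
subjects:
- kind: Group
  name: team1-engineers               # matches a value in the k8s_groups claim
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                          # built-in aggregate role
  apiGroup: rbac.authorization.k8s.io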

We explicitly don't follow patterns like Jenkins X, where everything deployed to a cluster lives in a single repository, because we've dealt with the security nightmare that can create at scale. We also don't create one Kubernetes cluster per team, to avoid excess costs. We have a relatively large standard "runtime" of addons for our Kubernetes clusters, so managing fewer clusters overall means less bootstrapping cost and less management overhead.

tooptoop4 (Contributor) commented

@dqsully what I mean is: what prevents a user from going into the Argo Workflows UI and submitting a new workflow YAML that has another team's serviceAccountName and namespace in it?

dqsully (Author) commented Aug 11, 2022

In the existing "client" auth mode, my understanding is that the Argo Workflows server will use the client's bearer token to communicate with the Kubernetes API directly, e.g. for creating a workflow object. If that token is only authorized to create a workflow in a certain namespace, then the user is only allowed to run the workflow under a ServiceAccount in that same namespace. A workflow in one namespace can't run containers in a separate namespace, right?
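
For concreteness, this is the kind of namespace-scoped permission I mean, as an illustrative Role:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: workflow-user        # illustrative
  namespace: team1-dev
rules:
- apiGroups: ["argoproj.io"]
  resources: ["workflows"]
  verbs: ["create", "get", "list", "watch"]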

tooptoop4 (Contributor) commented

@dqsully I mean just like

serviceAccountName: argo-server

dqsully (Author) commented Aug 11, 2022

Right, so in a cluster install of Argo Workflows, if a user isn't allowed to create a workflow in the same namespace as Argo Workflows itself, then they can't use any of the Argo Workflows ServiceAccounts.

tooptoop4 (Contributor) commented

On the Argo CD side I guess you use an AppProject to map which Git repos can deploy to which namespaces, i.e. https://github.com/argoproj/argo-cd/blob/v2.4.11/docs/operator-manual/project.yaml#L15-L19
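
e.g. a minimal AppProject along those lines (the repo URL and namespaces are placeholders):

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team1
  namespace: argocd
spec:
  sourceRepos:
  - https://github.com/example-org/team1-apps.git   # placeholder repo
  destinations:
  - namespace: 'team1-*'                            # only the team's namespaces
    server: https://kubernetes.default.svc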

DingGGu commented Dec 8, 2022

Kiali already supports a feature like this, and I had a good experience configuring Kiali's authentication.

Usually, OIDC is already configured for authentication in the Kubernetes cluster.

When Kiali uses the same OIDC app as the cluster's authentication, requests map to the RoleBindings defined in the cluster, so users get the same privileges they have with kubectl.

In my case, there are hundreds of users in the cluster, so permissions are managed through the OIDC groups scope, and the RoleBindings are managed in the cluster.

You can set this up with the kube-apiserver flag --oidc-groups-claim, and here is a RoleBinding example:

subjects:
- kind: Group
  name: "frontend-admins"
  apiGroup: rbac.authorization.k8s.io

If Argo Workflows could follow this flow, it would be very helpful.

aaron-arellano commented

Plus one on implementing this feature. We already leverage SubjectAccessReview to validate user access to a namespace, and managing claims in OIDC would be too much for us, since we do not own our OIDC provider but do own the RoleBindings, ServiceAccounts, and Roles that get deployed to our cluster.

@agilgur5 agilgur5 changed the title SSO client auth mode SSO client auth mode -- use k8s OIDC directly Feb 23, 2024
@agilgur5 agilgur5 changed the title SSO client auth mode -- use k8s OIDC directly SSO auth mode -- use k8s OIDC directly Jul 10, 2024
agilgur5 commented

Marking this as superseded by #12049, which has more upvotes (although this issue has some good details too)

@agilgur5 agilgur5 added the solution/superseded This PR or issue has been superseded by another one (slightly different from a duplicate) label Jul 10, 2024