Policy engine to consider more workload attributes when deciding on a connection #17

Closed
zivnevo opened this issue Sep 21, 2023 · 5 comments · Fixed by #665

zivnevo (Collaborator) commented Sep 21, 2023

No description provided.

@zivnevo added the enhancement (New feature or request) and policies (Issues related to policies and policy engine) labels on Sep 21, 2023
@zivnevo added this to the Backlog - Not Committed milestone on Sep 21, 2023
@zivnevo self-assigned this on Sep 21, 2023
@elevran removed this from the Backlog - Not Committed milestone on Oct 23, 2023
elevran (Collaborator) commented Oct 23, 2023

References #86

zivnevo (Collaborator, Author) commented Oct 31, 2023

Some implementation issues I'm currently struggling with:

  1. There is currently no way to get the attributes of the requesting workload on the remote service side. We only know the name of the requesting peer. This means ingress connection requests cannot be evaluated the same way as the corresponding egress connection requests.
  2. As for the remote service, we currently have no attributes other than its name and the name of its peer. However, the name is available on both sides of the connection.
  3. Load-balancing policies are currently only based on names: the name of the requesting workload and the name of the remote service. This might be fine, but still requires the requesting workload to have a label with some predefined special key (currently `app`).

My plans for the short term:

  1. Only accept access policies whose attributes refer to the src and dst peers, or to the dst service name; reject all other policies (see the sketch below).
  2. For load-balancing policies, keep the current scheme, but perhaps use a slightly less generic label key.
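
To make the interim rule concrete, here is a minimal sketch in Go of the proposed validation. The type and key names (`WorkloadSet`, `AccessPolicy`, `peer-name`, `service-name`) are hypothetical, not the actual ClusterLink API:

```go
// A minimal sketch (not the actual ClusterLink types) of the short-term
// validation: accept an access policy only if every attribute in its
// "from" clause names a peer, and every attribute in its "to" clause
// names a peer or a service.
package policy

import "fmt"

// Hypothetical attribute keys; the real key names may differ.
const (
	KeyPeerName    = "peer-name"
	KeyServiceName = "service-name"
)

// WorkloadSet is a set of attribute key/value selectors.
type WorkloadSet map[string]string

// AccessPolicy pairs source selectors with destination selectors.
type AccessPolicy struct {
	From []WorkloadSet
	To   []WorkloadSet
}

// Validate rejects any policy that uses attributes we cannot yet
// evaluate on both sides of the connection.
func Validate(p AccessPolicy) error {
	for _, ws := range p.From {
		for key := range ws {
			if key != KeyPeerName {
				return fmt.Errorf("attribute %q in 'from' is not yet supported", key)
			}
		}
	}
	for _, ws := range p.To {
		for key := range ws {
			if key != KeyPeerName && key != KeyServiceName {
				return fmt.Errorf("attribute %q in 'to' is not yet supported", key)
			}
		}
	}
	return nil
}
```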

For the long term we need to find a way to share the attributes of the requesting workload and the remote service between peers.

@elevran, @kfirtoledo, @orozery, @praveingk - your thoughts?

kfirtoledo (Collaborator) commented Oct 31, 2023

You are right about the issue of the source service in the remote peer. But I think it is really useful to have control at the service level over which services can access the remote peer/service. I also think that's what @huang195 is doing.

elevran (Collaborator) commented Oct 31, 2023

> 1. There is currently no way to get the attributes of the requesting workload on the remote service side. We only know the name of the requesting peer. This means ingress connection requests cannot be evaluated the same way as the corresponding egress connection requests.

I think we would want to support metadata/attribute exchange as part of the connection set-up request. For example, we could send the requesting workload's attributes via JWT claims in an Authorization header.
There is still asymmetry, as the server-side attributes are not known at the client side. We could consider adding attributes to the service Import (e.g., the set of labels on the k8s Service object may make sense).
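
As a rough illustration of the JWT idea, the sketch below (assuming the `github.com/golang-jwt/jwt/v5` library; the claim names and shared-key signing are illustrative choices, not an agreed format) mints a token carrying the requesting workload's labels:

```go
// Illustrative sketch: embed the requesting workload's attributes as
// JWT claims so the server side can evaluate ingress the same way as
// egress. Claim names ("peer", "workload-labels") are hypothetical.
package main

import (
	"fmt"
	"time"

	"github.com/golang-jwt/jwt/v5"
)

func mintAttributeToken(signingKey []byte, peer string, labels map[string]string) (string, error) {
	claims := jwt.MapClaims{
		"peer":            peer,   // requesting peer name
		"workload-labels": labels, // attributes of the requesting workload
		"exp":             time.Now().Add(time.Minute).Unix(),
	}
	token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
	return token.SignedString(signingKey)
}

func main() {
	tok, err := mintAttributeToken([]byte("shared-secret"), "peer1",
		map[string]string{"app": "frontend", "env": "prod"})
	if err != nil {
		panic(err)
	}
	// The token would be sent in an Authorization: Bearer header on the
	// connection set-up request.
	fmt.Println(tok)
}
```

The remote peer would verify the token and extract the claims before running its ingress policy check, restoring symmetry with the egress check.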

> 3. Load-balancing policies are currently only based on names: the name of the requesting workload and the name of the remote service. This might be fine, but still requires the requesting workload to have a label with some predefined special key (currently `app`).

We need a clear definition of the attribute model, and then we need to make sure those attributes are present. A single label with a name is neither flexible nor sufficient, IMO. Attributes can include a service name, but likely more than that (e.g., peer attributes, workload attributes, ...).
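
For example, one possible shape for such an attribute model (purely illustrative, not a committed design) groups attributes by the entity they describe:

```go
package policy

// ConnectionAttributes collects everything the policy engine could
// match on for a single connection request, grouped by the entity
// the attributes describe.
type ConnectionAttributes struct {
	SrcPeer     map[string]string // attributes of the requesting peer (e.g., location, environment)
	SrcWorkload map[string]string // labels of the requesting workload (e.g., app, team)
	DstPeer     map[string]string // attributes of the peer hosting the service
	DstService  map[string]string // service name plus, e.g., labels from the k8s Service object
}
```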

> My plans for the short term:
>
>   1. Only accept access policies whose attributes refer to the src and dst peers, or to the dst service name; reject all other policies.
>   2. For load-balancing policies, keep the current scheme, but perhaps use a slightly less generic label key.

Not sure I follow what the suggestion is. I think source and destination peers without (at least) service names do not enable some of the use cases we want to support. It can be done as a follow-up PR, if we're unsure what the end game is and want to support some better interim behavior.

> For the long term we need to find a way to share the attributes of the requesting workload and the remote service between peers.

Agree. Can you take a stab at a design proposal for the attribute model and its use in the different policies?

zivnevo (Collaborator, Author) commented Oct 31, 2023

> Not sure I follow what the suggestion is. I think source and destination peers without (at least) service names do not enable some of the use cases we want to support. It can be done as a follow-up PR, if we're unsure what the end game is and want to support some better interim behavior.

What I suggest is to support only the `peer-name` attribute in the `from` field of policies. Trying to add a policy with any other attribute in the `from` field will fail with a "not yet supported" error message. Similarly, in the `to` field only the `peer-name` and `service-name` attributes will be supported. This is of course temporary, until we have a way to obtain client attributes at the remote peer and service attributes at the local peer.
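
Continuing the validation sketch from the earlier comment (same hypothetical package and names), the first policy below would be accepted under these interim rules and the second rejected:

```go
package policy

import "fmt"

func ExampleValidate() {
	accepted := AccessPolicy{
		From: []WorkloadSet{{KeyPeerName: "peer1"}},
		To:   []WorkloadSet{{KeyPeerName: "peer2", KeyServiceName: "checkout"}},
	}
	rejected := AccessPolicy{
		From: []WorkloadSet{{"env": "prod"}}, // a workload attribute: not yet obtainable remotely
		To:   []WorkloadSet{{KeyServiceName: "checkout"}},
	}
	fmt.Println(Validate(accepted)) // <nil>
	fmt.Println(Validate(rejected)) // attribute "env" in 'from' is not yet supported
}
```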
