
ValidatingAdmissionPolicy for C-0013 #11

Merged
2 commits merged into kubescape:main on Nov 12, 2024
Conversation

suhasgumma
Contributor

Control C-0013

Related Resources: CronJob, DaemonSet, Deployment, Job, Pod, ReplicaSet, StatefulSet

Control Docs: https://hub.armosec.io/docs/c-0013

Control Rego: https://github.com/kubescape/regolibrary/blob/master/rules/non-root-containers/raw.rego

@suhasgumma
Contributor Author

Hi @slashben, there is a mismatch between the documentation and the rego functionality.

This is what the rego actually does:

  • Checks whether runAsUser is set to a non-zero value or runAsNonRoot is set to true in the PodSecurityContext. If both checks are false, the rego performs the same checks on every ContainerSecurityContext. If even one container fails both checks, the control fails.
  • Checks whether allowPrivilegeEscalation is set to false in the PodSecurityContext. If it is unset or set to true, the rego performs the same check on every ContainerSecurityContext. If even one container leaves allowPrivilegeEscalation unset or sets it to true, the control fails.
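
For reference, a minimal Pod manifest that would satisfy both of the checks described above might look like this (names and image are illustrative, not taken from this PR):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: non-root-demo        # illustrative name
spec:
  securityContext:
    runAsNonRoot: true       # satisfies the pod-level non-root check
  containers:
    - name: app
      image: nginx:1.25      # illustrative image
      securityContext:
        runAsUser: 1000                  # non-zero UID
        allowPrivilegeEscalation: false  # explicitly disabled per container
```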

@suhasgumma
Contributor Author

suhasgumma commented Jan 11, 2023

  • There is one more issue. PodSecurityContext does not support allowPrivilegeEscalation.
    For more details, see this issue: "allowPrivilegeEscalation" is not supported at Pod Scope regolibrary#186

  • Keeping that in mind, I've written the CEL so that it directly checks whether each container sets securityContext.allowPrivilegeEscalation to false. If even one container does not set it, the control fails.
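
As a rough sketch, that per-container check could be expressed in CEL like this (field paths assume a plain Pod object; this is not the exact expression from the PR):

```yaml
validations:
  - expression: >
      object.spec.containers.all(container,
        has(container.securityContext) &&
        has(container.securityContext.allowPrivilegeEscalation) &&
        container.securityContext.allowPrivilegeEscalation == false)
    message: "every container must set allowPrivilegeEscalation to false"
```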

@slashben
Collaborator

This is a very complex control; I will review it a bit later.

@Daniel-GrunbergerCA
Collaborator

Hi @suhasgumma
Please note that the if and else statements are evaluated in order (https://www.openpolicyagent.org/docs/latest/policy-language/#else-keyword).
That means we first check the container security context for the given fields, and only if they are not present there do we check the Pod security context. The reason is that the container security context takes precedence over the Pod security context (https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#securitycontext-v1-core).
This behavior needs to be reflected in your check as well. For example, it is possible that runAsNonRoot is set to true in the Pod security context while runAsUser is set to non-zero in the container security context, and so on.
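
For example, in the following illustrative manifest the container's effective settings are runAsNonRoot=true inherited from the pod level and runAsUser=1000 from the container level:

```yaml
apiVersion: v1
kind: Pod
spec:
  securityContext:
    runAsNonRoot: true    # pod-level default, inherited by the container
  containers:
    - name: app
      securityContext:
        runAsUser: 1000   # set here, so it takes precedence over any pod-level runAsUser
```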

@suhasgumma
Contributor Author

Thank you @Daniel-GrunbergerCA. ContainerSecurityContext takes precedence; only if the values are not set there should we check the PodSecurityContext. I will implement the control according to this logic and get back to you.

Signed-off-by: Suhas Gumma <[email protected]>
@suhasgumma
Contributor Author

Hi @Daniel-GrunbergerCA, this is what I have done:

  • Check if allowPrivilegeEscalation is set to false in the ContainerSecurityContext. Only if it is not set there, check for it in the PodSecurityContext. The control fails if it is not set to false in either place.

  • Check whether either runAsNonRoot is set to true or runAsUser is non-zero. Only if they are not set in the ContainerSecurityContext, fall back to their values from the PodSecurityContext, if set there.
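
A condensed sketch of that container-first fallback pattern for allowPrivilegeEscalation, using CEL's conditional operator (plain Pod paths shown for brevity; not the exact expression from the PR):

```yaml
- expression: >
    object.spec.containers.all(c,
      (has(c.securityContext) && has(c.securityContext.allowPrivilegeEscalation))
        ? c.securityContext.allowPrivilegeEscalation == false
        : (has(object.spec.securityContext) &&
           has(object.spec.securityContext.allowPrivilegeEscalation) &&
           object.spec.securityContext.allowPrivilegeEscalation == false))
```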

Collaborator

@Daniel-GrunbergerCA left a comment


Good job! Sorry for the delay :)

@matthyx matthyx requested a review from slashben August 11, 2024 20:45
@matthyx
Collaborator

matthyx commented Aug 11, 2024

@slashben can you have a look? Daniel approved in the past, but a second review before merging would be good.

@snowhanse

@matthyx I came across this while looking at a similar problem, and wonder whether it's possible to simplify the validations to a single expression by using variables to standardise the podSpec, something like this:

  variables:
  # Extract podSpec from Pod, Deployment, ReplicaSet, DaemonSet, StatefulSet, Job, CronJob
  # CronJob has podSpec in object.spec.jobTemplate.spec
  - name: cronjobPodSpec
    expression: |
      object.kind == 'CronJob' ? object.spec.jobTemplate.spec : null
  # Pod has its podSpec directly in object.spec; all remaining kinds have it in object.spec.template.spec
  - name: otherPodSpec
    expression: |
      object.kind == 'Pod' ? object.spec : object.spec.template.spec
  # Pass the correct podSpec based on the object kind
  - name: podSpec
    expression: |
      object.kind == 'CronJob' ? variables.cronjobPodSpec : variables.otherPodSpec

  validations:
    - expression: > 
        variables.podSpec.containers.all(container,
          (
            (
              has(container.securityContext) && 
              has(container.securityContext.allowPrivilegeEscalation) && 
              container.securityContext.allowPrivilegeEscalation == false
            ) ||
            (
              (
                !has(container.securityContext) || !has(container.securityContext.allowPrivilegeEscalation)
              ) &&
              (
                has(variables.podSpec.securityContext) && 
                has(variables.podSpec.securityContext.allowPrivilegeEscalation) && 
                variables.podSpec.securityContext.allowPrivilegeEscalation == false
              )
            )
          ) &&
          (
            (
              (
                has(container.securityContext) && 
                has(container.securityContext.runAsNonRoot) && 
                container.securityContext.runAsNonRoot == true
              ) ||
              (
                (
                  !has(container.securityContext) || !has(container.securityContext.runAsNonRoot)
                ) &&
                (
                  has(variables.podSpec.securityContext) && 
                  has(variables.podSpec.securityContext.runAsNonRoot) && 
                  variables.podSpec.securityContext.runAsNonRoot == true
                )
              )  
            ) ||
            (
              (
                has(container.securityContext) && 
                has(container.securityContext.runAsUser) && 
                container.securityContext.runAsUser != 0
              ) ||
              (
                (
                  !has(container.securityContext) || !has(container.securityContext.runAsUser)
                ) &&
                (
                  has(variables.podSpec.securityContext) && 
                  has(variables.podSpec.securityContext.runAsUser) && 
                  variables.podSpec.securityContext.runAsUser != 0
                )
              )  
            )
          )
        )
      message: "Workload contains container/s which have the capability to run as root! (see more at https://hub.armosec.io/docs/c-0013)"
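
For completeness, the `variables` and `validations` sections above would sit inside a ValidatingAdmissionPolicy resource roughly like this (the name and match rules are a sketch, not taken from this PR):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: c-0013-non-root-containers   # illustrative name
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["", "apps", "batch"]
        apiVersions: ["*"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods", "deployments", "replicasets", "daemonsets",
                    "statefulsets", "jobs", "cronjobs"]
  variables: []     # the variables block shown above goes here
  validations: []   # the validations block shown above goes here
```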

@slashben slashben merged commit 1a7ae10 into kubescape:main Nov 12, 2024
2 checks passed