
source:generated within constraint.yaml causes gator verify to fail #3432

Closed
malexander2012 opened this issue Jun 25, 2024 · 10 comments · Fixed by #3650
malexander2012 commented Jun 25, 2024

What steps did you take and what happened:
I'm using ExpansionTemplates for the gatekeeper-library policies I'm importing, and I'm explicitly setting spec.match.source: "Generated" in the constraint.yaml file. I'm also using gator verify for testing. With source: "Generated" set, gator verify fails; when I remove source: "Generated" from the constraint.yaml, it passes.

Failed test:

> gator verify  opa/tests/...             
    --- FAIL: disallowed        (0.003s)
        unexpected number of violations: got 0 violations but want at least 1: got messages []
--- FAIL: forbidden-sysctls     (0.008s)
FAIL    opa/tests/forbidden-sysctls/suite.yaml  0.008s

What did you expect to happen:
I expected the gator verify opa/tests/... to pass.

constraint.yaml:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPForbiddenSysctls
metadata:
  name: k8spspforbiddensysctls
spec:
  enforcementAction: warn
  match:
    excludedNamespaces:
      - gatekeeper
      - kube-system
    kinds:
      - apiGroups:
          - ''
        kinds:
          - Pod
    source: Generated
  parameters:
    allowedSysctls:
      - vm.max_map_count
    forbiddenSysctls: []

template.yaml:

apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8spspforbiddensysctls
  annotations:
    metadata.gatekeeper.sh/title: Forbidden sysctls
    metadata.gatekeeper.sh/version: 1.0.0
    description: |
      Controls the `sysctl` profile used by containers. Corresponds to the
      `allowedUnsafeSysctls` and `forbiddenSysctls` fields in a PodSecurityPolicy.
      When specified, any sysctl not in the `allowedSysctls` parameter is considered to be forbidden.
      The `forbiddenSysctls` parameter takes precedence over the `allowedSysctls` parameter.
      For more information, see https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/
spec:
  crd:
    spec:
      names:
        kind: K8sPSPForbiddenSysctls
      validation:
        openAPIV3Schema:
          type: object
          properties:
            allowedSysctls:
              type: array
              description: An allow-list of sysctls. `*` allows all sysctls not listed in the `forbiddenSysctls` parameter.
              items:
                type: string
            forbiddenSysctls:
              type: array
              description: A disallow-list of sysctls. `*` forbids all sysctls.
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8spspforbiddensysctls

        import data.lib.exclude_update.is_update

        # Block if forbidden
        violation[{"msg": msg, "details": {}}] {
        	# spec.securityContext.sysctls field is immutable.
        	not is_update(input.review)

        	sysctl := input.review.object.spec.securityContext.sysctls[_].name
        	forbidden_sysctl(sysctl)
        	msg := sprintf("The sysctl %v is not allowed, pod: %v. Forbidden sysctls: %v", [sysctl, input.review.object.metadata.name, input.parameters.forbiddenSysctls])
        }

        # Block if not explicitly allowed
        violation[{"msg": msg, "details": {}}] {
        	not is_update(input.review)
        	sysctl := input.review.object.spec.securityContext.sysctls[_].name
        	not allowed_sysctl(sysctl)
        	msg := sprintf("The sysctl %v is not explicitly allowed, pod: %v. Allowed sysctls: %v", [sysctl, input.review.object.metadata.name, input.parameters.allowedSysctls])
        }

        # * may be used to forbid all sysctls
        forbidden_sysctl(_) {
        	input.parameters.forbiddenSysctls[_] == "*"
        }

        forbidden_sysctl(sysctl) {
        	input.parameters.forbiddenSysctls[_] == sysctl
        }

        forbidden_sysctl(sysctl) {
        	forbidden := input.parameters.forbiddenSysctls[_]
        	endswith(forbidden, "*")
        	startswith(sysctl, trim_suffix(forbidden, "*"))
        }

        # * may be used to allow all sysctls
        allowed_sysctl(_) {
        	input.parameters.allowedSysctls[_] == "*"
        }

        allowed_sysctl(sysctl) {
        	input.parameters.allowedSysctls[_] == sysctl
        }

        allowed_sysctl(sysctl) {
        	allowed := input.parameters.allowedSysctls[_]
        	endswith(allowed, "*")
        	startswith(sysctl, trim_suffix(allowed, "*"))
        }
      libs:
        - |
          package lib.exclude_update

          is_update(review) {
          	review.operation == "UPDATE"
          }

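For readers less familiar with Rego, the template's wildcard handling (exact match, `*`, and `prefix*` globs in `allowed_sysctl`/`forbidden_sysctl`) can be sketched in Go. This is an illustrative stand-alone translation, not gatekeeper code:

```go
package main

import (
	"fmt"
	"strings"
)

// allowedSysctl mirrors the template's allowed_sysctl rules: a sysctl is
// allowed if the list contains "*", an exact match, or a "prefix*" glob
// whose prefix matches. The forbidden_sysctl rules follow the same pattern.
func allowedSysctl(sysctl string, allowed []string) bool {
	for _, a := range allowed {
		if a == "*" || a == sysctl {
			return true
		}
		if strings.HasSuffix(a, "*") && strings.HasPrefix(sysctl, strings.TrimSuffix(a, "*")) {
			return true
		}
	}
	return false
}

func main() {
	allowed := []string{"vm.max_map_count"}
	fmt.Println(allowedSysctl("vm.max_map_count", allowed)) // true: exact match
	fmt.Println(allowedSysctl("test_sysctl", allowed))      // false: not listed
	fmt.Println(allowedSysctl("net.ipv4.ip_forward", []string{"net.*"})) // true: glob prefix
}
```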
suite.yaml:

kind: Suite
apiVersion: test.gatekeeper.sh/v1alpha1
metadata:
  name: forbidden-sysctls
tests:
- name: forbidden-sysctls
  template: ../../general/forbidden-sysctls/template.yaml
  constraint: ../../general/forbidden-sysctls/constraint.yaml
  cases:
  - name: allowed
    object: samples/allowed.yaml
    assertions:
    - violations: no
  - name: disallowed
    object: samples/disallowed.yaml
    assertions:
    - violations: yes

disallowed.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: sample-container
spec:
  securityContext:
    sysctls:
      - name: vm.max_map_count
        value: "242144"
      - name: test_sysctl
        value: "65536"
  containers:
    - name: sample-container
      image: busybox:latest

allowed.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: sample-container
spec:
  securityContext:
    sysctls:
      - name: vm.max_map_count
        value: "242144"
  containers:
    - name: sample-container
      image: busybox:latest

expansionTemplate:

apiVersion: expansion.gatekeeper.sh/v1alpha1
kind: ExpansionTemplate
metadata:
  name: expand-deployments
spec:
  applyTo:
    - groups: ["apps"]
      kinds: ["DaemonSet", "Deployment", "ReplicaSet", "StatefulSet"]
      versions: ["v1"]
    - groups: [""]
      kinds: ["ReplicationController"]
      versions: ["v1"]
    - groups: ["batch"]
      kinds: ["Job"]
      versions: ["v1"]
  templateSource: "spec.template"
  generatedGVK:
    kind: "Pod"
    group: ""
    version: "v1"

Environment:
• Gatekeeper version: v3.15.1
• Kubernetes version (kubectl version): v1.29.4
@malexander2012 malexander2012 added the bug Something isn't working label Jun 25, 2024
ritazh commented Jun 25, 2024

Thanks for reporting the issue.
Is the desire to block workload resources that generate pod resources? If so, does what you have work with gator test, and does gatekeeper webhook validation work? If you use workload resources (e.g. a Deployment) as part of the test suite, does gator verify work as intended?


malexander2012 commented Jun 25, 2024

> Thanks for reporting the issue. Is the desire to block workload resources that generate pod resources? If so, does what you have work with gator test, and does gatekeeper webhook validation work? If you use workload resources (e.g. a Deployment) as part of the test suite, does gator verify work as intended?

The desire is to be able to run the ExpansionTemplate ONLY on Generated resources by explicitly setting source: "Generated" in the constraint.yaml. When I was testing with gator test, it did work the way it should. Here's the test I ran:

cat  << EOF | gator test -f opa/general/forbidden-sysctls -f opa/general/expansion
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      securityContext:
        capabilities:
          add:
          - SYS_ADMIN
        sysctls:
        - name: test
          value: "1024"
      containers:
        - name: hello
          image: busybox
          command: ["sh", "-c"]
          args:
            - sleep 36010
EOF
apps/v1/Deployment hello: ["k8spspforbiddensysctls"] Message: "[Implied by expand-deployments] The sysctl test is not explicitly allowed, pod: hello-pod. Allowed sysctls: [\"vm.max_map_count\"]"


ritazh commented Jun 25, 2024

The source field on the match API, present in the Mutation and Constraint kinds, specifies whether the config should match Generated (i.e. fake) resources, Original resources, or both. The source field is an enum which accepts the following values:
Generated – the config will only apply to expanded resources, and will not apply to any real resources on the cluster

https://open-policy-agent.github.io/gatekeeper/website/docs/expansion

In your test suite, the pod yaml is not a fake resource.

When you remove Generated from the constraint resource, it worked because:

All – the config will apply to both Generated and Original resources. This is the default value.

@malexander2012

> The source field on the match API, present in the Mutation and Constraint kinds, specifies whether the config should match Generated (i.e. fake) resources, Original resources, or both. The source field is an enum which accepts the following values:
>
> Generated – the config will only apply to expanded resources, and will not apply to any real resources on the cluster
>
> https://open-policy-agent.github.io/gatekeeper/website/docs/expansion
>
> In your test suite, the pod yaml is not a fake resource.
>
> When you remove Generated from the constraint resource, it worked because:
>
> All – the config will apply to both Generated and Original resources. This is the default value.

OK, I changed allowed.yaml and disallowed.yaml to Deployments and it's still failing:

> gator verify opa/tests/... 
    --- FAIL: disallowed        (0.003s)
        unexpected number of violations: got 0 violations but want at least 1: got messages []
--- FAIL: forbidden-sysctls     (0.009s)
FAIL    opa/tests/forbidden-sysctls/suite.yaml  0.009s

Error: FAIL

allowed.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      securityContext:
        sysctls:
        - name: vm.max_map_count
          value: "242144"
      containers:
        - name: hello
          image: busybox
          command: ["sh", "-c"]
          args:
            - sleep 36010

disallowed.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      securityContext:
        capabilities:
          add:
          - SYS_ADMIN
        sysctls:
        - name: test
          value: "1024"
      containers:
        - name: hello
          image: busybox
          command: ["sh", "-c"]
          args:
            - sleep 36010

constraint.yaml:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPForbiddenSysctls
metadata:
  name: k8spspforbiddensysctls
spec:
  enforcementAction: warn
  match:
    excludedNamespaces:
      - gatekeeper
      - kube-system
    kinds:
      - apiGroups:
          - ''
        kinds:
          - Pod
    source: Generated
  parameters:
    allowedSysctls:
      - vm.max_map_count
    forbiddenSysctls: []


malexander2012 commented Jun 25, 2024

@ritazh Is there a way to inform gator verify that an expansion is needed?


ritazh commented Jun 26, 2024

I don't see it in gator verify. If we were to add it, it would be somewhere here:

func (r *Runner) runReview(ctx context.Context, newClient func() (gator.Client, error), suiteDir string, tc *Case) (*types.Responses, error) {

to add something like:
er, err := expand.NewExpander(objs)


malexander2012 commented Jun 26, 2024

@ritazh - Thank you for your help with this. Then I would like to request this as a feature.

@malexander2012

Adding a comment to keep this issue alive.

@JaydipGabani JaydipGabani added the good first issue Good for newcomers label Aug 14, 2024
@ritazh ritazh added this to the v3.18.0 milestone Aug 16, 2024

David-Jaeyoon-Lee commented Aug 20, 2024

I will read through and start working on this issue


stale bot commented Nov 4, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label Nov 4, 2024
@JaydipGabani JaydipGabani removed the stale label Nov 4, 2024