
fix(appProtocol): ensure appProtocol can be valid without invalidating probes #276

Merged · 1 commit into zitadel:main · Dec 5, 2024

Conversation

@rawkode (Contributor) commented Nov 26, 2024

The appProtocol and the named ports for probes are very different things. The PR that tied these together didn't consider the valid values for appProtocol, which are not valid names for ports.

As per the Kubernetes documentation, the appProtocol:

kubectl explain svc.spec.ports.appProtocol
KIND:       Service
VERSION:    v1

FIELD: appProtocol <string>

DESCRIPTION:
    The application protocol for this port. This is used as a hint for
    implementations to offer richer behavior for protocols that they understand.
    This field follows standard Kubernetes label syntax. Valid values are
    either:

    * Un-prefixed protocol names - reserved for IANA standard service names (as
    per RFC-6335 and https://www.iana.org/assignments/service-names).

    * Kubernetes-defined prefixed names:
      * 'kubernetes.io/h2c' - HTTP/2 prior knowledge over cleartext as described
    in
    https://www.rfc-editor.org/rfc/rfc9113.html#name-starting-http-2-with-prior-
      * 'kubernetes.io/ws'  - WebSocket over cleartext as described in
    https://www.rfc-editor.org/rfc/rfc6455
      * 'kubernetes.io/wss' - WebSocket over TLS as described in
    https://www.rfc-editor.org/rfc/rfc6455

    * Other protocols should use implementation-defined prefixed names such as
    mycompany.com/my-custom-protocol.
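The mismatch is easy to see from Kubernetes' port-name rules: port names must follow the IANA_SVC_NAME convention (at most 15 characters, lowercase alphanumerics and hyphens, at least one letter, no leading, trailing, or adjacent hyphens), so prefixed appProtocol values such as kubernetes.io/h2c can never double as port names. A rough illustrative sketch of those rules (the helper name is mine, not part of Kubernetes):

```python
import re

def is_valid_port_name(name: str) -> bool:
    """Approximation of Kubernetes' IANA_SVC_NAME port-name rules:
    1-15 chars, lowercase alphanumerics and '-', at least one letter,
    and no leading, trailing, or adjacent hyphens."""
    if not 0 < len(name) <= 15:
        return False
    if not re.fullmatch(r"[a-z0-9-]+", name):
        return False
    if not re.search(r"[a-z]", name):
        return False
    if name.startswith("-") or name.endswith("-") or "--" in name:
        return False
    return True

print(is_valid_port_name("http2"))              # fine as a port name
print(is_valid_port_name("kubernetes.io/h2c"))  # '.', '/' and length all violate the rules
```

This is why a single chart value cannot safely feed both the port name used by probes and the appProtocol field.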

Definition of Ready

  • I am happy with the code
  • Short description of the feature/issue is added in the PR description
  • PR is linked to the corresponding user story
  • Acceptance criteria are met
  • All open todos and follow ups are defined in a new ticket and justified
  • Deviations from the acceptance criteria and design are agreed with the PO and documented.
  • No debug or dead code
  • My code has no repetitions
  • Documentation/examples are up-to-date
  • All non-functional requirements are met
  • If possible, the test configuration is adjusted so acceptance tests cover my changes

@hifabienne hifabienne added the os-contribution This is a contribution from our open-source community label Nov 26, 2024
@fforootd fforootd requested a review from eliobischof November 26, 2024 12:26
@eliobischof (Member) left a comment

Thanks for this PR. The changes make sense to me.
However, I think they could potentially break, for example, custom ingresses that select the service port by name; is this correct?

It's basically not an issue to release a new major, but we should add a migration path to the README.md. Do you think there are other things that could break @rawkode?

@nidomiro commented Dec 5, 2024

I'm currently working on getting Zitadel to work with the Gateway API, backed by Traefik, and I need exactly this change.

My problem is that Traefik complains it cannot handle the service's appProtocol http2, but if I change service.protocol to kubernetes.io/h2c, the Helm chart fails.
When I manually change the service's appProtocol to kubernetes.io/h2c after a deployment with the defaults, everything works as expected.
Sadly this is no valid workaround for me, as I'm using Flux.

@rawkode (Contributor, author) commented Dec 5, 2024

My workaround for Flux:

apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: zitadel
spec:
  interval: 72h
  timeout: 5m
  chart:
    spec:
      chart: zitadel
      version: '8.6.*'
      sourceRef:
        kind: HelmRepository
        name: zitadel
      interval: 5m
  releaseName: zitadel
  install:
    remediation:
      retries: 3
  upgrade:
    remediation:
      retries: 3
  test:
    enable: true
  # https://github.com/zitadel/zitadel-charts/pull/276
  postRenderers:
    - kustomize:
        patches:
          - target:
              version: v1
              kind: Service
              name: zitadel
            patch: |
              - op: replace
                path: /spec/ports/0/appProtocol
                value: kubernetes.io/h2c

@rawkode (Contributor, author) commented Dec 5, 2024

> Thanks for this PR. The changes make sense to me. However, I think they could potentially break, for example, custom ingresses that select the service port by name; is this correct?
>
> It's basically not an issue to release a new major, but we should add a migration path to the README.md. Do you think there are other things that could break @rawkode?

It's a break for anyone who isn't using the Ingress object provisioned by the chart, yes.

As a safer resolution, we could roll a new variable for appProtocol and isolate it completely?

@eliobischof (Member) commented

> As a safer resolution, we could roll a new variable for appProtocol and isolate it completely?

Let's do that. It would also nicely align with the k8s terminology.
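For illustration, separating the two could look roughly like this in values.yaml (a sketch based on the names discussed in this thread, not the chart's authoritative schema):

```yaml
zitadel:
  service:
    type: ClusterIP
    port: 8080
    # protocol still names the port, so it must be a valid IANA_SVC_NAME
    protocol: http2
    # appProtocol is set independently, so prefixed values are allowed
    appProtocol: kubernetes.io/h2c
```

Since appProtocol no longer derives from the port name, existing ingresses that select the port by name keep working unchanged.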

@rawkode (Contributor, author) commented Dec 5, 2024

Ready for you @eliobischof

@eliobischof (Member) left a comment

Thanks for fixing this @rawkode 🙏
If you'd like to have a small gift, please send us a mail to [email protected]

@eliobischof eliobischof enabled auto-merge (squash) December 5, 2024 10:13
@eliobischof eliobischof merged commit 680cf0b into zitadel:main Dec 5, 2024
9 checks passed
@rawkode rawkode deleted the fix/appProtocol branch December 5, 2024 10:19
@PurseChicken (Contributor) commented
This change broke my environments using gateway-api. Service export/import attaches to the service, which configures the load balancers in GKE.

The error was "Client sent an HTTP request to an HTTPS server."

I am investigating how to fix this using gateway-api MultiClusterIngress and will report back my findings.

@rawkode (Contributor, author) commented Dec 11, 2024

> This change broke my environments using gateway-api. Service export/import attaches to the service, which configures the load balancers in GKE.
>
> The error was "Client sent an HTTP request to an HTTPS server."
>
> I am investigating how to fix this using gateway-api MultiClusterIngress and will report back my findings.

The chart doesn't support the Gateway API, so that's a little strange.

Can you share your additional manifest? Happy to help debug.

@PurseChicken (Contributor) commented Dec 11, 2024

It appears all I had to do to fix my issue was change .Values.zitadel.service.protocol to .Values.zitadel.service.appProtocol.

Because appProtocol was not specified, the chart was rendering "appProtocol: kubernetes.io/h2c" into the manifest, which caused the break. Setting appProtocol in the values now properly renders appProtocol: http2 in the service manifest, as before.

Old:

  service:
    type: ClusterIP
    port: 8080
    protocol: http2
    annotations: {}

New:

  service:
    type: ClusterIP
    port: 8080
    appProtocol: http2
    annotations: {}
