Conversation
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: kuadrant-kuadrant-gwapi-gateway-wasm-ratelimits
And labels? To which Gateway is this being applied?
We have two gateways, so we didn't care about narrowing it down to a single gateway. Also, I noticed the name is wrong here...
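For context (not part of the PR): an EnvoyFilter can be narrowed to a single gateway with a workloadSelector that matches the labels on that gateway's pods. A minimal sketch, assuming a hypothetical istio: kuadrant-gateway label on the gateway deployment:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: kuadrant-gwapi-gateway-wasm-ratelimits
  namespace: kuadrant-system   # hypothetical namespace
spec:
  # Without workloadSelector, the filter applies to all workloads
  # in the namespace; with it, only to the matching gateway pods.
  workloadSelector:
    labels:
      istio: kuadrant-gateway  # hypothetical label; use the actual gateway pod labels
```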
@@ -28,6 +28,7 @@ kubectl apply -f utils/local-deployment/istio-manifests/autogenerated/Base/Base.
kubectl apply -f utils/local-deployment/istio-manifests/autogenerated/Base/Pilot/Pilot.yaml
kubectl apply -f utils/local-deployment/istio-manifests/autogenerated/Base/Pilot/IngressGateways/IngressGateways.yaml
kubectl apply -n "${KUADRANT_NAMESPACE}" -f utils/local-deployment/istio-manifests/kuadrant-gateway.yaml
kubectl apply -n "${KUADRANT_NAMESPACE}" -f utils/local-deployment/limitador-cluster.yaml
Why remove the EnvoyFilter related to the cluster? This cannot be done by the operator. Only the control plane (kuadrant-controller) knows which gateways are affected by the rate limit policies.
We talked about this during the RLP slides work. Consider this a pre-requisite for Kuadrant to work properly; we should focus on fixing the issue in Limitador.
For the time being we can remove it and leave it as a pre-requisite, but I think it is a limitation:
1. It is error prone. Kuadrant could be configuring Envoy with the wasm module for rate limiting, and it would not work if the limitador service is not added to Envoy's clusters.
2. Gateways have a lifecycle: they live and die. Having Kuadrant manage the limitador service configuration in Envoy's config ensures that the binding keeps working.
3. As a potential feature, Kuadrant could provide a default limitador service. But that default could be overridden by a custom, externally managed limitador. And where would the location of the custom limitador live... in the RateLimitPolicy? In the Kuadrant CR?
So what would be the set-up procedure?
- Deploy istio, API gateway resources, ingress gateway
- Deploy the kuadrant operator (the limitador and authorino operators are also deployed as dependencies)
- Create the Kuadrant CR. This is when the operator deploys the limitador service in the same namespace as the Kuadrant CR.
- Configure the gateway with the location of the limitador service. Who does this task? The cluster operator? The kuadrant operator? If it is the kuadrant operator, how does it know about the ingress gateway (or Envoy) namespace?
Again, I am not against this change for now. It is a simplification and I love keeping things KISS. But it clearly creates a gap, and it is not clear to me how and by whom it gets filled.
cc @maleck13
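For reference, the EnvoyFilter being removed here adds Limitador as an Envoy cluster so the rate-limit filter has somewhere to send requests. A sketch of roughly what utils/local-deployment/limitador-cluster.yaml would contain (names, namespace, and port are assumptions, not copied from the PR):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: limitador-cluster
  namespace: kuadrant-system   # hypothetical namespace
spec:
  configPatches:
  - applyTo: CLUSTER
    patch:
      operation: ADD
      value:
        # Statically registers Limitador as an upstream cluster in Envoy,
        # so the rate-limit filter can reference it by name.
        name: rate-limit-cluster          # hypothetical cluster name
        type: STRICT_DNS
        connect_timeout: 1s
        http2_protocol_options: {}        # Limitador's gRPC endpoint needs HTTP/2
        load_assignment:
          cluster_name: rate-limit-cluster
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: limitador.kuadrant-system.svc.cluster.local
                    port_value: 8081
```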
@rahulanand16nov I also did not understand the part about "fixing the issue in Limitador". This is about configuring the limitador service in Envoy, isn't it?
We have that EnvoyFilter in the first place due to: Kuadrant/limitador#53
The limitador service is automatically added to the Envoy cluster by istio. OK. I assume the following: from the WASM module, when you need to open a connection, you use the cluster name as the reference that defines the connection destination, right? I assume that because I guess it is Envoy that opens the connection on behalf of the wasm module. If so, how does the WASM filter know the cluster name? Does the kuadrant controller need to figure out something like `outbound|8081||limitador.rahul-test.svc.cluster.local`?
Right, we need a reference name to be passed to Envoy from the WASM filter. The module is unaware of the cluster name, so that is something the kuadrant-controller/operator needs to provide.
We'll provide `outbound|8081||limitador.rahul-test.svc.cluster.local`, but when the request reaches Limitador, it rejects the request, so we need to fix that.
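To illustrate how the controller could hand that cluster name to the wasm module, here is a hypothetical WasmPlugin sketch. The pluginConfig keys (limitador_cluster, failure_mode_deny), the image URL, and the selector label are assumptions for illustration only; the real wasm-shim configuration schema is defined by the kuadrant-controller:

```yaml
apiVersion: extensions.istio.io/v1alpha1
kind: WasmPlugin
metadata:
  name: kuadrant-wasm-ratelimits
  namespace: kuadrant-system           # hypothetical namespace
spec:
  selector:
    matchLabels:
      istio: ingressgateway            # hypothetical gateway label
  url: oci://quay.io/kuadrant/wasm-shim:latest   # hypothetical image reference
  pluginConfig:
    failure_mode_deny: true            # hypothetical key
    # The Istio-generated cluster name the module passes back to Envoy
    # when opening the gRPC connection to Limitador:
    limitador_cluster: outbound|8081||limitador.rahul-test.svc.cluster.local
```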
Can we close #69?
Closed
@@ -28,6 +28,7 @@ kubectl apply -f utils/local-deployment/istio-manifests/autogenerated/Base/Base.
kubectl apply -f utils/local-deployment/istio-manifests/autogenerated/Base/Pilot/Pilot.yaml
kubectl apply -f utils/local-deployment/istio-manifests/autogenerated/Base/Pilot/IngressGateways/IngressGateways.yaml
kubectl apply -n "${KUADRANT_NAMESPACE}" -f utils/local-deployment/istio-manifests/kuadrant-gateway.yaml
kubectl apply -n "${KUADRANT_NAMESPACE}" -f utils/local-deployment/limitador-cluster.yaml
This is breaking our current configuration of Envoy using EnvoyFilter.
Merging it, but, as agreed, the EnvoyFilter with the configuration of the limitador service should be added back.
As soon as Kuadrant/limitador#73 is merged, I'll make the appropriate change to this PR and merge it.
Since Kuadrant/limitador#73 is closed, we can merge this PR for now and re-add the EnvoyFilter later.
This PR works on #127.
- Upgrades Go from `1.16.x` to `1.17.x`, which was required by different dependencies.
- Replaces the `EnvoyFilter` with the `WasmPlugin` resource.
- Pending until "`|` (pipe) character in authority header is auto-rejected by Limitador" (Kuadrant/limitador#53) is fixed.
- Keeps the `AuthPolicy` structs until "Proposing v0.9.1 Release" (kubernetes-sigs/controller-tools#667) is closed.
Verification
Just try any one of the user guides with rate-limiting and it will work 🙏
Dev notes
The new istio extension requires istio.io/api v0.0.0-20220413180505-1574de06b7bd, which in turn requires github.com/go-logr/logr v1.2.0, and this logr version conflicts with controller-runtime 0.10. Controller-runtime needs to be updated to 0.11. Both controller-runtime 0.11 and istio.io/api v0.0.0-20220413180505-1574de06b7bd require Go 1.17.
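The dependency chain above would land in go.mod roughly as follows. This is an illustrative fragment, not the PR's actual go.mod; the module path is assumed, and the versions are taken from the notes above:

```
// go.mod (illustrative fragment)
module github.com/kuadrant/kuadrant-controller // assumed module path

go 1.17

require (
	github.com/go-logr/logr v1.2.0
	istio.io/api v0.0.0-20220413180505-1574de06b7bd
	sigs.k8s.io/controller-runtime v0.11.0
)
```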