Network Policy rke2-flannel-host-networking when cis-1.23 and calico #5315
Yes, it's normal. Ref:

Do you have any further questions about it?
Hello,

Then, when I read the policy deployed:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: rke2-flannel-host-networking
  namespace: kube-system
spec:
  podSelector: {}
  ingress:
    - {}
  policyTypes:
    - Ingress
```

I read "Allows all pods in namespace kube-system to receive traffic from all namespaces, pods and IP addresses on all ports". And according to the ref you sent me, it's written: "The NetworkPolicy used will only allow pods within the same namespace to talk to each other. The notable exception to this is that it allows DNS requests to be resolved".

Maybe something like:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-same-namespace
  namespace: kube-system
spec:
  podSelector: {}
  ingress:
    - from:
        - podSelector: {}
  policyTypes:
    - Ingress
```

which could be read as "Allows all pods in namespace kube-system to receive traffic from all pods in the same namespace on all ports (denies inbound traffic to all pods in namespace kube-system from other namespaces)".

I use this tool to help me to be sure:
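Beyond such tools, the two behaviors can also be checked directly with throwaway pods. A sketch only: it assumes you can run bare test pods (a CIS cluster's Pod Security enforcement may reject them without an explicit securityContext), and nginx/busybox are just stand-ins:

```sh
# Start a throwaway web server in kube-system as a probe target...
kubectl run np-target -n kube-system --image=nginx --port=80
kubectl wait -n kube-system --for=condition=Ready pod/np-target
TARGET_IP=$(kubectl -n kube-system get pod np-target -o jsonpath='{.status.podIP}')

# ...and probe it from a pod in another namespace.
kubectl run np-test --rm -it --restart=Never --image=busybox -n default -- \
  wget -qO- -T 2 "http://${TARGET_IP}"

# With the wide-open "ingress: [{}]" rule the probe returns the nginx welcome
# page from any namespace; with the "from: - podSelector: {}" variant it
# times out, because the client pod is outside kube-system.
kubectl delete pod -n kube-system np-target
```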
Also, there is a default policy that is created that respects what is written:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-network-policy
  namespace: kube-system
spec:
  podSelector: {}
  ingress:
    - from:
        - podSelector: {}
  policyTypes:
    - Ingress
```

Maybe the flannel one could be removed?
Hmm, that does look suspect. @Oats87 and @manuelbuil could you take a look at this? I suspect that this controller shouldn't run when canal is not the active CNI.
Hello, will you also backport this to other releases?
Yes, we backport everything to all active branches. The policy will not be changed or removed on existing clusters, as its removal could cause unexpected outages. New clusters will not get this policy.
Thanks for your answer. Sorry to insist, but at least, once I have updated my release, if I delete this policy manually, nothing will recreate it?
Correct, after upgrading to a fixed release.
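For existing clusters that have already upgraded, the manual cleanup the question refers to would look something like this (assuming the policy was created in the three namespaces mentioned in the report):

```sh
# Remove the leftover wide-open policy; nothing recreates it after upgrading
# to a fixed release.
for ns in kube-system kube-public default; do
  kubectl delete networkpolicy rke2-flannel-host-networking -n "$ns"
done
```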
Ah perfect, thanks again!!
Hello, I have just tried the new releases (v1.26.14-rc1+rke2r1 and v1.27.11-rc1+rke2r1) without the wide-open rke2-flannel-host-networking Network Policy. To be able to create Ingress objects again, it seems I need a policy like this one:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: rke2-ingress-nginx-controller
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: rke2-ingress-nginx
      app.kubernetes.io/name: rke2-ingress-nginx
  ingress:
    - ports:
        - protocol: TCP
          port: webhook
  policyTypes:
    - Ingress
```
@albundy83 I'm confused why you would need that. The policy you suggested appears to be granting access to the nginx validating webhook port? There shouldn't be anything other than the apiserver hitting that. Can you provide more information on what you're seeing getting blocked without that policy?
Specifically, this policy should already grant access to the ingress itself on 80/443. The webhook should not be accessed directly by clients; it is only queried by the apiserver. (See lines 102 to 137 in 3b1d700.)
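The apiserver-to-webhook path can be seen in the webhook registration itself. The object names below match a stock RKE2 ingress-nginx deployment, but are worth double-checking on your cluster:

```sh
# The ValidatingWebhookConfiguration tells the apiserver to call the admission
# service on every Ingress create/update; clients never talk to it directly.
kubectl get validatingwebhookconfiguration rke2-ingress-nginx-admission -o yaml

# The service it points at, which forwards to the controller's "webhook" port.
kubectl -n kube-system get svc rke2-ingress-nginx-controller-admission
```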
Well, not sure what you mean by apiserver, but each time I try to create an Ingress, I have the following error:

Here for example:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hubble-ui
  namespace: kube-system
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - hubble.my-loving-rke2-cluster.fr
      secretName: hubble.my-loving-rke2-cluster.fr
  rules:
    - host: hubble.my-loving-rke2-cluster.fr
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hubble-ui
                port:
                  name: http
```

I also have this error when I deploy a HelmChart object that contains Ingress resources; the helm-install-xxx Job never succeeds.
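For anyone hitting the same thing, the actual admission error can be pulled out of the stuck install like this (keeping the xxx placeholder from the comment above for the job name):

```sh
# The job log usually shows the Ingress being rejected at admission time when
# the apiserver cannot reach the webhook.
kubectl -n kube-system logs job/helm-install-xxx
kubectl -n kube-system describe job helm-install-xxx
```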
OK, so yeah, the problem is that the apiserver is being blocked from accessing the webhook. That makes sense.
Is there some improvement we can make to the policy?
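One direction such an improvement could take, sketched under assumptions: the apiserver is host-networked, so its traffic reaches the webhook from node IPs rather than pod IPs. The policy name here is hypothetical, and 10.42.0.0/16 is only RKE2's default cluster (pod) CIDR — substitute your cluster's value:

```sh
# Sketch only: allow the webhook port from everywhere except the pod network,
# so the host-networked apiserver can reach it while ordinary pods cannot.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: rke2-ingress-nginx-admission-from-apiserver
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: rke2-ingress-nginx
      app.kubernetes.io/name: rke2-ingress-nginx
  ingress:
    - from:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 10.42.0.0/16
      ports:
        - protocol: TCP
          port: webhook
  policyTypes:
    - Ingress
EOF
```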
Closing issue after validation. Network Policy rke2-flannel-host-networking does not exist when a CNI other than canal is used, and the network policy for access to the ingress controller webhook is fixed in rc2.

Validated using rke2 version v1.29.2-rc2+rke2r1

Environment Details
Infrastructure
Node(s) CPU architecture, OS, and Version:
Cluster Configuration:
Config.yaml:
Steps to reproduce
Validation results:
- Network Policy rke2-flannel-host-networking exists when cni: canal (default cni) is used
- Network Policy rke2-flannel-host-networking does not exist when cni: cilium is used
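The check behind those two results is essentially:

```sh
# Expect a match under cni: canal and no output under cni: cilium.
kubectl get networkpolicy -A | grep rke2-flannel-host-networking
```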
Original issue:

Environmental Info:
RKE2 Version:
Node(s) CPU architecture, OS, and Version:
Cluster Configuration:
3 servers, 3 agents
Installed with lablabs ansible-role release 1.28.0
Describe the bug:
When I enable the cis-1.23 profile and use calico as the CNI, the following network policy is created in the kube-system, kube-public, and default namespaces.
Steps To Reproduce:
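A minimal config.yaml for reproducing this, assuming nothing beyond what the report states (path and option names are the stock RKE2 defaults):

```sh
# Write a minimal RKE2 server config matching the report, then restart the
# rke2-server service so the profile and CNI take effect.
mkdir -p /etc/rancher/rke2
cat > /etc/rancher/rke2/config.yaml <<'EOF'
profile: cis-1.23
cni: calico
EOF
```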
Expected behavior:
Can you explain to me whether this is normal?
Also, I'm not able to delete it; "something" recreates it...
Actual behavior:
Additional context / logs: