CNI chart docs tweaks #324

Merged
merged 1 commit on Feb 3, 2025

87 changes: 56 additions & 31 deletions docs/networking/basic_network_options.md
@@ -2,20 +2,29 @@
title: Network Options
---

RKE2 requires a CNI plugin to connect pods and services. The Canal CNI plugin is the default but all CNI plugins are supported. All CNI
plugins get installed via a helm chart after the main components are up and running and can be customized by modifying the helm chart options.
Kubernetes requires installation of one or more CNI Plugins to provide Pod networking. RKE2 bundles four primary CNI Plugins: Canal, Cilium, Calico, and Flannel. Only Calico and Flannel support Microsoft Windows. RKE2 also includes Multus as a secondary CNI Plugin, which must be enabled alongside a primary CNI Plugin. For more information, see the [Multus and SR-IOV](multus_sriov.md) documentation.

Canal is the default CNI Plugin, but all bundled plugins are supported. Bundled CNI Plugins are installed via Helm chart, and can be customized by deploying a HelmChartConfig with additional chart values. For more information on using HelmChartConfig resources, see the [Helm Integration](../helm.md) documentation, and the CNI-specific examples provided below.

## Install a CNI plugin
## Select a CNI Plugin

RKE2 integrates with four different CNI plugins: Canal, Cilium, Calico and Flannel. Note that only Calico and Flannel are options for RKE2 deployments with Windows nodes.
Use the `cni` [configuration file key](../install/configuration.md) to select the CNI Plugin you wish to use. If you do not want to use any of the bundled CNI Plugins, you can set `cni` to `none`. Note that nodes will remain NotReady and be tainted unschedulable until a CNI Plugin is installed.

The next tabs inform how to deploy each CNI plugin and override the default options:
```yaml
# /etc/rancher/rke2/config.yaml
cni: canal
```

Bundled CNI Plugins are provided as AddOns that deploy a HelmChart resource, as described in the [Helm Integration](../helm.md) documentation. CNI Plugin charts are named `rke2-<CNI-PLUGIN-NAME>` and can be found in the `kube-system` namespace.

To customize the Helm chart values for a bundled CNI Plugin chart, you must create a HelmChartConfig resource that matches the name and namespace of its corresponding HelmChart. See the tabs below for examples of customizing the chart values for each of the bundled CNI Plugins.

Default chart values can be found by browsing the [RKE2 charts repository](https://github.com/rancher/rke2-charts/tree/main/charts), and referencing `values.yaml` for the version of the chart bundled with your RKE2 version.
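
In general, an override for any bundled CNI Plugin chart follows the shape sketched below; the placeholder name is illustrative, and concrete values for each plugin are shown in the tabs that follow:

```yaml
# /var/lib/rancher/rke2/server/manifests/rke2-<CNI-PLUGIN-NAME>-config.yaml
---
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  # Must match the name and namespace of the bundled HelmChart,
  # e.g. rke2-canal, rke2-cilium, rke2-calico, or rke2-flannel.
  name: rke2-<CNI-PLUGIN-NAME>
  namespace: kube-system
spec:
  valuesContent: |-
    # Chart values to override go here.
```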

<Tabs groupId = "CNIplugin" queryString>
<TabItem value="Canal CNI plugin" default>
<TabItem value="Canal CNI Plugin" default>

Canal means using Flannel for inter-node traffic and Calico for intra-node traffic and network policies. By default, it will use vxlan encapsulation to create an overlay network among nodes. Canal is deployed by default in RKE2 and thus nothing must be configured to activate it. To override the default Canal options you should create a HelmChartConfig resource. The HelmChartConfig resource must match the name and namespace of its corresponding HelmChart. For example to override the flannel interface, you can apply the following config:
Canal uses Flannel for inter-node traffic and Calico for intra-node traffic and network policies. By default, it uses VXLAN encapsulation to create an overlay network among nodes. For example, to override the flannel interface, you can apply the following chart values:

```yaml
# /var/lib/rancher/rke2/server/manifests/rke2-canal-config.yaml
@@ -31,7 +40,7 @@ spec:
iface: "eth1"
```

Starting with RKE2 v1.23 it is possible to use flannel's [wireguard backend](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#wireguard) for in-kernel WireGuard encapsulation and encryption ([Users of kernels < 5.6 need to install a module](https://www.wireguard.com/install/)). This can be achieved using the following config:
Starting with RKE2 v1.23 it is possible to use flannel's [wireguard backend](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#wireguard) for in-kernel WireGuard encapsulation and encryption ([Users of kernels < 5.6 need to install a module](https://www.wireguard.com/install/)). This can be achieved using the following chart values:

```yaml
# /var/lib/rancher/rke2/server/manifests/rke2-canal-config.yaml
@@ -59,12 +68,12 @@ Canal requires the iptables or xtables-nft package to be installed on the node.
Canal is currently not supported on clusters with Windows nodes.
:::

Please check [Known issues and Limitations](../known_issues.md) if you experience IP allocation problems
Please check [Known issues and Limitations](../known_issues.md) if you experience IP allocation problems.

</TabItem>
<TabItem value="Cilium CNI plugin" default>
<TabItem value="Cilium CNI Plugin" default>

To deploy Cilium, pass `cilium` as the value of the `--cni` flag. Ensure that the nodes have the right required kernel version (>= 4.9.17) and they meet the [requirements](https://docs.cilium.io/en/stable/operations/system_requirements/). To override the default options, please use a HelmChartConfig resource. The HelmChartConfig resource must match the name and namespace of its corresponding HelmChart. For example, to enable eni:
When using Cilium, you must ensure that nodes have a supported kernel version (>= 4.9.17) and that they meet the [requirements](https://docs.cilium.io/en/stable/operations/system_requirements/). To override the default options, please use a HelmChartConfig resource. The HelmChartConfig resource must match the name and namespace of its corresponding HelmChart. For example, to enable eni:

```yaml
# /var/lib/rancher/rke2/server/manifests/rke2-cilium-config.yaml
@@ -82,7 +91,7 @@ spec:

For more information about values available in the Cilium chart, please refer to the [rke2-charts repository](https://github.com/rancher/rke2-charts/blob/main/charts/rke2-cilium/rke2-cilium/1.14.400/values.yaml).

Cilium includes advanced features to fully replace kube-proxy and implement the routing of services using eBPF instead of iptables. It is not recommended to replace kube-proxy by Cilium if your kernel is not v5.8 or newer, as important bug fixes and features will be missing. To activate this mode, deploy rke2 with the flag `--disable-kube-proxy` and the following cilium configuration:
Cilium includes advanced features to fully replace kube-proxy and implement the routing of services using eBPF instead of iptables. It is not recommended to replace kube-proxy with Cilium if your kernel is not v5.8 or newer, as important bug fixes and features will be missing. To activate this mode, deploy RKE2 with `disable-kube-proxy: true` in the configuration file, and the following chart values:

```yaml
# /var/lib/rancher/rke2/server/manifests/rke2-cilium-config.yaml
@@ -95,14 +104,14 @@ metadata:
spec:
valuesContent: |-
kubeProxyReplacement: true
k8sServiceHost: <KUBE_API_SERVER_IP>
k8sServicePort: <KUBE_API_SERVER_PORT>
k8sServiceHost: "localhost"
k8sServicePort: "6443"
```
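
For reference, a minimal sketch of the corresponding server configuration, using the `disable-kube-proxy` and `cni` keys described above:

```yaml
# /etc/rancher/rke2/config.yaml
# Disable the bundled kube-proxy so Cilium's eBPF replacement handles service routing.
disable-kube-proxy: true
cni: cilium
```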

For more information, please check the [upstream docs](https://docs.cilium.io/en/stable/network/kubernetes/kubeproxy-free/).

Cilium also includes an observability platform called [Hubble](https://docs.cilium.io/en/stable/overview/intro/#what-is-hubble).
To enable Hubble the following configuration is required:
To enable Hubble, use the following chart values:

```yaml
# /var/lib/rancher/rke2/server/manifests/rke2-cilium-config.yaml
@@ -123,12 +132,12 @@ spec:
```

:::warning
Cilium is currently not supported in the Windows installation of RKE2
Cilium is currently not supported on Windows.
:::

</TabItem>
<TabItem value="Calico CNI plugin" default>
To deploy Calico as the CNI plugin for RKE2 pass `calico` as the value of the `--cni` flag. To override the default options, please use a HelmChartConfig resource. The HelmChartConfig resource must match the name and namespace of its corresponding HelmChart. For example, to change the mtu:
<TabItem value="Calico CNI Plugin" default>
For example, to change the interface MTU, you can use the following chart values:

```yaml
# /var/lib/rancher/rke2/server/manifests/rke2-calico-config.yaml
@@ -145,7 +154,7 @@ spec:
mtu: 9000
```

Because of a kernel bug in versions previous to 5.7, by default we are disabling the checksum offload done by the kernel. That config caps TCP performance to ~2.5Gbps. If you require higher throughput and have a kernel version greater than 5.7, you can enable the checksum offloading by using the following HelmChartConfig:
Because of a kernel bug in versions prior to 5.7, Calico disables hardware checksum offload. This caps TCP performance at roughly 2.5 Gbps. If you require higher throughput and your kernel version is 5.7 or newer, you can enable checksum offloading by using the following HelmChartConfig:

```yaml
# /var/lib/rancher/rke2/server/manifests/rke2-calico-config.yaml
@@ -168,51 +177,67 @@ Calico requires the iptables or xtables-nft package to be installed on the node
:::

</TabItem>
<TabItem value="Flannel CNI plugin" default>
Starting with RKE2 2024 Feb release (v1.29.2, v1.28.7, v1.27.11, v1.26.14), Flannel can be deployed as the CNI plugin. To do so, pass `flannel` as the value of the `--cni` flag.

<TabItem value="Flannel CNI Plugin" default>
:::note
Only vxlan backend is supported at this point
Flannel is available as of the February 2024 releases (v1.29.2, v1.28.7, v1.27.11, v1.26.14).
Only the `vxlan` backend is supported.
:::
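
To select Flannel, set the `cni` configuration file key:

```yaml
# /etc/rancher/rke2/config.yaml
cni: flannel
```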

For example, to change the interface MTU, you can use the following chart values:

```yaml
# /var/lib/rancher/rke2/server/manifests/rke2-flannel-config.yaml
---
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-flannel
  namespace: kube-system
spec:
  valuesContent: |-
    flannel:
      mtu: 9000
```

:::warning
Flannel does not support network policies. Therefore, it is not recommended for hardened installations
Flannel does not support network policies. Therefore, it is not recommended for hardened installations.
:::

:::warning
Flannel support in RKE2 is currently experimental. Do not run it on production systems before extensive testing
Flannel support in RKE2 is currently experimental. Do not run it on production systems without extensive testing.
:::

</TabItem>
</Tabs>

## Dual-stack configuration

IPv4/IPv6 dual-stack networking enables the allocation of both IPv4 and IPv6 addresses to Pods and Services. To configure RKE2 in dual-stack mode, in the control-plane nodes, you must set a valid IPv4/IPv6 dual-stack cidr for pods and services. To do so, use the flags `--cluster-cidr` and `--service-cidr` for example:
IPv4/IPv6 dual-stack networking enables the allocation of both IPv4 and IPv6 addresses to Pods and Services. To configure RKE2 in dual-stack mode, you must set valid IPv4/IPv6 dual-stack CIDRs for pods and services on the control-plane nodes. To do so, use the `cluster-cidr` and `service-cidr` configuration file keys:

```yaml
# /etc/rancher/rke2/config.yaml
cluster-cidr: "10.42.0.0/16,2001:cafe:42::/56"
service-cidr: "10.43.0.0/16,2001:cafe:43::/112"
```

Each CNI plugin may require a different configuration for dual-stack:
Each CNI Plugin may require a different configuration for dual-stack:

<Tabs groupId = "CNIplugin" queryString>
<TabItem value="Canal CNI plugin" default>
<TabItem value="Canal CNI Plugin" default>

Canal automatically detects the RKE2 configuration for dual-stack and does not need any extra configuration. Dual-stack is currently not supported in the Windows installations of RKE2.

</TabItem>
<TabItem value="Cilium CNI plugin" default>
<TabItem value="Cilium CNI Plugin" default>

Cilium automatically detects the RKE2 configuration for dual-stack and does not need any extra configuration.

</TabItem>
<TabItem value="Calico CNI plugin" default>
<TabItem value="Calico CNI Plugin" default>

Calico automatically detects the RKE2 configuration for dual-stack and does not need any extra configuration. When deployed in dual-stack mode, it creates two different ippool resources. Note that when using dual-stack, Calico leverages BGP instead of VXLAN encapsulation. Dual-stack and BGP are currently not supported in the Windows installations of RKE2.
</TabItem>
<TabItem value="Flannel CNI plugin" default>
<TabItem value="Flannel CNI Plugin" default>

Flannel automatically detects the RKE2 configuration for dual-stack and does not need any extra configuration.

20 changes: 9 additions & 11 deletions docs/networking/multus_sriov.md
@@ -5,11 +5,11 @@ title: Multus and SR-IOV

## Using Multus

[Multus CNI](https://github.com/k8snetworkplumbingwg/multus-cni) is a CNI plugin that enables attaching multiple network interfaces to pods. Multus does not replace CNI plugins, instead it acts as a CNI plugin multiplexer. Multus is useful in certain use cases, especially when pods are network intensive and require extra network interfaces that support dataplane acceleration techniques such as SR-IOV.
[Multus CNI](https://github.com/k8snetworkplumbingwg/multus-cni) is a CNI Plugin that enables attaching multiple network interfaces to pods. Multus does not replace CNI Plugins, instead it acts as a CNI Plugin multiplexer. Multus is useful in certain use cases, especially when pods are network intensive and require extra network interfaces that support dataplane acceleration techniques such as SR-IOV.

Multus can not be deployed standalone. It always requires at least one conventional CNI plugin that fulfills the Kubernetes cluster network requirements. That CNI plugin becomes the default for Multus, and will be used to provide the primary interface for all pods.
Multus cannot be deployed standalone. It always requires at least one conventional CNI Plugin that fulfills the Kubernetes cluster network requirements. That CNI Plugin becomes the default for Multus, and will be used to provide the primary interface for all pods.

To enable Multus, add multus as the first list entry in the cni config key, followed by the name of the plugin you want to use alongside Multus (or `none` if you will provide your own default plugin). Note that multus must always be in the first position of the list. For example, to use Multus with canal as the default plugin you could specify:
To enable Multus, specify `multus` as the first list entry in the `cni` configuration file key, followed by the name of the plugin you want to use alongside Multus (or `none` if you will provide your own default plugin). Note that multus must always be in the first position of the list. For example, to use Multus with Canal as the primary CNI Plugin:

```yaml
# /etc/rancher/rke2/config.yaml
@@ -18,8 +18,6 @@ cni:
- canal
```

This can also be specified with command-line arguments, i.e. `--cni=multus,canal` or `--cni=multus --cni=canal`.

For more information about Multus, refer to the [multus-cni](https://github.com/k8snetworkplumbingwg/multus-cni/tree/master/docs) documentation.

## Using Multus with Cilium
@@ -43,17 +41,17 @@ spec:

## Using Multus with the containernetworking plugins

Any CNI plugin can be used as secondary CNI plugin for Multus to provide additional network interfaces attached to a pod. However, it is most common to use the CNI plugins maintained by the containernetworking team (bridge, host-device, macvlan, etc) as secondary CNI plugins for Multus. These containernetworking plugins are automatically deployed when installing Multus. For more information about these plugins, refer to the [containernetworking plugins](https://www.cni.dev/plugins/current) documentation.
Any CNI Plugin can be used as a secondary CNI Plugin for Multus to provide additional network interfaces attached to a pod. However, it is most common to use the CNI Plugins maintained by the Kubernetes ContainerNetworking team (bridge, host-device, macvlan, etc.) as secondary CNI Plugins for Multus. These plugins are automatically deployed when installing Multus. For more information about these plugins, refer to the [ContainerNetworking Plugins](https://www.cni.dev/plugins/current) documentation.

To use any of these plugins, a proper NetworkAttachmentDefinition object will need to be created to define the configuration of the secondary network. The definition is then referenced by pod annotations, which Multus will use to provide extra interfaces to that pod. An example using the macvlan cni plugin with Multus is available [in the multus-cni repo](https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/quickstart.md#storing-a-configuration-as-a-custom-resource).
To use any of these plugins, a proper NetworkAttachmentDefinition object will need to be created to define the configuration of the secondary network. The definition is then referenced by pod annotations, which Multus will use to provide extra interfaces to that pod. An example using the `macvlan` CNI Plugin with Multus is available [in the multus-cni repo](https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/quickstart.md#storing-a-configuration-as-a-custom-resource).
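
As an illustrative sketch only (the object names, master interface `eth1`, and address range below are assumptions, not values from the RKE2 documentation), a macvlan secondary network and a pod attaching to it might look like this:

```yaml
# Hypothetical NetworkAttachmentDefinition using the macvlan plugin with host-local IPAM.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-net
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24",
        "rangeStart": "192.168.1.200",
        "rangeEnd": "192.168.1.216"
      }
    }
---
# Pod requesting an extra interface from the secondary network via annotation.
apiVersion: v1
kind: Pod
metadata:
  name: sample-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-net
spec:
  containers:
  - name: sample
    image: busybox
    command: ["sleep", "infinity"]
```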

## Multus IPAM plugin options

<Tabs groupId = "MultusIPAMplugins">
<TabItem value="host-local" default>
The host-local IPAM plugin allocates IP addresses out of a set of address ranges. It stores the state locally on the host filesystem, so IP addresses are only guaranteed unique on a single host; we therefore don't recommend it for multi-node clusters. This IPAM plugin does not require any extra deployment. For more information, see https://www.cni.dev/plugins/current/ipam/host-local/.
</TabItem>
<TabItem value="Multus DHCP daemon" default>
<TabItem value="Multus DHCP daemon">

Multus provides an optional daemonset to deploy the DHCP daemon required to run the [DHCP IPAM plugin](https://www.cni.dev/plugins/current/ipam/dhcp/).

@@ -77,9 +75,9 @@ This feature is available starting with the 2024-01 releases (v1.29.1+rke2r1, v1

NOTE: You should write this file before starting RKE2.
</TabItem>
<TabItem value="Whereabouts" default>
<TabItem value="Whereabouts">

[Whereabouts](https://github.com/k8snetworkplumbingwg/whereabouts) is an IP Address Management (IPAM) CNI plugin that assigns IP addresses cluster-wide.
[Whereabouts](https://github.com/k8snetworkplumbingwg/whereabouts) is an IP Address Management (IPAM) CNI Plugin that assigns IP addresses cluster-wide.
RKE2 includes the option to use Whereabouts with Multus to manage the IP addresses of the additional interfaces created through Multus.
In order to do this, you need to use [HelmChartConfig](../helm.md#customizing-packaged-components-with-helmchartconfig) to configure the Multus CNI to use Whereabouts.

@@ -114,7 +112,7 @@ that must be fulfilled to consider the node as SR-IOV capable:
* The host operating system must have IOMMU virtualization enabled
* The host operating system must include drivers capable of SR-IOV (e.g. i40e, vfio-pci, etc.)

The SR-IOV CNI plugin cannot be used as the default CNI plugin for Multus; it must be deployed alongside both Multus and a traditional CNI plugin. The SR-IOV CNI helm chart can be found in the `rancher-charts` Helm repo. For more information see [Rancher Helm Charts documentation](https://ranchermanager.docs.rancher.com/pages-for-subheaders/helm-charts-in-rancher).
The SR-IOV CNI Plugin cannot be used as the default CNI Plugin for Multus; it must be deployed alongside both Multus and a traditional CNI Plugin. The SR-IOV CNI Helm chart can be found in the `rancher-charts` Helm repo. For more information, see the [Rancher Helm Charts documentation](https://ranchermanager.docs.rancher.com/pages-for-subheaders/helm-charts-in-rancher).

After installing the SR-IOV CNI chart, the SR-IOV operator will be deployed. Then, you must specify which nodes in the cluster are SR-IOV capable by labeling them with `feature.node.kubernetes.io/network-sriov.capable=true`:
