Add guide on external load balancers (#152)
* Redirect all old pages
* Add dependabot for yarn/docusarous dependencies
* Add guide on external load balancers

Signed-off-by: Derek Nola <[email protected]>
Co-Authored-By: Brad Davidson <[email protected]>
dereknola and brandond authored Aug 1, 2023
1 parent 4f72417 commit cb98257
Showing 19 changed files with 236 additions and 78 deletions.
7 changes: 7 additions & 0 deletions .github/workflows/dependabot.yml
@@ -0,0 +1,7 @@
version: 2
updates:
- package-ecosystem: "npm"
# Look for `package.json` and `lock` files in the `root` directory
directory: "/"
schedule:
interval: "weekly"
5 changes: 0 additions & 5 deletions docs/backup-restore/backup-restore.md

This file was deleted.

194 changes: 194 additions & 0 deletions docs/datastore/cluster-loadbalancer.md
@@ -0,0 +1,194 @@
---
title: Cluster Load Balancer
weight: 30
---


This section describes how to install an external load balancer in front of a High Availability (HA) K3s cluster's server nodes. Two examples are provided: Nginx and HAProxy.

:::tip
External load balancers should not be confused with ServiceLB, an embedded controller that allows for use of Kubernetes LoadBalancer Services without deploying a third-party load-balancer controller. For more details, see [Service Load Balancer](../networking/networking.md#service-load-balancer).

External load-balancers can be used to provide a fixed registration address for registering nodes, or for external access to the Kubernetes API Server. For exposing LoadBalancer Services, external load-balancers can be used alongside or instead of ServiceLB, but in most cases, replacement load-balancer controllers such as MetalLB or Kube-VIP are a better choice.
:::

## Prerequisites

All nodes in this example are running Ubuntu 20.04.

For both examples, assume that an [HA K3s cluster with embedded etcd](../datastore/ha-embedded.md) has been installed on 3 nodes.

Each K3s server is configured with:
```yaml
# /etc/rancher/k3s/config.yaml
token: lb-cluster-gd
tls-san: 10.10.10.100
```
The nodes have hostnames and IPs of:
* server-1: `10.10.10.50`
* server-2: `10.10.10.51`
* server-3: `10.10.10.52`


Two additional nodes for load balancing are configured with hostnames and IPs of:
* lb-1: `10.10.10.98`
* lb-2: `10.10.10.99`

Three additional nodes exist with hostnames and IPs of:
* agent-1: `10.10.10.101`
* agent-2: `10.10.10.102`
* agent-3: `10.10.10.103`

## Setup Load Balancer
<Tabs>
<TabItem value="HAProxy" default>

[HAProxy](http://www.haproxy.org/) is an open source option that provides a TCP load balancer. It also supports HA for the load balancer itself, ensuring redundancy at all levels. See [HAProxy Documentation](http://docs.haproxy.org/2.8/intro.html) for more info.

Additionally, we will use Keepalived to generate a virtual IP (VIP) that will be used to access the cluster. See the [Keepalived documentation](https://www.keepalived.org/manpage.html) for more info.



1) Install HAProxy and Keepalived:

```bash
sudo apt-get install haproxy keepalived
```

2) Add the following to `/etc/haproxy/haproxy.cfg` on lb-1 and lb-2:

```
frontend k3s-frontend
    bind *:6443
    mode tcp
    option tcplog
    default_backend k3s-backend

backend k3s-backend
    mode tcp
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s
    server server-1 10.10.10.50:6443 check
    server server-2 10.10.10.51:6443 check
    server server-3 10.10.10.52:6443 check
```
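Before layering Keepalived on top, it can help to confirm that each backend apiserver port is actually reachable from the load balancer. A minimal sketch, assuming the server IPs from this example; `check_port` is a hypothetical helper (not part of the guide) that uses bash's built-in `/dev/tcp`. On the load balancer itself, `haproxy -c -f /etc/haproxy/haproxy.cfg` additionally validates the config syntax before a restart.

```shell
#!/usr/bin/env bash
# Probe a TCP port using bash's built-in /dev/tcp redirection.
# Prints "<host>:<port> up" if a connection succeeds, "... down" otherwise.
check_port() {
  local host=$1 port=$2
  if timeout 1 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} up"
  else
    echo "${host}:${port} down"
  fi
}

# Server IPs from this example:
for ip in 10.10.10.50 10.10.10.51 10.10.10.52; do
  check_port "$ip" 6443
done
```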
3) Add the following to `/etc/keepalived/keepalived.conf` on lb-1 and lb-2:

```
vrrp_script chk_haproxy {
    script 'killall -0 haproxy' # faster than pidof
    interval 2
}

vrrp_instance haproxy-vip {
    interface eth1
    state <STATE>       # MASTER on lb-1, BACKUP on lb-2
    priority <PRIORITY> # 200 on lb-1, 100 on lb-2
    virtual_router_id 51
    virtual_ipaddress {
        10.10.10.100/24
    }
    track_script {
        chk_haproxy
    }
}
}
```
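Once Keepalived is running, the VIP should be bound on exactly one of the two load balancers (the current MASTER). A sketch for checking which node holds it, assuming the interface and VIP from the config above; `has_vip` is a hypothetical helper that scans `ip addr` output on stdin:

```shell
#!/usr/bin/env bash
# Report whether the Keepalived VIP appears in `ip -4 addr show` output.
# Reads the output on stdin so it can be piped from the real command.
has_vip() {
  local vip=$1
  if grep -q "inet ${vip}/" -; then
    echo "VIP ${vip} present"
  else
    echo "VIP ${vip} absent"
  fi
}

# Run on lb-1 and lb-2; only the current MASTER should report "present".
ip -4 addr show eth1 | has_vip 10.10.10.100
```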

4) Restart HAProxy and Keepalived on lb-1 and lb-2:

```bash
systemctl restart haproxy
systemctl restart keepalived
```

5) On agent-1, agent-2, and agent-3, run the following command to install k3s and join the cluster:

```bash
curl -sfL https://get.k3s.io | K3S_TOKEN=lb-cluster-gd sh -s - agent --server https://10.10.10.100:6443
```

You can now use `kubectl` from any server node to interact with the cluster.
```bash
root@server-1 $ k3s kubectl get nodes -A
NAME       STATUS   ROLES                       AGE     VERSION
agent-1    Ready    <none>                      32s     v1.27.3+k3s1
agent-2    Ready    <none>                      20s     v1.27.3+k3s1
agent-3    Ready    <none>                      9s      v1.27.3+k3s1
server-1   Ready    control-plane,etcd,master   4m22s   v1.27.3+k3s1
server-2   Ready    control-plane,etcd,master   3m58s   v1.27.3+k3s1
server-3   Ready    control-plane,etcd,master   3m12s   v1.27.3+k3s1
```

</TabItem>

<TabItem value="Nginx">

## Nginx Load Balancer

:::warning
Nginx does not natively support a High Availability (HA) configuration. If setting up an HA cluster, having a single load balancer in front of K3s will reintroduce a single point of failure.
:::

[Nginx Open Source](http://nginx.org/) provides a TCP load balancer. See [Using nginx as HTTP load balancer](https://nginx.org/en/docs/http/load_balancing.html) for more info.

1) Create a `nginx.conf` file on lb-1 with the following contents:

```
events {}

stream {
  upstream k3s_servers {
    server 10.10.10.50:6443;
    server 10.10.10.51:6443;
    server 10.10.10.52:6443;
  }

  server {
    listen 6443;
    proxy_pass k3s_servers;
  }
}
```
```
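If your server IPs differ from this example, the stream config above can be generated rather than hand-edited. A hedged sketch; `gen_nginx_conf` is a hypothetical helper, not part of the guide:

```shell
#!/usr/bin/env bash
# Emit an nginx stream-mode config for a list of K3s server IPs.
gen_nginx_conf() {
  echo "events {}"
  echo "stream {"
  echo "  upstream k3s_servers {"
  local ip
  for ip in "$@"; do
    echo "    server ${ip}:6443;"
  done
  echo "  }"
  echo "  server {"
  echo "    listen 6443;"
  echo "    proxy_pass k3s_servers;"
  echo "  }"
  echo "}"
}

# Server IPs from this example:
gen_nginx_conf 10.10.10.50 10.10.10.51 10.10.10.52 > nginx.conf
```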

2) Run the Nginx load balancer on lb-1:

Using Docker:

```bash
docker run -d --restart unless-stopped \
  -v ${PWD}/nginx.conf:/etc/nginx/nginx.conf \
  -p 6443:6443 \
  nginx:stable
```

Or [install nginx](https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-open-source/) and then run:

```bash
cp nginx.conf /etc/nginx/nginx.conf
systemctl start nginx
```

3) On agent-1, agent-2, and agent-3, run the following command to install k3s and join the cluster:

```bash
curl -sfL https://get.k3s.io | K3S_TOKEN=lb-cluster-gd sh -s - agent --server https://10.10.10.98:6443
```

You can now use `kubectl` from any server node to interact with the cluster.
```bash
root@server-1 $ k3s kubectl get nodes -A
NAME       STATUS   ROLES                       AGE     VERSION
agent-1    Ready    <none>                      30s     v1.27.3+k3s1
agent-2    Ready    <none>                      22s     v1.27.3+k3s1
agent-3    Ready    <none>                      13s     v1.27.3+k3s1
server-1   Ready    control-plane,etcd,master   4m49s   v1.27.3+k3s1
server-2   Ready    control-plane,etcd,master   3m58s   v1.27.3+k3s1
server-3   Ready    control-plane,etcd,master   3m16s   v1.27.3+k3s1
```
</TabItem>
</Tabs>
16 changes: 12 additions & 4 deletions docs/datastore/ha-embedded.md
@@ -18,13 +18,18 @@ An HA K3s cluster with embedded etcd is composed of:
* Optional: A **fixed registration address** for agent nodes to register with the cluster

To get started, first launch a server node with the `cluster-init` flag to enable clustering, and with a token that will be used as a shared secret to join additional servers to the cluster.

```bash
curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - server --cluster-init
curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - server \
    --cluster-init \
    --tls-san=<FIXED_IP> # Optional, needed if using a fixed registration address
```

After launching the first server, join the second and third servers to the cluster using the shared secret:
```bash
curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - server --server https://<ip or hostname of server1>:6443
curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - server \
    --server https://<ip or hostname of server1>:6443 \
    --tls-san=<FIXED_IP> # Optional, needed if using a fixed registration address
```
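The same flags can equivalently be kept in K3s's config file rather than passed on the command line. A sketch for the joining servers, reusing the placeholders from the commands above (values are illustrative):

```yaml
# /etc/rancher/k3s/config.yaml on the second and third servers
token: SECRET
server: https://<ip or hostname of server1>:6443
tls-san:
  - <FIXED_IP> # Optional, needed if using a fixed registration address
```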

Check to see that the second and third servers are now part of the cluster:
@@ -50,9 +55,12 @@ There are a few config flags that must be the same in all server nodes:
* Feature related flags: `--secrets-encryption`

## Existing single-node clusters

:::info Version Gate
Available as of [v1.22.2+k3s1](https://github.com/k3s-io/k3s/releases/tag/v1.22.2%2Bk3s1)
:::

If you have an existing cluster using the default embedded SQLite database, you can convert it to etcd by simply restarting your K3s server with the `--cluster-init` flag. Once you've done that, you'll be able to add additional instances as described above.

If an etcd datastore is found on disk because that node has either initialized or joined a cluster already, the datastore arguments (`--cluster-init`, `--server`, `--datastore-endpoint`, etc.) are ignored.

>**Important:** K3s v1.22.2 and newer support migration from SQLite to etcd. Older versions will create a new empty datastore if you add `--cluster-init` to an existing server.
26 changes: 15 additions & 11 deletions docs/datastore/ha.md
Expand Up @@ -32,6 +32,7 @@ For example, a command like the following could be used to install the K3s serve
curl -sfL https://get.k3s.io | sh -s - server \
    --token=SECRET \
    --datastore-endpoint="mysql://username:password@tcp(hostname:3306)/database-name" \
    --tls-san=<FIXED_IP> # Optional, needed if using a fixed registration address
```

The datastore endpoint format differs based on the database type. For details, refer to the section on [datastore endpoint formats.](../datastore/datastore.md#datastore-endpoint-format-and-functionality)
@@ -73,24 +74,27 @@ There are a few config flags that must be the same in all server nodes:
Ensure that you retain a copy of this token as it is required when restoring from backup and adding nodes. Previously, K3s did not enforce the use of a token when using external SQL datastores.
:::

### 4. Optional: Join Agent Nodes

Because K3s server nodes are schedulable by default, agent nodes are not required for a HA K3s cluster. However, you may wish to have dedicated agent nodes to run your apps and services.
### 4. Optional: Configure a Fixed Registration Address

Joining agent nodes in an HA cluster is the same as joining agent nodes in a single server cluster. You just need to specify the URL the agent should register to (either one of the server IPs or a fixed registration address) and the token it should use.

```bash
K3S_TOKEN=SECRET k3s agent --server https://server-or-fixed-registration-address:6443
```

### 5. Optional: Configure a Fixed Registration Address

Agent nodes need a URL to register against. This can be the IP or hostname of any server node, but in many cases those may change over time. For example, if you are running your cluster in a cloud that supports scaling groups, you may scale the server node group up and down over time, causing nodes to be created and destroyed and thus having different IPs from the initial set of server nodes. In this case, you should have a stable endpoint in front of the server nodes that will not change over time. This endpoint can be set up using any number of approaches, such as:
Agent nodes need a URL to register against. This can be the IP or hostname of any server node, but in many cases those may change over time. For example, if running your cluster in a cloud that supports scaling groups, nodes may be created and destroyed over time, changing to different IPs from the initial set of server nodes. It would be best to have a stable endpoint in front of the server nodes that will not change over time. This endpoint can be set up using any number of approaches, such as:

* A layer-4 (TCP) load balancer
* Round-robin DNS
* Virtual or elastic IP addresses

See [Cluster Load Balancer](./cluster-loadbalancer.md) for example configurations.

This endpoint can also be used for accessing the Kubernetes API. So you can, for example, modify your [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) file to point to it instead of a specific node.

To avoid certificate errors in such a configuration, you should configure the server with the `--tls-san YOUR_IP_OR_HOSTNAME_HERE` option. This option adds an additional hostname or IP as a Subject Alternative Name in the TLS cert, and it can be specified multiple times if you would like to access via both the IP and the hostname.
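For instance, to access the cluster via both a VIP and a DNS name, the option can be repeated, or listed in the config file. A sketch with illustrative values (the hostname here is hypothetical):

```yaml
# /etc/rancher/k3s/config.yaml
tls-san:
  - 10.10.10.100       # fixed registration VIP
  - k3s.example.com    # hypothetical DNS name for the same endpoint
```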

### 5. Optional: Join Agent Nodes

Because K3s server nodes are schedulable by default, agent nodes are not required for a HA K3s cluster. However, you may wish to have dedicated agent nodes to run your apps and services.

Joining agent nodes in an HA cluster is the same as joining agent nodes in a single server cluster. You just need to specify the URL the agent should register to (either one of the server IPs or a fixed registration address) and the token it should use.

```bash
K3S_TOKEN=SECRET k3s agent --server https://server-or-fixed-registration-address:6443
```
5 changes: 0 additions & 5 deletions docs/installation/datastore.md

This file was deleted.

5 changes: 0 additions & 5 deletions docs/installation/disable-flags.md

This file was deleted.

5 changes: 0 additions & 5 deletions docs/reference/agent-config.md

This file was deleted.

5 changes: 0 additions & 5 deletions docs/reference/server-config.md

This file was deleted.

15 changes: 7 additions & 8 deletions docusaurus.config.js
@@ -104,14 +104,13 @@ module.exports = {
'@docusaurus/plugin-client-redirects',
{
redirects: [
{
from: '/installation/ha',
to: '/datastore/ha',
},
{
from: '/installation/ha-embedded',
to: '/datastore/ha-embedded',
},
{ from: '/installation/ha', to: '/datastore/ha' },
{ from: '/installation/ha-embedded', to: '/datastore/ha-embedded' },
{ from: '/installation/datastore', to: '/datastore' },
{ from: '/installation/disable-flags', to: '/installation/server-roles' },
{ from: '/backup-restore/backup-restore', to: '/datastore/backup-restore' },
{ from: '/reference/agent-config', to: '/cli/agent' },
{ from: '/reference/server-config', to: '/cli/server' },
],
},
],
1 change: 1 addition & 0 deletions sidebars.js
@@ -33,6 +33,7 @@ module.exports = {
'datastore/backup-restore',
'datastore/ha-embedded',
'datastore/ha',
'datastore/cluster-loadbalancer',
],
},
{
