Merge pull request #7 from kthcloud/install-k8s-cluster-new
Add KubeVirt blog and improve guides for installing K8s cluster and new host provisioning guide
saffronjam authored Apr 14, 2024
2 parents 225426a + e695fc5 commit d8c9684
Showing 12 changed files with 510 additions and 280 deletions.
40 changes: 40 additions & 0 deletions hugo/content/News/2024-04-14.md
@@ -0,0 +1,40 @@
---
title: "From CloudStack to KubeVirt: The Journey to a New Cloud"
---

# From CloudStack to KubeVirt: The Journey to a New Cloud
**Emil Karlsson, 2024-04-14**

## Introduction
Ever since the inception of kthcloud, we have been using CloudStack as our cloud management platform. While CloudStack has served us well, it has also had its limitations. As we have grown, we have found that CloudStack might not be the best fit for our needs due to its complexity and lack of flexibility. Late last year, we started looking into alternatives and decided to move to Kubernetes, with KubeVirt as the virtualization layer. This blog post will take you through our journey from CloudStack to KubeVirt.

## Why KubeVirt?
KubeVirt is a virtualization add-on for Kubernetes that allows you to run virtual machines alongside containers in the same cluster. This means that we can run our VMs and containers on the same platform, simplifying our infrastructure and making it easier to manage.

<img src="../../images/blog/kubevirt_overview.png" alt="kubevirt overview" /><br/>
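To make this concrete, below is a minimal sketch of what a KubeVirt `VirtualMachine` manifest looks like; the VM name and container disk image are illustrative, not our production configuration:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm               # illustrative name
spec:
  running: true               # start the VM as soon as it is created
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: rootdisk
          containerDisk:      # boot from an OS image shipped as a container image
            image: quay.io/containerdisks/fedora:latest
```

Once applied with `kubectl apply -f`, the VM is scheduled by Kubernetes like any other workload and appears alongside regular pods.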

## Our solution
When moving away from CloudStack, it was imperative that we found a solution that was easy to manage and maintain. Since KubeVirt is built on top of Kubernetes, we could harness any platform that supports Kubernetes. We decided to use Rancher to manage our Kubernetes clusters, as it provides a user-friendly interface and simplifies cluster management. While Rancher offers Kubernetes cluster creation tools, the remaining question was how Rancher itself would be managed. We decided to run it on K3s and called that cluster the `sys-cluster`. Any cluster that is then created using Rancher is called a `deploy-cluster`. The sys-cluster exists only in our main zone `se-flem`, while deploy-clusters are created in all zones.
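In practice, bootstrapping the sys-cluster boils down to roughly the following; this is a sketch assuming a single-node K3s install, and the Rancher hostname is illustrative rather than our actual configuration:

```bash
# Install K3s on the sys-cluster node
curl -sfL https://get.k3s.io | sh -

# Rancher needs cert-manager to provision its TLS certificates
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace --set installCRDs=true

# Install Rancher itself (the hostname is illustrative)
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm install rancher rancher-latest/rancher \
  --namespace cattle-system --create-namespace \
  --set hostname=rancher.example.com
```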

## Progress so far
We have been working on this project for a few months now and have made significant progress. We have set up a new sys-cluster in the `se-flem` zone using K3s; it runs Rancher, which will be used to manage our deploy-clusters. We have also set up a deploy-cluster in the `se-flem` zone, which is currently running KubeVirt. The `se-kista` zone will be set up once we have migrated all hosts in the `se-flem` zone (apart from the management server for CloudStack, which will be migrated last).

| Zone | Host | Status | Note |
|------|--------------|--------------|---------|
| se-flem | se-flem-001 | Pending | Management server for CloudStack, will be migrated last |
| se-flem | se-flem-002 | Migrated | Control-node for deploy-cluster |
| se-flem | se-flem-003 | Migrated | Control-node and worker-node for sys-cluster |
| se-flem | se-flem-006 | Pending | |
| se-flem | se-flem-013 | Migrated | Worker-node for deploy-cluster |
| se-flem | se-flem-014 | Pending | |
| se-flem | se-flem-015 | Pending | |
| se-flem | se-flem-016 | Pending | |
| se-flem | se-flem-017 | Pending | |
| se-flem | se-flem-018 | Pending | |
| se-flem | se-flem-019 | Pending | |
| se-kista | t01n05 | Pending | Awaiting migration of all hosts in `se-flem` zone |
| se-kista | t01n14 | Pending | Awaiting migration of all hosts in `se-flem` zone |
| se-kista | t01n15 | Pending | Awaiting migration of all hosts in `se-flem` zone |
| se-kista | t01n16 | Pending | Awaiting migration of all hosts in `se-flem` zone |
| se-kista | t01n22 | Pending | Awaiting migration of all hosts in `se-flem` zone |
| se-kista | t01n23 | Pending | Awaiting migration of all hosts in `se-flem` zone |
26 changes: 26 additions & 0 deletions hugo/content/administration/configuration.md
@@ -0,0 +1,26 @@
# Configuration
This page describes the configuration of the cloud.

## Overview
The cloud is divided into zones that logically group hardware. A zone can host two kinds of clusters: the `sys-cluster` and `deploy-clusters`. The `sys-cluster` is used to manage the `deploy-clusters` and exists only in one zone, `se-flem`, while a `deploy-cluster` is used to deploy applications and VMs and exists in every zone.

The sys-cluster uses K3s and hosts Rancher. Rancher is then used to manage system services such as `console` and `go-deploy`. A deploy-cluster is set up using Rancher and may be extended with `KubeVirt` to deploy VMs.
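Extending a deploy-cluster with KubeVirt follows the upstream installation; a sketch, where the release version is an assumption:

```bash
# Deploy the KubeVirt operator and its custom resource (version is illustrative)
export VERSION=v1.2.0
kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml"
kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml"

# Wait until KubeVirt reports itself as available
kubectl -n kubevirt wait kv kubevirt --for condition=Available --timeout=10m
```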


### se-flem
The `se-flem` zone is located in Flemingsberg and is the main zone of the cloud; as such, it hosts the sys-cluster. This zone also hosts a deploy-cluster with `KubeVirt` enabled.

### se-kista
The `se-kista` zone is located in Kista and is a secondary zone of the cloud. It is not yet set up with the new system; see [the blog post](News/2024-04-14) for more information.

### IP Setup
| Zone | Host CIDR | IPMI CIDR |
|------|--------------|--------------|
| se-flem | 172.31.0.0/16 | 10.17.5.0/24 |
| se-kista | 172.30.0.0/16 | Fill in! |

### IPMI
| Vendor | Username | Password |
|--------|----------|----------|
| Dell | root | calvin |
| Supermicro | ADMIN | ADMIN |
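
These credentials are used with standard IPMI tooling. For example, checking a host's power state with `ipmitool` (the address is illustrative, drawn from the IPMI CIDR above):

```bash
# Query the chassis power state over the IPMI LAN interface
ipmitool -I lanplus -H 10.17.5.10 -U root -P calvin chassis power status
```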
2 changes: 1 addition & 1 deletion hugo/content/archive/configureGpuPassthrough.md
@@ -1,4 +1,4 @@
# Configure GPU Passthrough
# Configure GPU Passthrough (Archived 2024-04-14)
*DO NOT USE THIS - ONLY FOR REFERENCE - ALREADY AUTOMATED IN HOST SETUP SCRIPTS*

This guide is aimed mainly for GPU without vGPU-support, as a
2 changes: 1 addition & 1 deletion hugo/content/archive/configureHost.md
@@ -1,4 +1,4 @@
# Configure a Host
# Configure a Host (Archived 2024-04-14)
*DO NOT USE THIS - ONLY FOR REFERENCE - ALREADY AUTOMATED WITH MAAS*

Host device minimum recommended hardware requirements
93 changes: 93 additions & 0 deletions hugo/content/archive/hostProvisioning.md
@@ -0,0 +1,93 @@
# Host Provisioning (Archived 2024-04-14)

**THIS IS AN OLD GUIDE. PLEASE REFER TO THE NEW GUIDE [HERE](hostProvisioning.md)**

The workflow to provision a new host is mostly automated through
PXE booting with [MaaS](https://maas.io/) together with
[cloud-init](https://cloudinit.readthedocs.io/en/latest/), but some
steps remain manual.
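
Although this guide uses the web UI throughout, the MaaS steps can in principle also be scripted with the MaaS CLI; a hedged sketch, where the profile name, hostname, and MAC address are all illustrative:

```bash
# Log in with an API key generated in the MaaS UI (profile name is illustrative)
maas login admin https://maas.cloud.cbh.kth.se/MAAS/ $API_KEY

# Register a machine by MAC address (mirrors step 3 of this guide)
maas admin machines create \
  hostname=se-flem-020 \
  architecture=amd64/generic \
  mac_addresses=aa:bb:cc:dd:ee:ff \
  power_type=manual
```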

## Installation

This guide goes through the entire workflow to set up a brand-new
host. Keep in mind that credentials to each subsystem used in the guide
are assumed to be available.

### Prerequisites

- Access to the GitHub Admin repository
- Name and FQDN
- Password
- Static IP address
- MaaS Zone (check the available zones in
  [MaaS](https://maas.cloud.cbh.kth.se))
- CloudStack Zone (check the available zones in
  [CloudStack](https://dashboard.cloud.cbh.kth.se); this should match
  the MaaS Zone)
- CloudStack Pod (check the available pods in
  [CloudStack](https://dashboard.cloud.cbh.kth.se))
- CloudStack Cluster (clusters are hardware-homogeneous; create a new
  one if there isn't a match in
  [CloudStack](https://dashboard.cloud.cbh.kth.se/))

### Steps

1. Configure BIOS and find MAC-address
1. Turn on the machine and enter BIOS
2. Go to the network cards in the BIOS to find the network card
that is used.
3. Note the MAC-address (take a photo!)
4. Go to the Boot-order
5. Set the connected network card to be first in the list
6. Turn off the machine
2. Generate a cloud-init file
1. Go to the admin GitHub repository
2. Go to cloud-init folder
3. Run `generate.py` and follow the instructions in the terminal
   (a sketch of the resulting file is shown after this list)
3. Register the machine in MaaS
1. Go to [MAAS](https://maas.cloud.cbh.kth.se)
2. Go to Machines | Add hardware | Machine
3. Enter *Machine name* and *Zone*
4. Enter *MAC address* from BIOS
5. Set *Power type* to Manual (this will be edited in the
   future)
6. Click *Save machine*
7. Refresh the page and ensure that the machine is under the
category *New*
8. (The following steps are necessary until IPMI is fixed)
9. Go to the machine in MaaS
10. Click 'Take action' and 'Abort' the commissioning
11. Click 'Take action' and 'Commission' again, **but with "Skip
configuring supported BMC controllers..."** ticked
12. Click 'Start commissioning for machine'
4. Commission the machine
1. Turn on the machine
2. Wait for it to boot and ensure that it picks up the boot image
from MaaS
3. Wait for the machine to turn itself off
5. Deploy the machine
1. Ensure the machine is turned off
2. Go to the machine in MaaS
3. Go to the 'Network Tab'
4. Tick the connected network card and Click 'Create bridge'
5. Enter 'cloudbr0' in 'Bridge name'
6. Select 'Subnet' for the zone, e.g. 172.31.0.0/16
7. Select 'Static assign' in 'IP mode'
8. Enter the static IP address of the host in 'IP address'
9. Click 'Save interface'
10. Click 'Take action' and 'Deploy'
11. Select the 'Release' and tick 'Cloud-init user-data'
12. Upload or paste the generated cloud-init file
13. Click 'Start deployment for machine'
14. Turn on the machine and wait for MaaS to finish the deployment
15. Refresh and wait for machine to be under the category *Deployed*
6. Verify installation
1. Go to the [dashboard](https://dashboard.cloud.cbh.kth.se)
2. Go to Infrastructure | Hosts
3. Verify that the new machine is present. NOTE! It might take
   some time for it to appear in CloudStack as the machine will
   reboot a couple of times more before it is completely ready
4. Go to the [status page](https://cloud.cbh.kth.se/status)
5. Ensure the host status is visible under *Server statistics*
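
For reference, the user-data produced by `generate.py` (step 2 above) is a cloud-init file. Below is a hypothetical sketch of its general shape; the hostname, user, key, and packages are stand-ins rather than the script's actual output:

```yaml
#cloud-config
# Hypothetical example -- the real file is generated by generate.py
hostname: se-flem-020
fqdn: se-flem-020.cloud.cbh.kth.se
users:
  - name: cloud
    groups: sudo
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... admin@kthcloud
package_update: true
packages:
  - qemu-kvm
  - bridge-utils
```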
5 changes: 0 additions & 5 deletions hugo/content/go-deploy/_index.md

This file was deleted.

207 changes: 0 additions & 207 deletions hugo/content/go-deploy/prepareKubernetesCluster.md

This file was deleted.

Binary file added hugo/content/images/blog/kubevirt_overview.png
