Initial Commit
Jaskaranbir committed Sep 30, 2018
Commit 6fd7afb (0 parents)
Showing 70 changed files with 2,403 additions and 0 deletions.
3 changes: 3 additions & 0 deletions .gitignore
@@ -0,0 +1,3 @@
.history
.vagrant
.vscode
13 changes: 13 additions & 0 deletions LICENSE
@@ -0,0 +1,13 @@
DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
Version 2, December 2004

Copyright (C) 2004 Sam Hocevar <[email protected]>

Everyone is permitted to copy and distribute verbatim or modified
copies of this license document, and changing it is allowed as long
as the name is changed.

DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION

0. You just DO WHAT THE FUCK YOU WANT TO.
54 changes: 54 additions & 0 deletions README.md
@@ -0,0 +1,54 @@
## High-Availability Multi-Node Kubernetes Cluster
---

A **completely Dockerized**, highly-available, multi-node Kubernetes cluster provisioned using Vagrant/Ansible, based on Kubernetes version **1.12** (still not enough fancy words for one day :smiley:).

**Note**: This is not a production-ready setup. Instead, it is intended as a base/idea for one, for when you need a custom setup (otherwise, [Kubeadm][0] does the job pretty well).

### How Stuff Works

#### Kubernetes

* The setup uses multiple masters and multiple workers (and multiple etcd nodes, of course).

* On the master-node side, everything is ordinary, as you would expect from any regular Kubernetes master.

* On the worker-node side, the master-nodes are load-balanced using HAProxy, so the Kubelet connects to HAProxy's address instead of a specific master (see the kubeconfig sketch below).

* Yes, HAProxy runs on each of the worker-nodes instead of the masters. This is because if a master went down, it would take the load-balancer down with it (not an ideal scenario).

* CNI: [Weave Net][1]

* DNS: [CoreDNS][2]
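
For illustration, a minimal kubelet kubeconfig for this layout could look like the sketch below, assuming the worker-local HAProxy listens on port 6443 (`haproxy_port` in `group_vars/all.yml`). The server address and certificate file names are hypothetical; the real configs and certificates are templated by Ansible into `/etc/kubernetes`.

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: local
    cluster:
      # The kubelet talks to the worker-local HAProxy, which
      # load-balances across all master apiservers.
      # (Hypothetical address and file paths.)
      server: https://127.0.0.1:6443
      certificate-authority: /etc/kubernetes/pki/ca.pem
users:
  - name: kubelet
    user:
      client-certificate: /etc/kubernetes/pki/kubelet.pem
      client-key: /etc/kubernetes/pki/kubelet-key.pem
contexts:
  - name: kubelet-context
    context:
      cluster: local
      user: kubelet
current-context: kubelet-context
```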

#### Vagrant

* Vagrant is simply a convenient way of automatically spinning up a cluster. You can easily configure the instances in `Vagrantfile`.

* Uses VirtualBox.

* Default instance-count:
```
ETCD: 1
Kube-Master: 1
Kube-Worker: 2
```

* The setup is based on a custom-packed, **CoreOS**-based Vagrant image. Image source: [Jaskaranbir/packer_coreos-ansible-python][3]

* Just run `vagrant up`, and it will automatically install and run Ansible and set up a local Kubernetes cluster.

TODO: Improve cluster security.
Suggestions are welcome.

### Ansible Notes

* When adding/removing instances, be sure to also update the Ansible [inventory][4] (a sketch of its expected structure is shown below).

* Ansible copies its templates for manifests/configs to `/etc/kubernetes`, which will contain all Kubernetes resources, including certificates.
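
A minimal sketch of what the YAML inventory could look like with the default instance counts. The `kubernetes-masters` group name comes from `group_vars/all.yml` and the host names follow the Vagrantfile's `%s-%02d` naming; the other group names are only illustrative, and the actual layout lives in [inventory/hosts.yml][4].

```yaml
all:
  children:
    etcd:
      hosts:
        etcd-01:
    kubernetes-masters:
      hosts:
        kube-master-01:
    kubernetes-workers:
      hosts:
        kube-worker-01:
        kube-worker-02:
```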

[0]: https://kubernetes.io/docs/setup/independent/install-kubeadm/
[1]: https://www.weave.works/oss/net/
[2]: https://coredns.io/
[3]: https://github.com/Jaskaranbir/packer_coreos-ansible-python
[4]: https://github.com/Jaskaranbir/k8s_ha_multinode/blob/master/inventory/hosts.yml
235 changes: 235 additions & 0 deletions Vagrantfile
@@ -0,0 +1,235 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :

require 'fileutils'

Vagrant.require_version ">= 1.6.0"

# CoreOS doesn't support vboxsf and guest-additions for VirtualBox,
# so we need to use NFS, and Vagrant NFS doesn't work without these plugins.
plugin_dependencies = [
  "vagrant-winnfsd",
  "vagrant-hostmanager"
]

needsRestart = false

# Install plugins if required
plugin_dependencies.each do |plugin_name|
  unless Vagrant.has_plugin? plugin_name
    system("vagrant plugin install #{plugin_name}")
    needsRestart = true
    puts "#{plugin_name} installed"
  end
end

# Restart vagrant if new plugins were installed
if needsRestart === true
  exec "vagrant #{ARGV.join(' ')}"
end

# Use old vb_xxx config variables when set
def vm_gui
  $vb_gui.nil? ? $vm_gui : $vb_gui
end

def vm_memory
  $vb_memory.nil? ? $vm_memory : $vb_memory
end

def vm_cpus
  $vb_cpus.nil? ? $vm_cpus : $vb_cpus
end

$vm_configs = [
  # Defaults for config options
  etcd_config: {
    num_instances: 1,
    instance_name_prefix: "etcd",
    enable_serial_logging: false,

    vm_gui: false,
    vm_memory: 512,
    vm_cpus: 1,
    vb_cpuexecutioncap: 80,

    user_home_path: "/home/core",
    forwarded_ports: [],
    shared_folders: [
      {
        host_path: "./",
        guest_path: "/vagrant"
      }
    ]
  },

  kube_master_config: {
    num_instances: 1,
    instance_name_prefix: "kube-master",
    enable_serial_logging: false,

    vm_gui: false,
    vm_memory: 2048,
    vm_cpus: 2,
    vb_cpuexecutioncap: 80,

    user_home_path: "/home/core",
    forwarded_ports: [],
    shared_folders: [
      {
        host_path: "./",
        guest_path: "/vagrant"
      }
    ]
  },

  kube_worker_config: {
    num_instances: 2,
    instance_name_prefix: "kube-worker",
    enable_serial_logging: false,

    vm_gui: false,
    vm_memory: 1024,
    vm_cpus: 2,
    vb_cpuexecutioncap: 80,

    user_home_path: "/home/core",
    forwarded_ports: [],
    shared_folders: [
      {
        host_path: "./",
        guest_path: "/vagrant"
      }
    ]
  }
]

Vagrant.configure("2") do |config|
  # Always use Vagrant's insecure key
  config.ssh.insert_key = true
  # Forward ssh agent to easily ssh into the different machines
  config.ssh.forward_agent = false

  # Hostmanager
  config.hostmanager.enabled = true
  config.hostmanager.manage_guest = true
  config.hostmanager.ignore_private_ip = false

  config.vm.box = "jaskaranbir/coreos-ansible"
  config.vm.boot_timeout = 500

  config.vm.provider :virtualbox do |vbox|
    # On VirtualBox, we don't have guest additions or a functional vboxsf
    # in CoreOS, so tell Vagrant that so it can be smarter.
    vbox.check_guest_additions = false
    vbox.functional_vboxsf = false
  end

  # Avoid a conflict with the vagrant-vbguest plugin
  if Vagrant.has_plugin?("vagrant-vbguest") then
    config.vbguest.auto_update = false
  end

  # This keeps track of the total number of instances across all VMs.
  # It is dynamically incremented as the VM configs are iterated.
  vm_num_instances_offset = 0

  # We need to know the total number of instances so that we run Ansible
  # only once, on the last instance.
  total_instances_count = 0
  $vm_configs.each do |vm_config|
    vm_config.each do |_, vc|
      total_instances_count += vc[:num_instances]
    end
  end

  # ================= VM-specific Configurations =================

  $vm_configs.each do |vm_config|
    vm_config.each do |vm_config_name, vc|
      (1..vc[:num_instances]).each do |i|
        config.vm.define vm_name = "%s-%02d" % [vc[:instance_name_prefix], i] do |config|
          vm_num_instances_offset += 1
          config.vm.hostname = vm_name

          # Serial Logging
          if vc[:enable_serial_logging]
            logdir = File.join(File.dirname(__FILE__), "log")
            FileUtils.mkdir_p(logdir)

            serialFile = File.join(logdir, "%s-%s-serial.txt" % [vm_name, vc[:instance_name_prefix]])
            FileUtils.touch(serialFile)

            config.vm.provider :virtualbox do |vb, override|
              vb.customize ["modifyvm", :id, "--uart1", "0x3F8", "4"]
              vb.customize ["modifyvm", :id, "--uartmode1", serialFile]
            end
          end

          # VM hardware resource configuration
          config.vm.provider :virtualbox do |vb|
            vb.gui = vc[:vm_gui]
            vb.memory = vc[:vm_memory]
            vb.cpus = vc[:vm_cpus]
            vb.customize [
              "modifyvm", :id,
              "--cpuexecutioncap", "#{vc[:vb_cpuexecutioncap]}"
            ]
          end

          ip = "172.17.8.#{vm_num_instances_offset + 100}"
          config.vm.network :private_network, ip: ip, auto_correct: true

          # Port Forwarding
          vc[:forwarded_ports].each do |port|
            config.vm.network :forwarded_port,
              host: port[:host_port],
              guest: port[:guest_port],
              auto_correct: true
          end

          # Shared folders
          vc[:shared_folders].each_with_index do |share, i|
            config.vm.synced_folder share[:host_path], share[:guest_path],
              id: "core-share%02d" % vm_num_instances_offset,
              nfs: true,
              mount_options: ['nolock,vers=3,udp']
          end

          # Automatically set current-dir to /vagrant on vagrant ssh
          config.vm.provision :shell,
            inline: "echo 'cd /vagrant' >> #{vc[:user_home_path]}/.bashrc"

          # Ansible 2.6+ works only when the SSH key is protected,
          # so we manually copy the SSH key and set its permissions.
          config.vm.provision :shell,
            privileged: true, inline: <<-EOF
              mkdir -p "#{vc[:user_home_path]}/.ssh"
              cp "/vagrant/.vagrant/machines/#{vm_name}/virtualbox/private_key" "#{vc[:user_home_path]}/.ssh/id_rsa"
              chmod 0400 "#{vc[:user_home_path]}/.ssh/id_rsa"
            EOF

          # Run Ansible provisioning on the last instance, so it's only run once
          if vm_num_instances_offset === total_instances_count
            # Copy ansible directory to enable provisioning
            config.vm.provision :shell,
              inline: "mkdir -p -m777 /ansible",
              privileged: true
            config.vm.provision "file", source: "./", destination: "/ansible"
            # File-provisioner needs full permissions to copy files,
            # but Ansible 2.6+ will not work unless the parent dir is write-protected.
            config.vm.provision :shell,
              inline: "chmod 744 /ansible",
              privileged: true

            config.vm.provision :shell,
              inline: "cd /ansible" \
                " && /opt/bin/active_python/bin/ansible-playbook" \
                " kubernetes.yml -vv",
              privileged: true
          end
        end
      end
    end
  end
end
22 changes: 22 additions & 0 deletions ansible.cfg
@@ -0,0 +1,22 @@
[defaults]
ansible_managed = Please do not modify this file directly as it is managed by Ansible and could be overwritten.
host_key_checking = false
inventory = ./inventory/hosts.yml
remote_user = core
retry_files_enabled = false
timeout = 30

[colors]
changed = yellow
debug = dark gray
deprecate = purple
diff_add = green
diff_lines = cyan
diff_remove = red
error = red
highlight = white
ok = green
skip = cyan
unreachable = red
verbose = blue
warn = bright purple
47 changes: 47 additions & 0 deletions group_vars/all.yml
@@ -0,0 +1,47 @@
# Any generated resources should go into these directories
out_dir: "{{ playbook_dir }}/out"
host_out_dir: "{{ out_dir }}/{{ ansible_host }}"
ca_base_resources_dir: "{{ out_dir }}/ca"

# Repositories
hyperkube_image_repo: "gcr.io/google-containers/hyperkube"
kubernetes_version: "v1.12.0"

# Newly downloaded binaries will be installed in this directory
binary_copy_path: "/opt/bin"
docker_compose_path: "/opt/bin/docker-compose"
docker_path: "/usr/bin"
base_kube_dir: "/etc/kubernetes"

# Config files
kube_config_dir: "{{ base_kube_dir }}/configs"

# Resources manifest-files
manifest_dir: "{{ base_kube_dir }}/manifests"
compose_manifest_dir: "{{ manifest_dir }}/compose"
kube_manifest_dir: "{{ manifest_dir }}/kube"

kube_pki_dir: "{{ base_kube_dir }}/pki"

# Systemd service files location
systemd_service_dir: "/etc/systemd/system"
kube_psp_dir: "{{ kube_addons_dir }}/pod-security-policies"

# Local node-port where HAProxy is running
# as reverse-proxy to the apiserver
haproxy_port: 6443
# The port on master-node where APIServer is running
apiserver_port: 443

dns_service_ip: "10.3.0.10"
dns_domain: "cluster.local"
service_ip_range: "10.3.0.0/24"
kube_service_network: "10.3.0.1"
kube_pod_network: "10.2.0.0/16"
# APIServer used for initiating setup.
# Initially, we need a static apiserver address.
# This static address is replaced by a HAProxy reverse-proxy
# once the addons are set up as required.
first_master_host: "{{ groups['kubernetes-masters'][0] }}"
init_apiserver_address: |
  "https://{{ hostvars[first_master_host]['ansible_env']['COREOS_PUBLIC_IPV4'] }}:{{ apiserver_port }}"