
This repository contains infrastructure Layout components of the Reference System Software


⤴️ Go back to the Reference System Software repository ⤴️

How To

Overview

The integrator's machine is called BASTION in the rest of this installation manual.

Bastion requirements

Infrastructure requirements

  • A publicly available domain name.
    Replace all occurrences of DOMAIN_NAME in the repository with your own domain name.

  • A load balancer listening on a public IP address.
    Configure the load balancer to forward incoming traffic to the cluster masters:

    | Load balancer port | Masters port |
    | ------------------ | ------------ |
    | 80                 | 32080        |
    | 443                | 32443        |
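As an example, the forwarding rules above could be expressed with HAProxy in TCP (passthrough) mode. This is a sketch, not the project's own configuration; the master names and IP addresses are placeholders:

```
frontend http_in
    bind *:80
    mode tcp
    default_backend masters_http

frontend https_in
    bind *:443
    mode tcp
    default_backend masters_https

backend masters_http
    mode tcp
    server master-1 10.0.0.11:32080 check
    server master-2 10.0.0.12:32080 check

backend masters_https
    mode tcp
    server master-1 10.0.0.11:32443 check
    server master-2 10.0.0.12:32443 check
```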
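The DOMAIN_NAME substitution can be done mechanically with grep and sed. A minimal sketch, using a scratch directory and the hypothetical value `example.com` (in the real repository you would run the grep/sed pair from the repo root after cloning):

```shell
# Hypothetical illustration on a scratch directory, not the actual repo layout.
mkdir -p /tmp/infra-demo
printf 'ingress_host: rs.DOMAIN_NAME\n' > /tmp/infra-demo/values.yaml

# Replace every occurrence of the DOMAIN_NAME placeholder with your domain.
grep -rl 'DOMAIN_NAME' /tmp/infra-demo | xargs sed -i 's/DOMAIN_NAME/example.com/g'

cat /tmp/infra-demo/values.yaml
# → ingress_host: rs.example.com
```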

Quickstart

## ON BASTION

# get the infrastructure repository
git clone https://github.com/COPRS/infrastructure.git

cd infrastructure

# install requirements
git submodule update --init
python3 -m pip install --user -r collections/kubespray/requirements.txt
ansible-galaxy collection install \
    kubernetes.core \
    openstack.cloud

# Copy ``inventory/sample`` as ``inventory/mycluster``
cp -rfp inventory/sample inventory/mycluster

# Review and change parameters under ``inventory/mycluster/group_vars`` or ``inventory/mycluster/host_vars``
cat inventory/mycluster/host_vars/localhost/cluster.yaml
cat inventory/mycluster/host_vars/localhost/image.yaml
cat inventory/mycluster/group_vars/all/kubespray.yaml
cat inventory/mycluster/group_vars/bastion/apps.yaml

# If needed create an image for the machines with Packer
ansible-playbook image.yaml \
    -i inventory/mycluster/hosts.ini

# Deploy machines with safescale
ansible-playbook cluster-setup.yaml \
    -i inventory/mycluster/hosts.ini

# Install security services
ansible-playbook security.yaml \
    -i inventory/mycluster/hosts.ini \
    --become

# Deploy kubernetes with Kubespray - run the playbook as root
# The option `--become` is required, for example for writing SSL keys in /etc/,
# installing packages and interacting with various systemd daemons.
# Without --become the playbook will fail to run!
ansible-playbook collections/kubespray/cluster.yml \
    -i inventory/mycluster/hosts.ini \
    --become

# Enable pod security policies on the cluster
# /!\ you first need to create the psp and crb resources
# before enabling the admission plugin
ansible-playbook collections/kubespray/upgrade-cluster.yml \
    -i inventory/mycluster/hosts.ini \
    --tags cluster-roles \
    -e podsecuritypolicy_enabled=true \
    --become

ansible-playbook collections/kubespray/upgrade-cluster.yml \
    -i inventory/mycluster/hosts.ini \
    --tags master \
    -e podsecuritypolicy_enabled=true \
    --become

# Prepare the cluster for Reference System
ansible-playbook rs-setup.yaml \
    -i inventory/mycluster/hosts.ini

# deploy apps
ansible-playbook apps.yaml \
    -i inventory/mycluster/hosts.ini

# Install graylog content packs (optional)
ansible-playbook configure-graylog.yaml \
    -i inventory/mycluster/hosts.ini

TLS configuration

Reference System relies on the APISIX Ingress Controller and Cert Manager for TLS configuration.

You need to create an issuer and a certificate for your domain name with Cert Manager.

APISIX does not work with Cert Manager for ACME HTTP01 challenges (#781).
You must use the DNS01 challenge to generate a Let's Encrypt certificate. The configuration is detailed in the Cert Manager documentation.
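A minimal sketch of such a DNS01 setup is shown below. The issuer name, email address, and the Cloudflare solver are assumptions for illustration; pick the solver matching your DNS provider, as described in the Cert Manager documentation:

```yaml
# Hypothetical sketch: a Let's Encrypt ClusterIssuer solving DNS01 challenges
# via Cloudflare, plus a wildcard Certificate for the platform's domain.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns01           # assumed name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@DOMAIN_NAME        # assumption: replace with your address
    privateKeySecretRef:
      name: letsencrypt-dns01-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-cert               # assumed name
spec:
  secretName: wildcard-cert-tls
  dnsNames:
    - "DOMAIN_NAME"
    - "*.DOMAIN_NAME"
  issuerRef:
    name: letsencrypt-dns01
    kind: ClusterIssuer
```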

Dependencies

This project relies on Kubespray to deploy Kubernetes.
The fully detailed documentation and configuration options are available on its page: https://kubespray.io/

Tree view

The repository is made of the following main directories.

  • apps
  • doc
  • platform

Apps

This folder gathers the configuration of the applications deployed on the platform.
Each application has its own folder inside apps with the values of the Helm chart, the kustomization files, the related patches, and any additional Kubernetes resources.
The application's directory can be split by environment with subfolders like dev, prod, etc.
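For illustration, a per-environment overlay might look like the following kustomization file. The application name, paths, and patch file are hypothetical, not taken from the repository:

```yaml
# Hypothetical apps/myapp/prod/kustomization.yaml: a prod overlay that reuses
# the app's base manifests and applies an environment-specific patch.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base
patches:
  - path: replicas-patch.yaml
```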

Doc

Here we find all the documentation describing the infrastructure deployment and maintenance operations.

Platform

This directory concentrates what is required to deploy the infrastructure with Ansible.

  • collections/kubespray: folder where kubespray is integrated into the project as a git submodule.
    • cluster.yml: the Ansible playbook to run to deploy Kubernetes.
  • inventory:
    • sample: Ansible inventory for sample configuration.
      • group_vars:
        • all/kubespray.yaml: kubespray configuration.
        • bastion/app_installer.yaml: application installer configuration: helm version, repositories, applications directory paths
      • host_vars/localhost: safescale cluster configuration and image configuration.
      • hosts.ini: host inventory.
  • playbooks: list of Ansible playbooks to run to deploy the platform.
    • clean.yaml: remove the files generated by the different playbooks, delete the cluster and remove the volumes.
    • cluster-setup.yaml: deploy the network, the machines and the volumes with safescale.
    • image.yaml: build the image used to create the machines.
    • rs-setup.yaml: prepare the necessary resources for the platform.
    • apps.yaml: deploy the applications.
    • security.yaml: deploy the security services.
  • roles: list of roles used to deploy the cluster.
    • security: roles describing the installation of the different security tools.
  • ansible.cfg: Ansible configuration file. It includes the ssh configuration to allow Ansible to access the machines through the gateway.
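As a sketch of what such a configuration can look like (paths and options below are illustrative, not copied from the repository), Ansible can be pointed at a project-local ssh config whose entries jump through the gateway:

```ini
; Hypothetical ansible.cfg fragment: route every connection through a dedicated
; ssh config, which in turn proxies nodes via the gateway, e.g. with an entry
; like:
;   Host node-*
;       ProxyJump centos@gateway.DOMAIN_NAME
[ssh_connection]
ssh_args = -F ./.ssh/config -o ControlMaster=auto -o ControlPersist=30m
```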

Playbooks

| name | tags | utility |
| --- | --- | --- |
| cluster-setup.yaml | none | all tags below are executed |
| | cluster_create | create safescale cluster |
| | hosts_update | update hosts.ini with newly created machines, fill .ssh folder with machines ssh public keys, generate ansible ssh config, update config.cfg |
| | volumes_create | attach disks to kubernetes nodes |
| delete.yaml<br>⚠️ this playbook has been developed with the only purpose of testing the project, not for production usage | none | nothing |
| | cleanup_generated | remove ssh keys, added hosts in hosts.ini, ssh config file |
| | detach_volumes | detach added disks from k8s nodes |
| | delete_volumes | delete added disks from k8s nodes |
| | delete_cluster | delete safescale cluster |
| image.yaml | none | make reference system golden image for k8s nodes |
| rs-setup.yaml | none | all tags below are executed |
| | gateway | install tools on gateways |
| | apps | configure the cluster |
| apps.yaml | none | deploy applications (adding `-e name=APP_NAME` deploys only the app matching APP_NAME) |
| security.yaml | none | install all security tools |
| | auditd | install auditd |
| | wazuh | install wazuh |
| | clamav | install clamav |
| | openvpn | install openvpn |
| | suricata | install suricata |
| | uninstall_APP_NAME | uninstall the app matching APP_NAME |

Apps

Configurations proposed by default:
