
Single VM Machine Development Environment

Manohar Castelino edited this page Jun 17, 2016 · 19 revisions

Ciao VM/Machine Based Development and Test Environment

Developing cluster software is complicated if you have to actually run a whole cluster on a set of physical machines. This calls for a development environment that is entirely virtual. This page documents a way to set up an entire Ciao cluster inside a single virtual machine. This cluster-in-an-appliance mode is ideal for developers who want to build Ciao from source, make changes, and perform quick end-to-end functional testing without requiring multiple VMs, creating a custom networking environment, or maintaining a bevy of physical machines and a physical network.

In this case Ciao is configured in a special all-in-one development mode in which a node has a dual role (i.e., the launcher acts as a Network Node and a Compute Node at the same time).

High Level Overview

Test Software Appliance

  • A VM running on a NATed virtual network with its own DHCP server (recommended), OR
  • A physical machine.

This setup can also be run on a single physical machine if desired. However, this is not recommended, as the Ciao virtual cluster will send out DHCP requests (on behalf of the CNCIs) to the DHCP server that services the physical network. If that is not an issue (or you are running your own DHCP server), the physical machine will give higher performance.

Requirements on the Appliance

  • 1.5 GB of RAM (for now)
  • 32 GB of disk space
  • VT-x and other host CPU capabilities present on the host CPU (and exposed to the VM)
  • Networking:
    • VM scenario: a NATed virtual network
    • Physical machine scenario: a physical network with a DHCP server capable of serving multiple IPs on the same network port

This NATed virtual network configuration is available by default using virt-manager or virsh on most Linux distributions.

Components running on the Appliance

  1. Controller
  2. Scheduler
  3. Compute+Network Node Agent (i.e. CN + NN Launcher)
  4. Workloads (Containers and VMs)
  5. WebUI
  6. Mock Openstack Services

So the appliance is also the CN, NN, Controller, WebUI and Scheduler (and hosts the mock OpenStack services).

Overview

When the system is functioning, the overall setup manifests as follows.

As you can see below, the CNCI VMs end up on the same network as the appliance. The Tenant VMs are invisible to the network.


                _____________________________________________________________________________________
                |                                                                                     |
                |                                                                                     |
                |                                                                                     |
                |                                   [Tenant VMs]                         [CNCI VMs]   |
                |                                                                           ||        |
                |                                                                           ||        |
                |                                                                           ||        |
                |                                                                           ||        |
                |    [scheduler]  [controller] [keystone] [CN+NN Launcher]                  ||        |
                  __________________________________________________________________________||________|
                                                              ||                            ||
                                                              ||                            ||
           ------------------------------------------------------------------------------------------------
                                                  Host NATed Virtual Network (Or physical network)

Setup

  • Create a VM with hostname ciao-allinone running on a NATed setup (virtual network 192.168.12.0/24)
  • Ensure that VT-x and all other host CPU capabilities are exposed to the VM
virsh list --all
virsh edit <vmname>

Set the CPU mode to

<cpu mode='host-passthrough'>
</cpu>
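Once the VM is up, a quick way to confirm that the virtualization extensions were actually passed through is to count the vmx/svm flags in /proc/cpuinfo (a minimal sketch; vmx and svm cover Intel VT-x and AMD-V respectively):

```shell
# Count CPU entries advertising hardware virtualization support.
# A count of 0 means host-passthrough did not take effect.
count=$(grep -c -E '(vmx|svm)' /proc/cpuinfo || true)
echo "virtualization-capable CPU entries: $count"
```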

Setup Networking

Enable host-to-CNCI communication. This can be done using one of two methods:

1. hairpin mode OR
2. macvlan (recommended)

In the case of a VM appliance, if the bridge on which the NATed network resides supports [hairpin mode](http://man7.org/linux/man-pages/man8/bridge.8.html), hairpin mode can be used.

In the case of a physical appliance, if the switch to which the machine connects supports hairpin mode (very rare), hairpin mode can be used.

Macvlan mode

This mode requires some reconfiguration of the appliance: the primary network interface is moved to a macvlan interface so that the CNCIs and the appliance can talk to each other without the traffic leaving the appliance.

This can be achieved, for example, on Ubuntu 14.04 by setting up your /etc/network/interfaces as follows and rebooting the system.

Note: Please verify that you have network connectivity and the macvlan interface has an IP.

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface on which the macvlan resides
auto eth0
iface eth0 inet static
        address 0.0.0.0
        netmask 0.0.0.0

# Use the macvlan interface to connect to the network
auto macvlan0
iface macvlan0 inet dhcp
        pre-up ip link add link eth0 name macvlan0 type macvlan mode bridge

Hairpin mode

From the host, enable hairpin mode on the port through which your VM attaches to the Linux bridge:

sudo brctl show
sudo brctl hairpin <bridge> <port> on
Note: If you shut down your machine or VM, redo this step; virt-manager will not persist this setting.

Install Go

Install the latest release of Go for your distribution: Installing Go

Install Docker

Install the latest Docker for your distribution based on the instructions from Docker: Installing Docker

Install ciao dependencies

Install the following required packages:

  1. qemu-system-x86_64 and qemu-img, to launch the VMs and create qcow images
  2. xorriso, to create ISO images for cloud-init
  3. fuser, part of most distros' psmisc package
  4. gcc, required to build some of the ciao dependencies

On Clear Linux, all of these dependencies can be satisfied by installing the following bundles:

swupd bundle-add cloud-control go-basic os-core-dev
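On other distributions, you can sanity-check that the tools listed above are on your PATH before proceeding (a minimal sketch; the tool names are taken from the list above):

```shell
# Report which of the required tools are installed.
for tool in qemu-system-x86_64 qemu-img xorriso fuser gcc; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found"
    else
        echo "$tool: MISSING"
    fi
done
```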

Set up passwordless sudo

Set up passwordless sudo for the user.
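One common way to do this is a drop-in file under /etc/sudoers.d (a sketch; `youruser` is a placeholder for the account you develop as, and the file name is arbitrary):

```
# /etc/sudoers.d/ciao-dev  (edit with: sudo visudo -f /etc/sudoers.d/ciao-dev)
youruser ALL=(ALL) NOPASSWD: ALL
```

Editing via visudo -f validates the syntax before the file takes effect.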

Download and build the sources

Download ciao sources

cd $GOPATH/src
go get -v -u github.com/01org/ciao/...
cd $GOPATH/src/github.com/01org/ciao
go install -v --tags 'debug' ./...

You should see no errors.

Verify that ciao is fully functional

  • You can now quickly verify that all aspects of Ciao work, including:

    • VM launch
    • Container launch
    • Networking

    This verification can be done using the script described below or by following the step-by-step instructions.

Automated Verification

The steps below can also be performed in a fully automated manner by using the developer CI test script

cd $GOPATH/src/github.com/01org/ciao/testutil/singlevm
./setup.sh

The output of the script will show:

  • Instances created
  • SSH reachability into VMs
  • Network connectivity between containers

Manual Verification

Alternatively, the steps can be performed manually.

  • Generate single machine certs
cd ~
mkdir local
cd ~/local
ciao-cert -server -role scheduler [email protected] -organization=Intel -host=ciao-allinone -verify 
ciao-cert -role cnciagent -server-cert ./cert-Scheduler-ciao-allinone.pem [email protected] -organization=Intel -host=ciao-allinone -verify 
ciao-cert -role controller -server-cert ./cert-Scheduler-ciao-allinone.pem [email protected] -organization=Intel -host=ciao-allinone -verify 
ciao-cert -role agent,netagent -server-cert ./cert-Scheduler-ciao-allinone.pem [email protected] -organization=Intel -host=ciao-allinone -verify
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout controller_key.pem -out controller_cert.pem
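The openssl command above prompts interactively for the certificate subject. If you prefer a non-interactive run, you can pass the subject on the command line and then inspect the result (a sketch; the subject values here are illustrative placeholders, not something ciao requires):

```shell
# Generate the self-signed HTTPS cert/key pair non-interactively.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -subj "/C=US/O=Intel/CN=ciao-allinone" \
    -keyout controller_key.pem -out controller_cert.pem

# Confirm the subject of the certificate that was just written.
openssl x509 -in controller_cert.pem -noout -subject
```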

  • Generate a CNCI VM based on these certs as follows
cd ~/local
$GOPATH/src/github.com/01org/ciao/networking/ciao-cnci-agent/scripts/generate_cnci_cloud_image.sh -c . -i -d clear-8260-ciao-networking.img
qemu-img convert -f raw -O qcow2 clear-8260-ciao-networking.img clear-8260-ciao-networking.qcow
mkdir -p /var/lib/ciao/images
sudo cp clear-8260-ciao-networking.qcow /var/lib/ciao/images
cd /var/lib/ciao/images
sudo ln -sf clear-8260-ciao-networking.qcow 4e16e743-265a-4bf2-9fd1-57ada0b28904
  • Obtain the Clear Linux cloud image for the workload
LATEST=$(curl https://download.clearlinux.org/latest)
curl -O https://download.clearlinux.org/image/clear-${LATEST}-cloud.img.xz
xz -T0 --decompress clear-${LATEST}-cloud.img.xz
ln -s clear-${LATEST}-cloud.img df3768da-31f5-4ba6-82f0-127a1a705169
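The UUID filename above is simply a symlink to the backing image; the convention can be sketched in isolation like this (the scratch directory and image name are illustrative only, while the UUID is the one used above):

```shell
# Demonstrate the UUID-symlink convention in a scratch directory.
cd "$(mktemp -d)"
touch clear-cloud.img                                 # stand-in for the real image
ln -s clear-cloud.img df3768da-31f5-4ba6-82f0-127a1a705169
readlink df3768da-31f5-4ba6-82f0-127a1a705169         # prints clear-cloud.img
```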
  • Now kick off the scheduler, launcher and controller (here the compute net is the VM's virtual network subnet, 192.168.12.0/24).
cd $GOPATH/src

sudo -E ./ciao-scheduler --cacert=~/local/CAcert-ciao-allinone.pem --cert=~/local/cert-Scheduler-ciao-allinone.pem --heartbeat --alsologtostderr -v 3

sudo ./ciao-launcher --cacert=~/local/CAcert-ciao-allinone.pem --cert=~/local/cert-CNAgent-NetworkingAgent-ciao-allinone.pem --network=dual --compute-net 192.168.12.0/24 --mgmt-net 192.168.12.0/24 --alsologtostderr -v 3 --disk-limit=false

sudo ./ciao-controller --cacert=~/local/CAcert-ciao-allinone.pem --cert=~/local/cert-Controller-ciao-allinone.pem --single --username=csr --password=hello --httpskey=./controller_key.pem --httpscert=./controller_cert.pem -v 3 -alsologtostderr

  • Check the ciao-controller output/logs to determine the URI for the mock keystone
cat /var/lib/ciao/logs/controller/ciao-controller.ERROR
...
E0526 11:15:58.118249    5543 main.go:123] ========================
E0526 11:15:58.118750    5543 main.go:124] Identity URL: http://127.0.0.1:44822
E0526 11:15:58.118774    5543 main.go:125] Please
E0526 11:15:58.118785    5543 main.go:126] export CIAO_IDENTITY=http://127.0.0.1:44822
E0526 11:15:58.118801    5543 main.go:127] ========================
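Rather than eyeballing the log, the identity URL can be pulled out with sed (a sketch based on the log format shown above; the URL is the sample value from that output):

```shell
# Extract the mock keystone URL from a controller log line.
# Against the real log you would run:
#   sed -n 's/.*Identity URL: //p' /var/lib/ciao/logs/controller/ciao-controller.ERROR
line='E0526 11:15:58.118750    5543 main.go:124] Identity URL: http://127.0.0.1:44822'
printf '%s\n' "$line" | sed -n 's/.*Identity URL: //p'   # prints http://127.0.0.1:44822
```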

  • Setup your environment variables for the ciao-cli with the single machine specific values and the identity information from the log file
export CIAO_CONTROLLER=ciao-allinone
export CIAO_USERNAME=admin
export CIAO_PASSWORD=giveciaoatry
export CIAO_IDENTITY=http://127.0.0.1:44822
  • Instances can now be launched using the ciao-cli
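As a final sanity check, you can confirm that all four variables listed above are set in your shell before invoking ciao-cli (a minimal sketch):

```shell
# Print each CIAO_* variable ciao-cli expects, flagging any that are unset.
for var in CIAO_CONTROLLER CIAO_USERNAME CIAO_PASSWORD CIAO_IDENTITY; do
    eval "val=\${$var:-}"
    if [ -n "$val" ]; then
        echo "$var is set"
    else
        echo "$var is NOT set"
    fi
done
```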