
# Single VM Machine Development Environment


## Ciao VM/Machine Based Development and Test Environment

Developing cluster software is complicated if you have to run a whole cluster on a set of physical machines, which calls for a development environment that is entirely virtual. This page documents how to set up an entire Ciao cluster inside a single virtual machine. This cluster-in-an-appliance mode is ideal for developers who want to build Ciao from source, make changes, and perform quick end-to-end functional testing without needing multiple VMs, a custom networking environment, or a bevy of physical machines and a physical network.

In this case Ciao is configured in a special all-in-one development mode where a node has a dual role (i.e. the launcher acts as both a Network Node and a Compute Node at the same time).

## High Level Overview

### Test Software Appliance

  • A VM running on a NATed virtual network with its own DHCP server (recommended), OR
  • A physical machine.

This setup can also be run on a single physical machine if desired. However, this is not recommended, as the ciao virtual cluster will send out DHCP requests (on behalf of the CNCIs) to the DHCP server that services the physical network. If that is not an issue (or you are running your own DHCP server), a physical machine will give higher performance.

### Requirements on the Appliance

  • 1.5 GB of RAM (for now)
  • 32GB of Disk Space
  • VT-x and other host CPU capabilities present on the host CPU (and exposed to the VM)
  • Network connectivity:
    • VM scenario: a NATed virtual network
    • Physical machine scenario: a physical network with a DHCP server capable of serving multiple IPs on the same network port

This NATed virtual network configuration is available by default using virt-manager or virsh on most Linux distributions.
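
For reference, such a NATed libvirt network can be inspected with `virsh net-dumpxml <network>` and, in its minimal form, looks roughly like the following. The name, bridge and addresses below are only an example chosen to match the 192.168.12.0/24 subnet used later in this page; the stock `default` network works just as well:

    <network>
      <name>ciao-nat</name>
      <forward mode='nat'/>
      <bridge name='virbr12' stp='on' delay='0'/>
      <ip address='192.168.12.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.12.2' end='192.168.12.254'/>
        </dhcp>
      </ip>
    </network>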

### Components running on the Appliance

  1. Controller
  2. Scheduler
  3. Compute+Network Node Agent (i.e. CN + NN Launcher)
  4. Workloads (Containers and VMs)
  5. WebUI
  6. Mock OpenStack Services

So the appliance is simultaneously the CN, NN, Controller, WebUI and Scheduler (and hosts the other mock OpenStack services).

## Overview

When the system is functioning, the overall setup manifests as follows:

As you can see below, the CNCI VMs end up on the same network as the appliance, while the Tenant VMs are invisible to that network.


                _____________________________________________________________________________________
                |                                                                                     |
                |                                                                                     |
                |                                                                                     |
                |                                   [Tenant VMs]                         [CNCI VMs]   |
                |                                                                           ||        |
                |                                                                           ||        |
                |                                                                           ||        |
                |                                                                           ||        |
                |    [scheduler]  [controller] [keystone] [CN+NN Launcher]                  ||        |
                  __________________________________________________________________________||________|
                                                              ||                            ||
                                                              ||                            ||
           ------------------------------------------------------------------------------------------------
                                                  Host NATed Virtual Network (Or physical network)

## Setup

  • Create a VM with hostname ciao-allinone running on a NATed setup (virtual network 192.168.12.0/24)
  • Ensure that VT-x and all other host CPU capabilities are exposed to the VM
virsh list --all
virsh edit <vmname>

Set the CPU mode to

<cpu mode='host-passthrough'>
</cpu>

With this setting, the CPU visible to the guest will be exactly the same as the host CPU, including e.g. virtualization capabilities. Without this you cannot use nested virtualization, which is exactly what this single VM relies on when launching instances inside the appliance.
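
To confirm that the virtualization extensions are actually exposed, you can check from inside the appliance VM once it is up (a non-zero count means VT-x/AMD-V is visible to the guest):

    egrep -c '(vmx|svm)' /proc/cpuinfo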

On Fedora, nested virtualization is disabled by default. So before going any further you'll have to enable this feature (a quick way to verify the result is shown after these steps):

  1. rmmod kvm_intel or rmmod kvm_amd
  2. Edit /etc/modprobe.d/kvm.conf to enable nested virtualization. /etc/modprobe.d/kvm.conf should look like this:
    ###
    ### This configuration file was provided by the qemu package.
    ### Feel free to update as needed.
    ###
    
    ###
    ### Set these options to enable nested virtualization
    ###
    
    options kvm_intel nested=1
    options kvm_amd nested=1
    
  3. modprobe kvm_intel or modprobe kvm_amd
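
You can then verify that nesting is enabled on the host (use kvm_amd instead of kvm_intel on AMD systems):

    cat /sys/module/kvm_intel/parameters/nested    # prints Y (or 1) when nesting is enabled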

### Setup Networking

Enable host to CNCI communication. This can be done using one of two methods:

1. hairpin mode OR
2. macvlan (recommended)

In the case of a VM appliance, if the bridge on which the NATed network resides supports [hairpin mode](http://man7.org/linux/man-pages/man8/bridge.8.html), hairpin mode can be used.

In the case of a physical appliance, hairpin mode can be used if the switch to which the machine connects supports it (very rare).

#### Macvlan mode

This mode requires some reconfiguration of the appliance to move the primary network interface to a macvlan interface, ensuring that the CNCIs and the appliance can talk to each other without the traffic exiting the appliance.

This can be achieved, for example, on Ubuntu 14.04 by setting up your /etc/network/interfaces as follows and rebooting the system.

Note: Please verify that you have network connectivity and that the macvlan interface has an IP. Also ensure that the physical interface does not have an IP address.

On Ubuntu:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface on which the macvlan resides
auto eth0
iface eth0 inet static
        address 0.0.0.0
        netmask 0.0.0.0

#Use the macvlan interface to connect to the network
auto macvlan0
iface macvlan0 inet dhcp
        pre-up ip link add link eth0 name macvlan0 type macvlan mode bridge
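
After rebooting, a quick way to perform the check mentioned in the note above (interface names taken from this example):

    ip -4 addr show macvlan0    # should show a DHCP assigned IPv4 address
    ip -4 addr show eth0        # should show no IPv4 address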

On systemd-based distributions (Fedora, RHEL, Debian, etc.):

  1. You need to create a macvlan networking device (netdev):
    # /etc/systemd/network/vmbridge.netdev
    [NetDev]
    Name=vmbridge
    Kind=macvlan
    
    [MACVLAN]
    Mode=bridge
    DHCP=no
    
  2. Configure the macvlan device's interface:
    # /etc/systemd/network/vmbridge.network
    [Match]
    Name=vmbridge
    
    [Network]
    IPForward=yes
    DHCP=yes
    

  3. Let's put our physical interface on the macvlan network. If your physical interface is e.g. `ens3`:
    # /etc/systemd/network/ens3.network
    [Match]
    Name=ens3
    
    [Network]
    MACVLAN=vmbridge
    
  4. Enable systemd-networkd:
    systemctl enable systemd-networkd.service
    
  5. Disable NetworkManager: `systemctl disable NetworkManager`
  6. Reboot
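
After the reboot, you can check that the macvlan device came up and obtained an address (device name `vmbridge` as defined above):

    networkctl status vmbridge
    ip -4 addr show vmbridge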

#### Hairpin mode

From the host, enable hairpin mode on the bridge port to which your VM attaches:

sudo brctl show
sudo brctl hairpin <bridge> <port> on


    Note: If you shut down your machine or VM you will need to redo this, as virt-manager will not persist this setting.
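
On recent distributions the same setting can also be applied with the iproute2 `bridge` tool; the port name below (e.g. vnet0, as reported by `brctl show`) is only an example:

    sudo bridge link set dev vnet0 hairpin on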

### Install Go
Install the latest release of Go for your distribution:
[Installing Go](https://golang.org/doc/install)
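
The remaining steps assume a working GOPATH with $GOPATH/bin on your PATH, for example (paths are only illustrative):

    export GOPATH=$HOME/go
    export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin
    mkdir -p $GOPATH/src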

### Install Docker
Install the latest Docker for your distribution based on the instructions from Docker:
[Installing Docker](https://docs.docker.com/engine/installation/)
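
Once installed, it is worth confirming that the Docker daemon is running and usable, e.g. on systemd-based distributions:

    sudo systemctl start docker
    sudo docker info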


### Install ciao dependencies

Install the following required packages (an example install command is shown after this list):



  1. qemu-system-x86_64 and qemu-img, to launch the VMs and create qcow images
  2. xorriso, to create ISO images for cloud-init
  3. fuser, part of most distros' psmisc package
  4. gcc, required to build some of the ciao dependencies
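
For example, on Ubuntu these can typically be satisfied with (package names vary slightly between distributions):

    sudo apt-get install qemu-system-x86 qemu-utils xorriso psmisc gcc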


On Clear Linux, all of these dependencies can be satisfied by installing the following bundles:

swupd bundle-add cloud-control go-basic os-core-dev

### Setup passwordless sudo

Set up passwordless sudo for the user.
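
One common way to do this (shown only as an example; adapt to your own policy) is to add a sudoers drop-in for your user:

    echo "$USER ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/$USER
    sudo chmod 0440 /etc/sudoers.d/$USER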

### Download and build the sources

Download and build the ciao sources:

cd $GOPATH/src
go get -v -u github.com/01org/ciao/...
cd $GOPATH/src/github.com/01org/ciao
go install -v --tags 'debug' ./...


You should see no errors.
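
If the build succeeded, the ciao binaries used in the following steps should now be present in $GOPATH/bin:

    ls $GOPATH/bin
    # expect to see, among others: ciao-cert ciao-cli ciao-controller ciao-launcher ciao-scheduler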

### Verify that ciao is fully functional

- You can now quickly verify all aspects of Ciao, including:
   - VM Launch
   - Container Launch
   - Networking

   This verification can be done using the script described below or by following the step-by-step instructions.


#### Automated Verification

The steps below can also be performed in a fully automated manner by using 
the developer CI test script

cd $GOPATH/src/github.com/01org/ciao/testutil/singlevm
./setup.sh


The output of the script will show:
- Instances created
- Network traffic between containers
- SSH reachability into VMs
- Network connectivity between containers

#### Manual Verification

Alternatively, the steps can be performed manually.

- Generate single machine certs

cd ~
mkdir local
cd ~/local
ciao-cert -server -role scheduler -email=[email protected] -organization=Intel -host=ciao-allinone -verify
ciao-cert -role cnciagent -server-cert ./cert-Scheduler-ciao-allinone.pem -email=[email protected] -organization=Intel -host=ciao-allinone -verify
ciao-cert -role controller -server-cert ./cert-Scheduler-ciao-allinone.pem -email=[email protected] -organization=Intel -host=ciao-allinone -verify
ciao-cert -role agent,netagent -server-cert ./cert-Scheduler-ciao-allinone.pem -email=[email protected] -organization=Intel -host=ciao-allinone -verify
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout controller_key.pem -out controller_cert.pem


- Generate a CNCI VM based on these certs as follows

cd ~/local
$GOPATH/src/github.com/01org/ciao/networking/ciao-cnci-agent/scripts/generate_cnci_cloud_image.sh -c . -i -d clear-8260-ciao-networking.img
qemu-img convert -f raw -O qcow2 clear-8260-ciao-networking.img clear-8260-ciao-networking.qcow
mkdir -p /var/lib/ciao/images
sudo cp clear-8260-ciao-networking.qcow /var/lib/ciao/images
cd /var/lib/ciao/images
sudo ln -sf clear-8260-ciao-networking.qcow 4e16e743-265a-4bf2-9fd1-57ada0b28904


- Obtain the Clear Linux cloud image for the workload

LATEST=$(curl https://download.clearlinux.org/latest)
curl -O https://download.clearlinux.org/image/clear-${LATEST}-cloud.img.xz
xz -T0 --decompress clear-${LATEST}-cloud.img.xz
ln -s clear-${LATEST}-cloud.img df3768da-31f5-4ba6-82f0-127a1a705169



- Now kick off the scheduler, launcher and controller (here the compute net is the VM's virtual network subnet, 192.168.12.0/24). 

cd $GOPATH/bin

sudo -E ./ciao-scheduler --cacert=/local/CAcert-ciao-allinone.pem --cert=/local/cert-Scheduler-ciao-allinone.pem --heartbeat --alsologtostderr -v 3

sudo ./ciao-launcher --cacert=/local/CAcert-ciao-allinone.pem --cert=/local/cert-CNAgent-NetworkingAgent-ciao-allinone.pem --network=dual --compute-net 192.168.12.0/24 --mgmt-net 192.168.12.0/24 --alsologtostderr -v 3 --disk-limit=false

sudo ./ciao-controller --cacert=/local/CAcert-ciao-allinone.pem --cert=/local/cert-Controller-ciao-allinone.pem --single --username=csr --password=hello --httpskey=./controller_key.pem --httpscert=./controller_cert.pem -v 3 -alsologtostderr


- Check the ciao-controller output/logs to determine the URI for the mock keystone

cat /var/lib/ciao/logs/controller/ciao-controller.ERROR
...
E0526 11:15:58.118249 5543 main.go:123] ========================
E0526 11:15:58.118750 5543 main.go:124] Identity URL: http://127.0.0.1:44822
E0526 11:15:58.118774 5543 main.go:125] Please
E0526 11:15:58.118785 5543 main.go:126] export CIAO_IDENTITY=http://127.0.0.1:44822
E0526 11:15:58.118801 5543 main.go:127] ========================

- Set up your environment variables for the ciao-cli with the single-machine-specific values and the identity information from the log file

export CIAO_CONTROLLER=ciao-allinone
export CIAO_USERNAME=admin
export CIAO_PASSWORD=giveciaoatry
export CIAO_IDENTITY=http://127.0.0.1:44822


- Instances can now be launched using the ciao-cli
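
  For example (the exact ciao-cli subcommands and flags may differ between ciao versions; this is only an assumed illustration, with the workload UUID taken from the workload listing):

    ciao-cli workload list
    ciao-cli instance add --workload <workload-uuid> --instances 1
    ciao-cli instance list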