
How to run CAP cached VM


Download VM image

# Save this file to somewhere libvirt can access it
wget https://s3.amazonaws.com/cf-opensusefs2/vagrant/cap.scf-opensuse-2.8.0.cf1.15.0.0.g5aa8b036.console-1.1.0.qcow2

Import and Start VM

It's usually a good idea to create a disk clone to start from, so you don't modify the provided VM image itself.

# An example of using an images storage directory in your home directory
# to create a disk image based on the image you downloaded.
#
# Providing the absolute path to the original file is required.
qemu-img create -b ~/images/cap.scf-opensuse-2.8.0.cf1.15.0.0.g5aa8b036.console-1.1.0.qcow2 -f qcow2 ~/images/scf-ephemeral.qcow2
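
You can check that the clone really is backed by the original image:

# qemu-img info reports the format and the backing file path
qemu-img info ~/images/scf-ephemeral.qcow2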

# Ensure the network has been started:
virsh net-start default

# Start the instance
# Adjust the amount of RAM depending on your host system. 10G is the minimum.
virt-install --connect=qemu:///system --name=scf --ram=$((16*1024)) --vcpus=2 --disk path=~/images/scf-ephemeral.qcow2,format=qcow2 --import
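
Once virt-install returns, you can confirm the domain is up:

# List all defined domains and their current state
virsh --connect=qemu:///system list --all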

Logging in

NAT Mode

sudo virsh domifaddr <name-of-vm>

If the domifaddr command isn't available, an alternate way to find the IP is to get the MAC address, then look in the DHCP lease information:

MAC=$(virsh dumpxml scf | grep "mac address" |sed "s/.*'\(.*\)'.*/\1/g")
grep ${MAC} /var/lib/libvirt/dnsmasq/default.leases

Bridged Mode

Bridged mode is not recommended because of the complexity of setting up port forwarding for the various components. If you do use bridged mode, libvirt won't know the VM's IP, so you'll have to log in to the VM via a console and find the IP with ip -4 -o a instead.
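
A minimal sketch of that console approach, assuming the guest has a serial console enabled (exit the console with Ctrl+]):

# Attach to the VM's console and log in with the credentials below
virsh console scf

# Then, inside the guest:
ip -4 -o a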

Connecting via SSH

ssh scf@<ip-address>

Credentials are:

username: scf
password: changeme

Creating storage class

echo '{"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"persistent"},"provisioner":"kubernetes.io/host-path"}' | kubectl create -f -

Configuration

Put the following configuration in values.yaml, making sure to replace every occurrence of VMIPADDRESS with the address of the VM (a substitution sketch follows the file):

secrets:
    # Password for the cluster
    CLUSTER_ADMIN_PASSWORD: changeme

    # Password for SCF to authenticate with UAA
    UAA_ADMIN_CLIENT_SECRET: uaa-admin-client-secret
env:
    # Domain for SCF. DNS for *.DOMAIN must point to a kube node's (not
    # the master's) external IP. This must match the value passed to the
    # cert-generator.sh script.
    DOMAIN: VMIPADDRESS.nip.io

    # UAA host/port that SCF will talk to. If you have a custom UAA,
    # provide its host and port here. If you are using the UAA that comes
    # with the SCF distribution, simply use the two values below,
    # substituting the DOMAIN used above for cf-dev.io.
    UAA_HOST: uaa.VMIPADDRESS.nip.io
    UAA_PORT: 2793
kube:
    # The IP address assigned to the kube node pointed to by the domain. The example
    # value here is what the vagrant setup assigns; you will likely need to change it.
    external_ips:
    - VMIPADDRESS
    storage_class:
        # Make sure to change the value here to whatever storage class you use
        persistent: persistent
    # The next line is needed for CaaSP 2, but should _not_ be there for CaaSP 1
    auth: rbac
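
If you'd rather not edit the file by hand, here is a substitution sketch; the address shown is only an example, use the VM address you found earlier:

# Example NAT address; substitute the address reported by virsh domifaddr
VM_IP=192.168.122.100
# Replace every VMIPADDRESS placeholder in values.yaml in place
sed -i "s/VMIPADDRESS/${VM_IP}/g" values.yaml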

Installing UAA

helm install helm/uaa-opensuse/ -n uaa --namespace uaa --values values.yaml

Ensure UAA is ready (1/1) before continuing; kubectl get pods --namespace uaa can show you this.
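
One way to wait is to poll the namespace (a sketch using plain kubectl):

# Re-run the pod listing every 10 seconds until READY shows 1/1
watch -n 10 'kubectl get pods --namespace uaa'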

Copying the cert

# Put the CA cert in a variable for the next step
# Use kubectl get secrets --namespace uaa to find the VERSION and REVISION to substitute below
CA_CERT="$(kubectl get secret secrets-VERSION-REVISION --namespace uaa -o jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
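
A quick sanity check that the decode worked:

# The first line of the decoded value should be the PEM header,
# i.e. -----BEGIN CERTIFICATE-----
echo "${CA_CERT}" | head -n 1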

Installing CF

helm install helm/cf-opensuse/ -n scf --namespace scf --values values.yaml --set "secrets.UAA_CA_CERT=${CA_CERT}"
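
As with UAA, wait until all pods in the scf namespace are ready before continuing:

watch -n 10 'kubectl get pods --namespace scf'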

Installing Web UI

helm install console --name console --namespace ui --values values.yaml

Connecting to SCF

Note: this only works from within the VM, or from the host running the VM when using NAT. External access is only possible with a bridged address or some custom SSH tunnelling.

cf api --skip-ssl-validation https://api.VMIPADDRESS.nip.io
cf login -u admin -p changeme
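
Once logged in, a quick smoke test using standard cf CLI commands (the org and space names are arbitrary):

# Create and target an org and space so there's somewhere to push apps
cf create-org demo
cf create-space demo -o demo
cf target -o demo -s demo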

Connecting to UI

Note: this only works from within the VM, or from the host running the VM when using NAT. External access is only possible with a bridged address or some custom SSH tunnelling (a sketch of the latter follows).
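
One possible tunnelling approach is a SOCKS proxy over SSH; this is only a sketch, where user@vm-host is the machine hosting the VM and the browser is then configured to use localhost:1080 as a SOCKS proxy:

# Dynamic (SOCKS) port forwarding through the VM host
ssh -D 1080 user@vm-host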

First we need to get the port the UI is running on:

# Retrieve the list of services for the UI
kubectl get svc --namespace ui

# Sample output
# NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
# console-mariadb   ClusterIP   10.254.30.178   <none>        3306/TCP         3m
# console-ui-ext    NodePort    10.254.30.135   10.9.170.14   8443:31174/TCP   3m

Look for the console-ui-ext service and note its node port and external IP. In this example, the address to enter into a web browser can be built as follows:

CONSOLE_PORT=$(kubectl get svc console-ui-ext --namespace ui -o jsonpath='{.spec.ports[0].nodePort}')
CONSOLE_IP=$(kubectl get svc console-ui-ext --namespace ui -o jsonpath='{.spec.externalIPs[0]}')
echo https://${CONSOLE_IP}:${CONSOLE_PORT}

Ports for IaaS use

TCP Ports: 22, 80, 443, 2222, 2793, 4443, 20000-20008