Samuel Ortiz edited this page Jun 13, 2016 · 23 revisions

#Cluster Configuration

In order to run properly, a typical cluster needs to be provisioned with configuration data. Service URLs, credentials, back-end and component properties, TLS certificate paths or service availability are only a few examples of the configuration data a cluster may need to access in order to operate at all.

In clusters where the software architecture is distributed and fragmented, the overall configuration data set is spread across many components on many different machines. Provisioning such clusters proves challenging and adds one more hurdle to easy deployment and installation.

Ciao aims to be as simple to deploy and set up as possible, and leverages its integrated architecture to build a minimal and centralized configuration framework.

#Design Goals

The ciao configuration architecture has two design goals:

  1. Minimization: The complete configuration data set should be as small as possible.
  2. Centralization: The configuration data should be provisioned in a single entity, on a single physical machine.

#Architecture Overview

Ciao's cluster configuration is stored in, and fetched from, a cluster specific storage back-end with which the ciao configuration package interacts. Supported back-ends are a plain local file, etcd [WIP] and ZooKeeper [WIP]. In order to configure a ciao cluster, one only needs to provision the configuration back-end with the right cluster configuration data. This data is then propagated to all ciao components by ciao-scheduler.

The ciao-scheduler is the only component in a ciao cluster that interacts with the configuration back-end, by calling into the configuration package API. As a consequence, it needs to be given a configuration back-end URI through its `-configuration-uri` command line option. The default value is `file:///etc/ciao/configuration.yaml`.
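The scheme portion of that URI selects the storage back-end driver, while the rest locates the configuration data. As an illustrative sketch (not the actual ciao code), Go's standard net/url package splits the default URI this way:

```go
package main

import (
	"fmt"
	"net/url"
)

// parseConfigURI splits a configuration URI into a back-end scheme
// (e.g. "file", "etcd", "zookeeper") and a location path. This is a
// hypothetical helper, not part of the ciao configuration package.
func parseConfigURI(uri string) (scheme, path string, err error) {
	u, err := url.Parse(uri)
	if err != nil {
		return "", "", err
	}
	return u.Scheme, u.Path, nil
}

func main() {
	scheme, path, err := parseConfigURI("file:///etc/ciao/configuration.yaml")
	if err != nil {
		panic(err)
	}
	fmt.Println(scheme) // file
	fmt.Println(path)   // /etc/ciao/configuration.yaml
}
```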

The scheduler is responsible for two configuration tasks:

  1. Propagating the configuration data to all its clients (launchers, controllers, CNCI) so that they can configure themselves as soon as they successfully connect to the scheduler.
    • SSNTP clients that depend on cluster specific configuration data should initialize themselves after successfully connecting to the scheduler.
    • The scheduler initially fetches the configuration data by calling configuration.ExtractBlob() and then adds it to every CONNECTED SSNTP frame it sends to successfully connected clients.
  2. Accepting, validating and storing configuration changes coming from ciao-controller.
    • A cluster privileged operator may need to reconfigure the ciao cluster and should be able to update the configuration data.
    • The controller sends configuration updates through the CONFIGURE SSNTP command.
    • Invalid configuration data updates will not be propagated.
    • Valid configuration data updates will be stored by the configuration package in the configuration storage, and then propagated to all the scheduler clients.
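To make the validation step concrete, here is a minimal, purely illustrative sketch of the kind of check a scheduler could perform before storing and propagating an update. The payload fields and the validate function are assumptions drawn from the YAML schema described below, not the actual ciao configuration package API:

```go
package main

import (
	"errors"
	"fmt"
)

// payload mirrors a few fields of the cluster configuration data.
// The field set is illustrative only.
type payload struct {
	StorageType     string
	IdentityURL     string
	ImageServiceURL string
}

// validate sketches the gatekeeping described above: invalid
// configuration updates are rejected and never propagated.
func validate(p payload) error {
	switch p.StorageType {
	case "file", "etcd", "zookeeper":
	default:
		return errors.New("unknown storage type: " + p.StorageType)
	}
	if p.IdentityURL == "" || p.ImageServiceURL == "" {
		return errors.New("missing service URL")
	}
	return nil
}

func main() {
	good := payload{
		StorageType:     "file",
		IdentityURL:     "https://ciao-cluster:5000",
		ImageServiceURL: "https://ciao-cluster:9292",
	}
	bad := payload{StorageType: "nfs"}
	fmt.Println(validate(good)) // <nil>
	fmt.Println(validate(bad))
}
```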

##Configuration data propagation

While the ciao configuration data is centrally stored and handled by the scheduler, it needs to be propagated to all ciao components, as they have no other access to any piece of configuration data.

The configuration data is propagated through two SSNTP frames:

  1. The CONNECTED status frame, generated and sent by the scheduler upon successful client connection. The CONNECTED payload is marshalled YAML representing the configuration data. After successfully connecting to the scheduler, SSNTP clients then call the ssntp.Client.ClusterConfiguration() API to fetch the actual configuration payload, parse it and initialize themselves.
  2. The CONFIGURE command frame, which carries the same payload as the CONNECTED frame. It can be generated and sent by two components:
    • The controller, when it wants to announce a cluster configuration change.
    • The scheduler, which receives the CONFIGURE payload from the controller on a cluster configuration change. After validating and storing those changes, it generates CONFIGURE frames to be sent to all its connected clients.
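Since a client can therefore receive the payload more than once (CONNECTED at connection time, CONFIGURE on later updates), it only ever needs to keep the most recent copy. A minimal sketch of that behaviour, with hypothetical names (this is not the ssntp package):

```go
package main

import (
	"fmt"
	"sync"
)

// configCache keeps only the most recently received configuration
// payload; concurrent frame handlers may update and read it.
type configCache struct {
	mu      sync.RWMutex
	payload []byte
}

// Set would be called by the frame handler for both the CONNECTED
// and CONFIGURE frames.
func (c *configCache) Set(p []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.payload = p
}

// Get plays the role of ssntp.Client.ClusterConfiguration(): it
// always returns the last received configuration data.
func (c *configCache) Get() []byte {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return c.payload
}

func main() {
	var c configCache
	c.Set([]byte("initial CONNECTED payload"))
	c.Set([]byte("later CONFIGURE payload"))
	fmt.Println(string(c.Get())) // later CONFIGURE payload
}
```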

#Configuration Data

The ciao configuration data is a YAML serialized blob that holds configuration for the scheduler, controller and launcher, and for several OpenStack services that ciao interacts with (e.g. Keystone, Glance or Cinder):

    configure:
      scheduler:
        storage_type: string [file, etcd, zookeeper]
        storage_uri: string [The storage URI path]
      controller:
        compute_port: int
        compute_ca: string [The HTTPS compute endpoint CA]
        compute_cert: string [The HTTPS compute endpoint private key]
        identity_user: string [The identity (e.g. Keystone) user]
        identity_password: string [The identity (e.g. Keystone) password]
      launcher:
        compute_net: [string, string...] [The launcher compute network sub-nets list]
        mgmt_net: [string, string...] [The launcher management network sub-nets list]
        disk_limit: bool
        mem_limit: bool
      image_service:
        type: string [The image service type, e.g. glance]
        url: string [The image service URL]
      identity_service:
        type: string [The identity service type, e.g. keystone]
        url: string [The identity service URL]

All ciao clients receive the complete YAML payload through either the CONNECTED or the CONFIGURE SSNTP frame. If they need it to configure or re-configure themselves, they call into the ssntp.Client.ClusterConfiguration() API to fetch a configuration Go structure. This call always gives them the most recently received configuration data.
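The Go structure returned by that call plausibly mirrors the YAML schema above. The types below are a hand-written approximation with yaml struct tags, shown only for illustration; the real type lives in ciao's own packages and may differ:

```go
package main

import "fmt"

// These types approximate the configuration schema above. In a real
// client they would be filled in by unmarshalling the YAML payload
// carried in the CONNECTED/CONFIGURE frames.
type schedulerConfig struct {
	StorageType string `yaml:"storage_type"`
	StorageURI  string `yaml:"storage_uri"`
}

type launcherConfig struct {
	ComputeNet []string `yaml:"compute_net"`
	MgmtNet    []string `yaml:"mgmt_net"`
	DiskLimit  bool     `yaml:"disk_limit"`
	MemLimit   bool     `yaml:"mem_limit"`
}

type clusterConfig struct {
	Scheduler schedulerConfig `yaml:"scheduler"`
	Launcher  launcherConfig  `yaml:"launcher"`
}

func main() {
	// Hand-constructed here purely for illustration.
	cfg := clusterConfig{
		Scheduler: schedulerConfig{
			StorageType: "file",
			StorageURI:  "/etc/ciao/configuration.yaml",
		},
		Launcher: launcherConfig{
			ComputeNet: []string{"192.168.1.0/24"},
			MemLimit:   true,
		},
	}
	fmt.Println(cfg.Scheduler.StorageType, cfg.Launcher.MemLimit)
}
```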

##Launcher configuration

The launcher specific configuration section:

    launcher:
      compute_net: [string, string...] [The launcher compute network sub-nets list]
      mgmt_net: [string, string...] [The launcher management network sub-nets list]
      disk_limit: bool
      mem_limit: bool

is used by the ciao-launcher code to configure its memory and disk limits, but also to set up the launcher networking.
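The sub-net entries are plain CIDR strings, so a launcher-side consumer can validate them with Go's standard net package. A hedged sketch (hypothetical helper, not the ciao-launcher implementation):

```go
package main

import (
	"fmt"
	"net"
)

// parseSubnets validates a list of CIDR strings, such as the
// compute_net or mgmt_net entries, and returns the parsed networks.
func parseSubnets(cidrs []string) ([]*net.IPNet, error) {
	nets := make([]*net.IPNet, 0, len(cidrs))
	for _, c := range cidrs {
		_, n, err := net.ParseCIDR(c)
		if err != nil {
			return nil, err
		}
		nets = append(nets, n)
	}
	return nets, nil
}

func main() {
	nets, err := parseSubnets([]string{"192.168.1.0/24"})
	if err != nil {
		panic(err)
	}
	fmt.Println(nets[0].String()) // 192.168.1.0/24
}
```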

##Controller configuration

The controller specific section:

    controller:
      compute_port: int
      compute_ca: string [The HTTPS compute endpoint CA]
      compute_cert: string [The HTTPS compute endpoint private key]
      identity_user: string [The identity (e.g. Keystone) user]
      identity_password: string [The identity (e.g. Keystone) password]

is used by the ciao-controller code to start the compute HTTPS endpoints, but also to interact with an OpenStack identity service.

#Storage Back-ends

The ciao configuration package only implements the logic for fetching, storing, validating and manipulating configuration data. It does not implement the physical configuration storage itself, but instead relies on configuration storage back-end drivers.

Currently supported configuration storage back-ends are:

  • Local file: The local file storage URI should follow the file://[absolute path to the configuration file] scheme.
  • Etcd [WIP]
  • ZooKeeper [WIP]

##Local file

The currently supported local file storage back-end driver stores and fetches a local YAML file that contains the configuration data. The default URI for this file is file:///etc/ciao/configuration.yaml.

    ciao@ciao-cluster ~ $ cat /etc/ciao/configuration.yaml
    configure:
      scheduler:
        storage_type: file
        storage_uri: /etc/ciao/configuration.yaml
      controller:
        compute_ca: /etc/pki/ciao/csr_cert.pem
        compute_cert: /etc/pki/ciao/csr_key.pem
        identity_user: controller
        identity_password: hello
      launcher:
        compute_net: [192.168.1.0/24]
        mgmt_net: [192.168.1.0/24]
        disk_limit: false
        mem_limit: true
      identity_service:
        type: keystone
        url: https://ciao-cluster:5000
      image_service:
        type: glance
        url: https://ciao-cluster:9292

##Etcd

Work in progress - TBD

##ZooKeeper

Work in progress - TBD