paulczar committed Jun 20, 2017
0 parents commit f87ded6
Showing 906 changed files with 40,236 additions and 0 deletions.
12 changes: 12 additions & 0 deletions .gitignore
@@ -0,0 +1,12 @@
*.pyc
*.vdi
.vagrant
.tox
build
*.DS_Store
*-openrc.sh
*.retry
ursula.log
elk-stats
.ssh_config
*.log
34 changes: 34 additions & 0 deletions CONTRIBUTORS.md
@@ -0,0 +1,34 @@
# CONTRIBUTORS

The following people contributed to this project before it was open-sourced
and its history was removed to protect the innocent. If you feel you should be
added to this list, please PR it.

## Leads

* Paul Czarkowski
* Myles Steinhauser

## Core Team

* Craig Tracey
* Tom Spoonemoore
* Josh Yotty
* Michael Sambol
* Zach Sais
* Brian Richardson

## Extended Team

* Jesse Keating
* Tim Chavez
* Nicola Heald

## Others

* Ryan Miller
* Terry Penner
* Priya Ingle
* Leslie Lundquist
* Dustin Lundquist
* Jesse Keating
13 changes: 13 additions & 0 deletions LICENSE.md
@@ -0,0 +1,13 @@
Copyright 2017 IBM

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
127 changes: 127 additions & 0 deletions README.md
@@ -0,0 +1,127 @@
# Cuttle

_Originally called Site Controller (sitectl); pronounced "Cuddle"._

_insert logo of a squid/cuttlefish cuddling a server_

A Monolithic Repository of Composable Ansible Roles for building an SRE Operations Platform.

Originally built by the BlueBox Cloud team to install the infrastructure required to build and
support OpenStack clouds using [Ursula](http://github.com/blueboxgroup/ursula), it quickly grew into
a larger project for enabling SRE operations both in the datacenter and in the cloud.

Like [Ursula](http://github.com/blueboxgroup/ursula), Cuttle uses
[ursula-cli](https://github.com/blueboxgroup/ursula-cli) (installed via `requirements.txt`)
to run Ansible against specific environments.

For a rough idea of how Blue Box uses Cuttle to build central and remote sites
tethered together with IPsec VPNs, check out [architecture.md](architecture.md).

You will see a number of example Ansible inventories in `envs/example/` that
show Cuttle being used to build infrastructure for a number of problems.
`envs/example/sitecontroller` shows close to a full deployment, whereas
`envs/example/mirror` and `envs/example/elk` build just specific components.
All of these environments can easily be deployed in Vagrant using `ursula-cli`
(see [Example Usage](#example-usage)).

How to Contribute
-----------------

See [CONTRIBUTORS.md](CONTRIBUTORS.md) for the original team.

The official git repository of Site Controller is https://github.com/IBM/cuttle.
If you have cloned this from somewhere else, may god have mercy on your soul.

### Workflow

We follow the standard github workflow of Fork -> Branch -> PR -> Test -> Review -> Merge.
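
For example, a typical change might look like this (the fork URL and branch
name here are placeholders):

```
# clone your fork and cut a feature branch
$ git clone git@github.com:<your-github-user>/cuttle.git
$ cd cuttle
$ git checkout -b my-fix
# hack, commit, then push and open a PR against IBM/cuttle
$ git push -u origin my-fix
```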

The Site Controller Core team is working to put together guidance on contributing and
governance now that it is an open source project.

Development and Testing
-----------------------

### Build Development Environment

```
# clone this repo
$ git clone git@github.com:ibm/cuttle.git
# install pip, hopefully your system has it already
# install virtualenv
$ pip install virtualenv
# create a new virtualenv so python is happy
$ virtualenv --no-site-packages --no-wheel ~/<username>/venv
# activate your new venv like normal
$ source ~/<username>/venv/bin/activate
# install ursula-cli, the correct version of ansible, and all other deps
$ cd cuttle
$ pip install -r requirements.txt
# run ansible using ursula-cli; or ansible-playbook, if that's how you roll
$ ursula envs/example/<your env> site.yml
# deactivate your virtualenv when you are done
$ deactivate
```

[Vagrant](https://www.vagrantup.com/) is our preferred Development/Testing framework.

### Example Usage

ursula-cli understands how to interact with vagrant using the `--provisioner` flag:

```
$ ursula --provisioner=vagrant envs/example/sitecontroller bastion.yml
$ ursula --provisioner=vagrant envs/example/sitecontroller site.yml
```
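
The Vagrant environment itself is driven by a `vagrant.yml` settings file (the
Vagrantfile prefers `.vagrant/vagrant.yml` if it exists, falling back to
`vagrant.yml` in the repository root). A minimal sketch of that file, with
hostnames, IP addresses, and sizes purely illustrative:

```
$ cat > vagrant.yml <<'EOF'
default:
  box_name: ubuntu-trusty
  box_url: http://opscode-vm-bento.s3.amazonaws.com/vagrant/virtualbox/opscode_ubuntu-14.04_chef-provisionerless.box
  memory: 1024
  cpus: 1
  gui: false
vms:
  bastion:
    ip_address: 172.16.0.10
  elk:
    # ip_address may be a single address or a list
    ip_address:
      - 172.16.0.11
      - 172.16.0.12
    memory: 2048
ansible:
  groups:
    bastion:
      - bastion
    elk:
      - elk
EOF
```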

### Tardis and Heat

You can also test in Tardis with Heat Orchestration. First, grab your stackrc file from Tardis:

`Project > Compute > Access & Security > Download OpenStack RC File`

Ensure your `ssh-agent` is running, then source your stackrc and run the play:
```
$ source <username>-openrc.sh
$ ursula --ursula-forward --provisioner=heat envs/example/sitecontroller site.yml
```

Add argument `--ursula-debug` for verbose output.
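
For example, the same Heat-provisioned run with verbose output enabled:

```
$ ursula --ursula-debug --ursula-forward --provisioner=heat envs/example/sitecontroller site.yml
```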

### Run behind a docker proxy for local dev

```
$ docker run \
--name proxy -p 3128:3128 \
-v $(pwd)/tmp/cache:/var/cache/squid3 \
-d jpetazzo/squid-in-a-can
```

Then set the following in your inventory (`vagrant.yml` in `envs/example/*/`):

```
env_vars:
  http_proxy: "http://10.0.2.2:3128"
  https_proxy: "http://10.0.2.2:3128"
  no_proxy: localhost,127.0.0.0/8,10.0.0.0/8,172.0.0.0/8
```
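
Before running a play, you can sanity-check that squid is answering by sending
a request through the published port on your host (the target URL is arbitrary;
`10.0.2.2` is how a VirtualBox guest reaches that same host port):

```
# expect an HTTP status line back from the proxy
$ curl -sI -x http://127.0.0.1:3128 http://archive.ubuntu.com/ | head -n 1
```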

Deploying
---------

To actually deploy an environment, you would use ursula-cli like so:

```
$ ursula ../sitecontroller-envs/sjc01 bastion.yml
$ ursula ../sitecontroller-envs/sjc01 site.yml
# targeted runs using any ansible-playbook option
$ ursula ../ursula-infra-envs/sjc01 site.yml --tags openid_proxy
```
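
Since options after the playbook name are handed through to ansible-playbook,
other flags such as `--limit` work the same way (host pattern illustrative):

```
$ ursula ../sitecontroller-envs/sjc01 site.yml --limit bastion
```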
85 changes: 85 additions & 0 deletions Vagrantfile
@@ -0,0 +1,85 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
require 'yaml'

# INFRA_PLAYBOOK = ENV['INFRA_PLAYBOOK'] || abort("Please specify INFRA_PLAYBOOK env variable")

ANSIBLE_ARGS = ENV['ANSIBLE_ARGS'] ? ENV['ANSIBLE_ARGS'].split() : []

if File.file?('.vagrant/vagrant.yml')
  SETTINGS_FILE = ENV['SETTINGS_FILE'] || '.vagrant/vagrant.yml'
else
  SETTINGS_FILE = ENV['SETTINGS_FILE'] || 'vagrant.yml'
end

SETTINGS = YAML.load_file SETTINGS_FILE

BOX_URL = SETTINGS['default']['box_url'] || 'http://opscode-vm-bento.s3.amazonaws.com/vagrant/virtualbox/opscode_ubuntu-14.04_chef-provisionerless.box'
BOX_NAME = SETTINGS['default']['box_name'] || 'ubuntu-trusty'


# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # All Vagrant configuration is done here. The most common configuration
  # options are documented and commented below. For a complete reference,
  # please see the online documentation at vagrantup.com.

  # Every Vagrant virtual environment requires a box to build off of.

  # default is a small vm
  config.vm.box = BOX_NAME
  config.vm.box_url = BOX_URL
  config.vm.provider "virtualbox" do |v|
    v.memory = SETTINGS['default']['memory']
    v.cpus = SETTINGS['default']['cpus']
    v.gui = SETTINGS['default']['gui']
  end
  config.ssh.insert_key = false
  config.ssh.forward_agent = true

  SETTINGS['vms'].each do |name, vm|
    config.vm.define name do |c|
      c.vm.hostname = name
      if vm['ip_address'].is_a? String
        ip_addresses = [vm['ip_address']]
      else
        ip_addresses = vm['ip_address']
      end
      ip_addresses.each do |ip|
        c.vm.network :private_network, ip: ip
      end
      if vm.has_key?('memory') || vm.has_key?('cpus')
        c.vm.provider "virtualbox" do |v|
          v.memory = vm['memory'] if vm.has_key?('memory')
          v.cpus = vm['cpus'] if vm.has_key?('cpus')
          if vm.has_key?('gui')
            v.gui = vm['gui']
          end
        end
      end

      # allow vagrant provision to run
      c.vm.provision "fix-no-tty", type: "shell" do |s|
        s.privileged = false
        s.inline = "sudo sed -i '/tty/!s/mesg n/tty -s \\&\\& mesg n/' /root/.profile"
      end
      # performance booster for VMs running on SSDs
      c.vm.provision "shell", inline: "echo noop > /sys/block/sda/queue/scheduler"
    end
  end

  if SETTINGS.has_key?('ansible')
    config.vm.provision "ansible" do |ansible|
      ansible.playbook = 'site.yml'
      ansible.extra_vars = 'envs/example/defaults.yml'
      ansible.verbose = 'vvvv' if ENV['DEBUG']
      ansible.limit = 'all'
      ansible.raw_arguments = ANSIBLE_ARGS
      ansible.sudo = true
      ansible.groups = SETTINGS['ansible']['groups']
    end
  end

end
27 changes: 27 additions & 0 deletions ansible.cfg
@@ -0,0 +1,27 @@
[defaults]
roles_path = roles
hash_behaviour = merge
nocows = 1
nocolor = 0

timeout = 60
forks = 25
transport = ssh
host_key_checking = False

vars_plugins = plugins/vars
connection_plugins = plugins/connection
callback_plugins = plugins/callbacks
filter_plugins = plugins/filters

log_path = ursula.log

var_defaults_file = ../defaults.yml
ansible_managed = This file is managed by ursula. Any changes made will be overwritten.
retry_files_enabled = False

# Required so `sudo: yes` does not lose the environment variables, which hold the ssh-agent socket
sudo_flags=-HE

[ssh_connection]
pipelining=true
