Scripts and configuration for Concent deployment
The nginx-storage pod assumes that an ext4-formatted persistent disk with the name defined by the nginx_storage_disk variable in var.yml is provisioned, and mounts it in read-write mode.
To provision such a disk for the development cluster, use the following command:
gcloud compute disks create --size 30GB <disk name>
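For example, assuming the disk name from var.yml is nginx-storage-dev and the cluster runs in the europe-west1-b zone (both values are hypothetical):
# Hypothetical values; substitute the disk name from var.yml and your cluster's zone.
gcloud compute disks create nginx-storage-dev --size 30GB --zone europe-west1-b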
Before creating a new cluster in GKE, a PostgreSQL database and role have to be created for it. This operation requires privileges for creating and deleting arbitrary databases in Cloud SQL. For security reasons the role used to access the database from within the cluster should not have such wide privileges, so this step needs to be performed outside of the cluster deployment process.
cd concent-deployment/cloud/
ansible-playbook create-databases.yml \
--extra-vars cluster=$cluster \
--inventory ../../concent-deployment-values/ansible_inventory \
--user $user
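To confirm that the databases were created, you can list them in Cloud SQL; a quick sanity check, assuming you know the name of the Cloud SQL instance:
# <instance name> is the Cloud SQL instance backing the cluster.
gcloud sql databases list --instance <instance name>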
Scripts in this repository allow you to build containers and cluster configuration in three different scenarios. Each one has its own requirements:
- Build on your local machine. To do this you need to run Linux and install all the packages required to build and deploy containers. You build containers by running Makefiles and deploy with shell scripts. This mode is meant solely for deploying to the test cluster in development.
- Build inside the concent-builder-vm virtual machine. In this scenario all you need is Vagrant and VirtualBox. You use Ansible to configure the machine and then run playbooks that take care of executing all build and deployment steps. This mode is meant for development and for testing configuration changes meant for the concent-builder server itself.
- Build on the remote concent-builder server. In this scenario you run the Ansible playbooks on a remote machine. You obviously need access to that machine to do this. This is the recommended way to deploy in production.
In every scenario you need local copies of the concent-deployment and concent-deployment-values repositories:
git clone [email protected]:golemfactory/concent-deployment.git
git clone [email protected]:golemfactory/concent-deployment-values.git
The above assumes that the $cluster shell variable is set to the name of the cluster you're deploying to and that concent-deployment-values contains the configuration for that cluster.
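For example, to target a hypothetical development cluster:
# Hypothetical cluster name; it must match a configuration present in concent-deployment-values.
export cluster=dev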
Passwords and keys required for deployment are not stored in the repository. To deploy you need to get access to them and put them in the following locations:
concent-secrets/$cluster/concent-builder-service-private-key.json
concent-secrets/$cluster/secrets.py
concent-secrets/$cluster/var-secret.yml
The nginx instances need certificates and private keys to be able to serve HTTPS traffic. Put them in the following locations:
concent-secrets/$cluster/nginx-proxy-ssl.crt
concent-secrets/$cluster/nginx-proxy-ssl.key
concent-secrets/$cluster/nginx-storage-ssl.crt
concent-secrets/$cluster/nginx-storage-ssl.key
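Before deploying, you can quickly check that all of these files are in place:
ls -l concent-secrets/$cluster/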
It's best if your certificates are signed by a Certificate Authority (CA) because then it's possible for the client to verify their authenticity without having to know the public key ahead of time. This is not required though. You can generate and use a self-signed certificate.
Here's an example command that generates a 2048-bit RSA certificate valid for a year:
openssl req \
-x509 \
-nodes \
-sha256 \
-days 365 \
-newkey rsa:2048 \
-keyout nginx-proxy-ssl.key \
-out nginx-proxy-ssl.crt \
-config extensions.cnf
Example extensions.cnf file:
[req]
default_bits = 2048
prompt = no
default_md = sha256
x509_extensions = x509_ext
distinguished_name = dn
[dn]
C = <country>
ST = <state>
O = <organization>
OU = <organization_unit>
CN = <domain_name>
emailAddress = <email_address>
[x509_ext]
basicConstraints = CA:FALSE
subjectAltName = @alt_names
subjectKeyIdentifier = hash
[alt_names]
DNS.1 = <domain_name>
Replace <country>, <state>, <organization>, <organization_unit>, <domain_name> and <email_address> with actual values.
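After generating the certificate you can inspect it to verify that the values were applied correctly:
openssl x509 -noout -text -in nginx-proxy-ssl.crt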
Do this if you want to use the virtual machine for deployment.
- Install Vagrant, VirtualBox and Ansible using your system package manager.
- Create and configure the virtual machine:
cd concent-deployment/concent-builder-vm
vagrant up
This will run the configure.yml playbook for you.
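If you later change the configuration, you should be able to re-apply it to the existing machine by re-running the provisioning step:
vagrant provision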
Do this if you want to use the remote server for building and deploying.
- Run the configure.yml playbook:
cd concent-deployment/concent-builder/
ansible-playbook configure.yml \
--inventory ../../concent-deployment-values/ansible_inventory \
--user $user
Where the $user shell variable contains the name of your shell account on the remote machine.
All the instructions below assume that you're using the remote server.
Before following these instructions, please make sure that the Concent version you're building (i.e. concent_version in containers/versions.yml) is listed in the concent_versions dictionary in the var-concent-<cluster>.yml file.
At any given time there can be multiple Concent versions deployed to different clusters within the same environment (e.g. v1.8 and v1.9 on dev, v1.9 and v2.0 on staging, etc.) and this dictionary contains configuration values that are not the same for all those clusters.
Without providing configuration values there you won't be able to generate Kubernetes cluster configuration or use Ansible playbooks to deploy to the cluster.
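A quick way to check is to compare the two files; a sketch, assuming concent-deployment-values is checked out next to concent-deployment (adjust the paths to match your checkout):
# Paths below are assumptions about where the repositories and values file live.
grep concent_version concent-deployment/containers/versions.yml
grep -A 5 concent_versions concent-deployment-values/var-concent-$cluster.yml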
cd concent-deployment/concent-builder/
ansible-playbook install-repositories.yml \
--extra-vars cluster=$cluster \
--inventory ../../concent-deployment-values/ansible_inventory \
--user $user
ansible-playbook build-test-and-push.yml \
--extra-vars cluster=$cluster \
--inventory ../../concent-deployment-values/ansible_inventory \
--user $user
Before you can deploy containers, you need to make sure that certificates, keys and passwords used to configure those containers are available on the cluster. Deploy them with:
cd concent-deployment/cloud/
ansible-playbook cluster-deploy-secrets.yml \
--extra-vars cluster=$cluster \
--inventory ../../concent-deployment-values/ansible_inventory \
--user $user
This step is necessary only when deploying the cluster for the first time or when the secrets change. Secret deployment is kept apart from application deployment specifically so that the two can be performed independently, possibly from different machines.
cd concent-deployment/concent-builder/
ansible-playbook deploy.yml \
--extra-vars cluster=$cluster \
--inventory ../../concent-deployment-values/ansible_inventory \
--user $user
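Once the playbook finishes, you can verify that the pods came up; this assumes kubectl on the machine you run it from is already configured to talk to the cluster:
kubectl get pods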
This step is necessary only when we move persistent disks or the IP address from nginx-proxy on one cluster to another (as specified in the var-concent-<cluster>.yml file in concent-deployment-values).
That's because when you update the var file and deploy to a new cluster, the previous cluster still has the disks or the IP attached.
Running this playbook updates the configuration of the old cluster so that the disks and IPs are released.
The new cluster will then claim them automatically.
cd concent-deployment/concent-builder/
ansible-playbook redeploy-nginx-proxy-router.yml \
--extra-vars cluster=$cluster \
--inventory ../../concent-deployment-values/ansible_inventory \
--user $user
concent-api and other Django apps will try to connect to a Cloud SQL database configured in their settings.
Control and storage clusters have separate databases that need to be created and migrated individually.
Set $cluster_type to control or storage before proceeding.
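For example:
export cluster_type=control  # or: storage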
These commands are meant to be executed on every cluster separately.
Initialization must only be performed on a newly created cluster or if we want to clear the data and start from scratch:
cd concent-deployment/concent-builder/
ansible-playbook job-cleanup.yml \
--extra-vars cluster=$cluster \
--inventory ../../concent-deployment-values/ansible_inventory \
--user $user
ansible-playbook reset-db.yml \
--extra-vars "cluster=$cluster cluster_type=$cluster_type" \
--inventory ../../concent-deployment-values/ansible_inventory \
--user $user
WARNING: This operation removes all the data from an existing database.
From time to time a Concent update may require making changes to the database schema. This is done using Django migrations. Migrations should be executed after the containers with the new version have been deployed and all the containers running the old version have been deleted.
cd concent-deployment/concent-builder/
ansible-playbook job-cleanup.yml \
--extra-vars cluster=$cluster \
--inventory ../../concent-deployment-values/ansible_inventory \
--user $user
ansible-playbook migrate-db.yml \
--extra-vars "cluster=$cluster cluster_type=$cluster_type" \
--inventory ../../concent-deployment-values/ansible_inventory \
--user $user
It's safe to run migrations even if there are no changes - Django will detect that and simply leave the schema as is.
Step-by-step instructions for building and running nginx-storage locally.
- Install dependencies:
apt-get update
apt-get install make python3 python-pip python3-yaml docker.io
pip install yasha
- Ensure that your user is in the docker group and that this group exists. This is necessary to be able to run docker commands without root privileges. Note that the change will not take effect until you log out of the current shell session.
sudo groupadd docker
sudo usermod --append --groups docker <user>
- Go to the concent-deployment repository and run the makefile to build the nginx-storage image:
cd <path to concent deployment repository>/containers/
make nginx-storage
- Run nginx-storage:
docker run \
--rm \
--hostname nginx-storage-server \
--network host \
--name nginx-storage \
nginx-storage
- If everything went OK, you should now be able to reach nginx-storage on localhost:
curl http://localhost:8001/
In addition to Concent itself, this repository contains files necessary to build the Concent Signing Service.
The Makefile builds a Docker container but also produces a source package that includes the Dockerfile and all source files needed to build it.
The package can be used to build the container without having to set up concent-deployment.
This is only needed if you want to build the package yourself. If you have received the package and only want to build and run the docker container, you can skip this section.
- Install dependencies needed to build containers and render configuration files from templates. Example for Ubuntu:
apt-get update
apt-get install make python3 python-pip python3-yaml
apt-get install docker.io
pip install yasha
- Ensure that you can start docker containers. It's recommended to add your user to the docker group (make sure that this group exists) so that you don't have to do this as root. Note that the change will not take effect until you log out of the current shell session. Example for Ubuntu:
sudo groupadd docker
sudo usermod --append --groups docker <your user name>
- Run make:
cd containers/
make concent-signing-service-package
- Extract the package.
- Go to the signing_service/ directory inside the package and build the Docker image:
cd signing_service/
docker build --tag concent-signing-service:$(cat signing-service/RELEASE-VERSION) .
docker tag \
concent-signing-service:$(cat signing-service/RELEASE-VERSION) \
concent-signing-service:latest
To run it in a docker container with access to your local network interface, run:
docker run \
--detach \
--env ETHEREUM_PRIVATE_KEY \
--env SIGNING_SERVICE_PRIVATE_KEY \
--env SENTRY_DSN \
--network host \
--hostname signing-service \
--name signing-service \
--volume /var/log/concent/daily_thresholds:/usr/lib/signing_service/signing-service/daily_thresholds \
--restart on-failure \
concent-signing-service \
--concent-cluster-host concent.golem.network \
--concent-public-key 85cZzVjahnRpUBwm0zlNnqTdYom1LF1P1WNShLg17cmhN2Us \
--concent-cluster-port 9055 \
--ethereum-private-key-from-env \
--signing-service-private-key-from-env \
--sentry-dsn-from-env \
--sentry-environment mainnet
This assumes that:
- The service can connect to a Concent cluster at concent.golem.network:9055.
- 85cZzVjahnRpUBwm0zlNnqTdYom1LF1P1WNShLg17cmhN2Us is Concent's public key encoded in base64. The one given above is just an example.
- There's a shell variable called ETHEREUM_PRIVATE_KEY and it contains the base64-encoded private key of the Concent contract.
- There's a shell variable called SIGNING_SERVICE_PRIVATE_KEY and it contains the base64-encoded private key for signing Golem Messages created by the Signing Service.
- There's a shell variable called SENTRY_DSN that contains the secret ID that allows submitting crash reports to a Sentry project. If you use it, you need to provide a valid DSN for the project that should receive the reports. Otherwise just skip the --sentry-dsn-from-env parameter and the SENTRY_DSN variable.
- mainnet is the name of the Sentry environment that should be included in error reports.
- /var/log/concent/daily_thresholds/ is a location in the host system where the Signing Service can create daily threshold reports. It must be writable by a user with a UID of 999, which is the UID under which the application runs in the container.
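Before running the container, export the variables passed via --env and make sure the report directory exists and is writable by UID 999; a sketch with placeholder values:
# Placeholder values; substitute real base64-encoded keys and a real Sentry DSN.
export ETHEREUM_PRIVATE_KEY="<base64-encoded key>"
export SIGNING_SERVICE_PRIVATE_KEY="<base64-encoded key>"
export SENTRY_DSN="<your Sentry DSN>"
# The application inside the container runs with UID 999.
sudo mkdir --parents /var/log/concent/daily_thresholds
sudo chown 999 /var/log/concent/daily_thresholds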
Note that the service will crash on errors.
The host system is responsible for restarting it in that case.
If it's running in a Docker container you can easily achieve this with the --restart on-failure option.
The concent-vm/ directory contains a Vagrant configuration that creates a virtual machine with Concent set up for development.
The machine has multiple purposes:
- It can be used to run and debug Concent tests in a reproducible environment.
- It serves as a reference for setting up a Concent development environment.
- It can be set up to run Golem from source.
You need Vagrant >= 2.2.0. Install it with your system package manager.
The machine runs on VirtualBox. Install it with your system package manager.
VirtualBox provides several kernel modules and requires them to be loaded before you can start any virtual machine.
These modules need to be built for your specific kernel version and rebuilt again whenever you update your kernel.
It's recommended to use DKMS to do this automatically.
Most distributions provide a package named virtualbox-dkms or virtualbox-host-dkms that provides module sources and configures your system to build them.
On some systems the modules are not loaded automatically after the installation.
If you can't start a machine, try to load the vboxdrv kernel module manually first:
sudo modprobe vboxdrv
These modules are loaded automatically when the system starts so you should no longer have to do this after the next reboot.
Install the vagrant-vbguest plugin:
vagrant plugin install vagrant-vbguest
concent-vm/Vagrantfile performs basic setup but does not install Concent or Golem.
It installs system packages and starts services that may be needed by either.
This step needs access to concent-deployment sources.
CONCENT_DEPLOYMENT_VERSION specifies the git branch/tag/commit to use.
Sources are downloaded from GitHub (it does not copy the code from your local repository).
cd concent-vm/
export CONCENT_DEPLOYMENT_VERSION=master
vagrant up
This step requires two configuration files.
concent-vm/extra_settings.py is a Python script that will be imported into the automatically generated local_settings.py in the machine.
You can use it to provide secrets or override default settings.
It can be empty if you're fine with the defaults.
concent-vm/signing-service-env.sh is a shell script meant to be sourced immediately before starting an instance of Concent's Signing Service and can define values of environment variables used by it.
At minimum it should define the following variables:
export ETHEREUM_PRIVATE_KEY="..."
export SIGNING_SERVICE_PRIVATE_KEY="..."
After creating the configuration, it's enough to run the following playbook:
ansible-playbook install-concent.yml \
--extra-vars concent_version=master \
--private-key .vagrant/machines/default/virtualbox/private_key \
--user vagrant \
--inventory inventory
The concent_version parameter determines which branch/tag/commit from the concent repository will be deployed in the machine.
The version listed in containers/versions.yml in the concent-deployment repository is used by default.
Golem installation does not require any extra configuration. Just run the following playbook:
ansible-playbook install-golem.yml \
--extra-vars golem_version=develop \
--private-key .vagrant/machines/default/virtualbox/private_key \
--user vagrant \
--inventory inventory
The golem_version parameter determines which branch/tag/commit from the golem repository will be deployed in the machine.
The version listed in containers/versions.yml in the concent-deployment repository is used by default.
Please read the Getting Started page in the Vagrant docs to get familiar with basic operations like starting the machine, logging into it via ssh or destroying it.
The machine provides several scripts that automate common development tasks.
They're located in /home/vagrant/bin/, which is in the user's PATH, so you can execute them from any location.
This is a helper script that loads a Python virtualenv with all dependencies required to run Concent and enters the directory that contains a working copy of the concent repository.
This script is meant to be sourced rather than executed:
source concent-env.sh
Use it when you want to be able to run scripts from the repository (start Concent, run unit tests, etc.). All the helper scripts provided with the machine source this file automatically when needed.
This script starts Concent, including:
- A development Django server (manage.py runserver).
- 3 Celery worker instances attached to the right queues.
- Signing Service.
- Middleman.
Migrates the databases, preserving their content.
This script reinitializes Concent, removing all data stored by it so that you can start from scratch:
- Destroys and recreates the databases.
- Migrates the databases.
- Empties RabbitMQ queues.
- Re-creates the superuser account.
- Restarts all the services.
- Does not remove blockchain data.
Updates Concent to the version (tag/branch/commit) specified in the first parameter (master by default):
- Fetches the latest code from git.
- Checks out the specified version.
- Destroys and recreates the virtualenv.
- Installs Concent dependencies in the virtualenv.
- Migrates the databases.
Similar to concent-env.sh.
Prepares your shell for work with the Golem working copy checked out in the machine:
- Loads the virtualenv with Golem's dependencies.
- Changes the directory to golem.
This script is meant to be sourced rather than executed:
source golem-env.sh
Starts Golem without GUI.
- Sources golem-env.sh.
- Starts golemapp in console mode and passes all the command-line arguments to it.
You can use it to start Golem like this:
golem-run-console-mode.sh \
--accept-terms \
--password $password
$password needs to contain your Golem password.
You can see all the available golemapp options by running:
golem-run-console-mode.sh --help
Here's some extra information you should be aware of when using the machine:
- Vagrantfile creates a virtual disk image in concent-vm/disk/blockchain_disk.vdi. This image is meant to store the blockchain data needed for geth to connect to and use the Ethereum testnet. Since downloading this data can take a while, the machine is configured to never delete it on its own. It always gets detached before you destroy the machine.
- This image is quite large (currently 30 GB by default) so make sure you have enough disk space. You can manually tweak the size in Vagrantfile but keep in mind that it has to be big enough to store the whole blockchain.
- If you want to start from scratch, you need to remove the file manually. It should not be necessary in normal circumstances. Geth can deal with a partially downloaded blockchain.
- The following services are automatically started inside the machine when it boots:
- Docker
- PostgreSQL (accepts connections from within the machine without a password)
- RabbitMQ (runs in a Docker container)
- Geth (runs in a Docker container)
- nginx (configured to act as nginx-storage, built from concent-deployment)
- The initialization playbook automatically creates the PostgreSQL databases required by Concent.
- Concent, Signing Service and Golem do not start automatically. Since you may want to run only one or the other, you need to start them using the helper scripts listed above.