Juju has many Continuous Integration (CI) tests that run whenever new code is checked in. These are long-running integration tests that do actual deploys to clouds, to ensure that juju works in real-world environments.

Local Environment Setup

These instructions assume you're running on Ubuntu; the steps may differ on other platforms.

First we need to get the code. The CI tests are Python scripts hosted on Launchpad, so we'll use bzr to branch them.

sudo apt-get install bzr

Now we can get the CI tests:

bzr branch lp:juju-ci-tools
cd juju-ci-tools
bzr branch lp:juju-ci-tools/repository

Once we have the code, we can use the Makefile to install all the dependencies needed to run the tests.

make install-deps

juju-ci-tools is where the actual tests live; the repository branch is a collection of charms used by some of the tests.

The Scripts

The scripts under juju-ci-tools are generally divided into three categories:

  • The CI tests themselves, which get run by Jenkins and show pass/fail. These scripts have the prefix "assess", e.g. assess_recovery.py.
  • Unit tests of the code in the CI tests and the helper scripts. These scripts have the prefix "test", e.g. test_assess_recovery.py.
  • Helper scripts used by the CI tests. These are generally any file without one of the aforementioned prefixes.

Running Unit Tests (tests of the CI testing code)

The unit tests are written using Python's unittest module.

To run all the tests, run make test. To run the tests for a particular test file, run python -m unittest <module_name>. For example, to run the unit tests in test_assess_recovery.py, run python -m unittest test_assess_recovery.
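You can also run a single test class or test method by giving unittest a dotted path. For example (the class and method names here are hypothetical; check the test file for the real ones):

python -m unittest test_assess_recovery.TestParseArgs
python -m unittest test_assess_recovery.TestParseArgs.test_defaults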

Running CI Tests

The CI tests are just normal Python scripts. Their exit status indicates success or failure (0 for success, non-zero for failure). You can just run the script, and it'll tell you the arguments it expects. In general, the tests expect that you have a working juju binary and an environments.yaml file with usable environments. Most of the scripts ask for the path to your local juju binary and the name of an environment from your environments.yaml; the script uses these to bootstrap the indicated environment and run its tests.
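For reference, a minimal environments.yaml for the local (LXC) provider in Juju 1.x might look like the snippet below. This is a sketch, not a canonical config; the exact keys depend on which provider you're targeting.

default: local
environments:
  local:
    type: local
    default-series: trusty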

If the test needs to deploy a test charm, you'll need to set the JUJU_REPOSITORY environment variable to the path where you checked out lp:juju-ci-tools/repository.

For example, to run the assess_log_rotation CI test using the local environment on your machine, the incantation would look like this (note that the './logs' directory must already exist):

~/juju-ci-tools$ JUJU_REPOSITORY=./repository ./assess_log_rotation.py machine $GOPATH/bin/ local ./logs

This will bootstrap an environment, deploy a test charm, call some actions on the charm, and then assess the results.

That's it. You've just run your first juju CI test.

Creating a New CI Test

Follow the example of assess_log_rotation.py, and don't forget unit tests!
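To give a feel for the overall shape of an assess script (parse arguments, drive the juju binary under test, exit 0 on success and non-zero on failure), here is a bare-bones sketch. It deliberately uses only the standard library; the real scripts rely on the shared helper modules in juju-ci-tools, so treat every name below as illustrative rather than as the project's actual API.

#!/usr/bin/env python
"""Illustrative skeleton of an assess-style CI test."""
from __future__ import print_function

from argparse import ArgumentParser
import subprocess
import sys


def parse_args(argv=None):
    parser = ArgumentParser(description='Assess a hypothetical juju feature.')
    parser.add_argument('juju_bin', help='Path to the juju binary under test.')
    parser.add_argument('env', help='Environment name from environments.yaml.')
    parser.add_argument('logs', help='Directory to write logs into.')
    return parser.parse_args(argv)


def assess_feature(juju_bin, env):
    # Bootstrap the environment, exercise the feature, and raise on failure.
    subprocess.check_call([juju_bin, 'bootstrap', '-e', env])
    try:
        # ... deploy a test charm, run actions, and check the results here ...
        pass
    finally:
        subprocess.check_call(
            [juju_bin, 'destroy-environment', env, '--force', '-y'])


def main(argv=None):
    args = parse_args(argv)
    try:
        assess_feature(args.juju_bin, args.env)
    except Exception as e:
        print('FAIL: {}'.format(e))
        return 1
    return 0


if __name__ == '__main__':
    sys.exit(main())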

Run make lint early and often. (You may need to sudo apt-get install python-flake8.) If you forget, you can run autopep8 to fix certain issues, but code that's been hand-written to follow PEP8 is generally more readable than code that has been automatically reformatted after the fact. It's preferable to just run make lint, fix the formatting by hand, and absorb the style rules that way.
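On a fresh Ubuntu machine, the sequence might look like this (assess_my_feature.py is a placeholder for whichever file you're working on, and you may need to install autopep8 separately):

sudo apt-get install python-flake8
make lint
autopep8 --in-place assess_my_feature.py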

If your tests require new charms, please write them in Python.
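A charm's hooks are just executables that Juju runs, so a trivial Python hook (for example, hooks/install in a test charm) might look like the sketch below. It only logs a message via the juju-log hook tool and is meant as a shape example, not a complete charm.

#!/usr/bin/env python
# hooks/install -- minimal install hook written in Python.
import subprocess

subprocess.check_call(['juju-log', 'install hook ran'])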

If you have questions or need help, Aaron Bentley from Juju QA has volunteered as a contact person ([email protected] or abentley on IRC).

Landing your code

Push your code to Launchpad, create a merge proposal, and ask a member of Juju QA to review it. When you get their review, remember to check for inline comments. When you have addressed all the review comments, push your code to Launchpad and add a comment to the merge proposal.

Once the code has been approved, a member of Juju QA can land it.

Integrating your test into CI testing

This will not happen automatically. The Jenkins config must be updated. Generally you can ask your reviewer to do this.

Your test will probably be implemented as a non-voting test initially, until it is clear that the test is reliable enough that its failure should curse a build. If the QA team determines that the test is not reliable enough, they may ask you to update it.
