CI Tests
Juju has many Continuous Integration (CI) tests that run whenever new code is checked in. These are long-running integration tests that do actual deploys to clouds, to ensure that juju works in real-world environments.
These instructions assume you're running on Ubuntu; they may differ on other platforms.
First we need to get the code. The CI tests are Python scripts hosted on Launchpad, so we'll use bzr to check them out.
sudo apt-get install bzr
Now we can get the CI tests:
bzr branch lp:juju-ci-tools
cd juju-ci-tools
bzr branch lp:juju-ci-tools/repository
Once we have the code, we can use the Makefile to install all the dependencies needed to run the tests.
make install-deps
`juju-ci-tools` is where the actual tests live; the `repository` branch is a collection of charms used by some of the tests.
The scripts under juju-ci-tools are generally divided into three categories:
- The CI tests themselves, which get run by Jenkins and show pass/fail. These scripts have the prefix "assess", e.g. `assess_recovery.py`.
- Unit tests of code in the CI tests and the helper scripts. These scripts are in the `tests` subdirectory and have the prefix "test", e.g. `tests/test_assess_recovery.py`.
- Helper scripts used by the CI tests. These are generally any file without one of the aforementioned prefixes.
The unit tests are written using Python's unittest module.
To run all the tests, run `make test`. To run the tests for a particular test file, run `python -m unittest <module_name>`. For example, to run the unit tests in `test_assess_recovery.py`, run `python -m unittest tests.test_assess_recovery`.
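For illustration, here is a minimal sketch of what a unit test in the `tests` subdirectory might look like. The helper function is invented and defined inline purely so the example is self-contained; real unit tests import the helpers they exercise from juju-ci-tools.

```python
# A self-contained sketch; the helper under test is defined inline here so the
# example runs on its own. Real unit tests import helpers from juju-ci-tools.
from unittest import TestCase


def make_temp_env_name(base, suffix):
    """Hypothetical helper: build a temporary environment name."""
    if not base:
        raise ValueError('base environment name required')
    return '{}-{}'.format(base, suffix)


class TestMakeTempEnvName(TestCase):

    def test_joins_base_and_suffix(self):
        self.assertEqual('local-temp', make_temp_env_name('local', 'temp'))

    def test_rejects_empty_base(self):
        with self.assertRaises(ValueError):
            make_temp_env_name('', 'temp')
```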
The CI tests are just normal Python files. Their return value indicates success or failure (0 for success, nonzero for failure). You can just run a file and it will tell you the arguments it expects. In general, the tests expect that you have a working juju binary and an environments.yaml file with usable environments. Most of the scripts ask for the path to your local juju binary and the name of an environment in your environments.yaml; the script will use these to bootstrap the indicated environment and run its tests.
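If you don't already have a usable environments.yaml, a minimal one for the 1.x local provider might look roughly like this (the environment name and provider type are just an illustration; use whatever cloud you actually want to test against):

```yaml
# ~/.juju/environments.yaml -- minimal illustrative example
default: local
environments:
  local:
    type: local
```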
If the test needs to deploy a test charm, you'll need to set the JUJU_REPOSITORY environment variable to the path where you checked out lp:juju-ci-tools/repository.
Help can be printed for any test script, for example `./assess_log_rotation.py --help`. To run the assess_log_rotation CI test using the local environment on your machine, the incantation would look like this (note that the `./logs` directory must already exist):
~/juju-ci-tools$ mkdir logs; JUJU_REPOSITORY=./repository ./assess_log_rotation.py local $GOPATH/bin/ ./logs local_temp machine
This will bootstrap an environment, deploy a test charm, call some actions on the charm, and then assess the results.
That's it. You've just run your first Juju CI test.
If this is your first time, consider asking one of the QA team to pair-program on it with you.
Start by making a copy of `template_assess.py.tmpl`, and don't forget unit tests!
Run `make lint` early and often. (You may need to do `sudo apt-get install python-flake8`.) If you forget, you can run autopep8 to fix certain issues. Please use `--ignore E24,E226,E123` with autopep8. Code that's been hand-written to follow PEP8 is generally more readable than code which has been automatically reformatted after the fact. By running `make lint` often, you'll absorb the style and write nice PEP8-compliant code.
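For example, to let autopep8 rewrite a file in place with those checks ignored, something like this should work (the file name is just a placeholder):

```
autopep8 --ignore E24,E226,E123 --in-place assess_my_feature.py
```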
Please avoid creating diffs longer than 400 lines. If you are writing a new test, that may mean creating it as a series of branches. You may find bzr-pipeline to be a useful tool for managing a series of branches.
If your tests require new charms, please write them in Python.
If you have questions or need help, Aaron Bentley from Juju QA has volunteered as a contact person ([email protected] or abentley on IRC).
By using `template_assess.py.tmpl` as a base, many of these requirements will be satisfied automatically.
("must" and "should" are used in the RFC2199 sense.)
Tests should be compatible with all versions of Juju under development, including those that are in maintenance and only receiving bugfixes.
Tests must exit with 0 on success, nonzero on failure.
Tests must accept a path to the juju binary under test. A path including the binary name (e.g. `mydir/bin/juju`) is expected. (Some older tests use a path to the directory, but this is deprecated.)
Tests that use an environment must accept an environment name to use, so that they can be run on different substrates by specifying different environments.
Tests that use an environment must permit a temporary runtime environment name to be supplied, so that multiple tests using the same substrate can be run at the same time.
Tests must run juju with `test-mode: True` by default, so that they do not artificially inflate statistics. This is handled automatically by `jujupy.temp_bootstrap_env`.
Tests should allow an agent URL to be specified, so that a person manually testing a Juju QA revision build does not need to update the `agent-url` in their config in order to use the testing streams.
Tests should allow `--upload-tools` to be specified, so that a person manually testing a Juju QA or personal build can do so without needing streams.
Tests whose results could vary by series should allow `default-series` to be specified.
Tests should depend only on standard Juju environment variables such as `JUJU_HOME` and `JUJU_REPOSITORY`. They should not depend on feature flags. Feature flags should only be provided to the juju versions that require them. Ideally, only operations that require feature flags should have them. This means that test code should not supply feature flags. The only code that should be aware of feature flags should be `jujupy.EnvJujuClient` and its subclasses.
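As a rough illustration of how those requirements fit together, here is a stripped-down sketch of the shape an assess script can take. This is not the real `template_assess.py.tmpl`: the argument names and helper calls are illustrative only, and real scripts should use the template and the jujupy helpers described above.

```python
#!/usr/bin/env python
"""Illustrative skeleton only -- not the real template_assess.py.tmpl."""
from __future__ import print_function

from argparse import ArgumentParser
import subprocess
import sys


def parse_args(argv=None):
    parser = ArgumentParser(description='Assess a hypothetical juju feature.')
    parser.add_argument('env', help='Name of the environment to bootstrap.')
    parser.add_argument('juju_bin',
                        help='Full path to the juju binary under test.')
    parser.add_argument('logs', help='Directory to store logs in.')
    parser.add_argument('temp_env_name',
                        help='Temporary environment name for this run.')
    parser.add_argument('--agent-url',
                        help='URL of the agent/tools stream to use.')
    parser.add_argument('--series', help='Default series for deployed charms.')
    parser.add_argument('--upload-tools', action='store_true',
                        help='Bootstrap with locally built tools.')
    return parser.parse_args(argv)


def assess(args):
    # Placeholder for the real checks: drive the juju binary under test and
    # raise if the behaviour being assessed is wrong.
    subprocess.check_call([args.juju_bin, 'version'])


def main():
    args = parse_args()
    try:
        assess(args)
    except Exception as e:
        print('FAIL: {}'.format(e), file=sys.stderr)
        return 1
    return 0


if __name__ == '__main__':
    sys.exit(main())
```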
The Juju QA general coding guidelines are here: https://docs.google.com/document/d/1dL3xdw_UwpH6GpXJIvlwqm9MZ8dY5yABKlfpFw2vLmA/edit
Push your code to Launchpad, create a merge proposal, and ask a member of Juju QA to review it. When you get their review, remember to check for inline comments. When you have addressed all the review comments, push your code to Launchpad and add a comment to the merge proposal.
Once the code has been approved, a member of Juju QA can land it.
This will not happen automatically. The Jenkins config must be updated. Generally you can ask your reviewer to do this.
Your test will probably be implemented as a non-voting test initially, until it is clear that the test is reliable enough that its failure should curse a build. If the QA team determines that the test is not reliable enough, they may ask you to update it.