AddingTest
Adding a test is probably the easiest development activity to do.
Each test is completely contained in its own subdirectory (either in client/tests for client-side tests or server/tests for server-side tests). The normal components are:
- An example control file (eg tests/mytest/control)
- A test wrapper (eg tests/mytest/mytest.py)
- Some source code for the test (if it's not all done in just the python script)
Start by taking a look over an existing test (eg tests/dbench). First, note that the name of the subdirectory (tests/dbench), the test wrapper (dbench.py), and the name of the class inside the test wrapper (dbench) all match. Make sure you do this in your new test too ;-)
The control file is trivial, just:

    job.run_test('dbench')
That just runs dbench with its default arguments - mostly we try to provide sensible defaults to get you up and running easily, and you can override most things later.
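For example, to run three iterations you can pass the iterations argument straight from the control file. Any other keyword arguments are only valid if the test wrapper accepts them - the nprocs value below is just an illustration, not a documented dbench default:

    # Three iterations; 'iterations' is handled by the framework itself.
    # 'nprocs=4' only works if the wrapper's run_once() takes that keyword.
    job.run_test('dbench', iterations=3, nprocs=4)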
There's a tarball for the source code (dbench-3.04.tar.gz) - this will get extracted under src/ later. Most of what you're going to have to do is in the Python wrapper. Look at dbench.py - you'll see it inherits from the main test class, and defines a version (more on that later). You'll also see the following functions (a minimal skeleton pulling them together is sketched after the list):
- initialize() - This is run before everything, every time the test is run.
- setup() - This is run when you first use the test, and normally is used to compile the source code.
- run_once() - This is called by job.run_test N times, where N is controlled by the iterations parameter to run_test (defaulting to one). If you have any profilers enabled, it also gets called one additional time with the profilers turned on.
- postprocess_iteration() - This processes any results generated by the test iteration, and writes them out into a keyval. It's generally not called for the profiling iteration, as that may have different performance.
- postprocess() - [DEPRECATED] This is called once to do postprocessing of test iterations, after all iterations are complete. Please use postprocess_iteration instead.
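Put together, a minimal wrapper has roughly this shape. This is only a sketch: "mytest", its tarball and binary names are placeholders, and the import path may differ between autotest versions.

    import os
    from autotest_lib.client.bin import test, utils


    class mytest(test.test):
        version = 1

        def initialize(self):
            # Runs every time, before everything else.
            self.results = []

        def setup(self, tarball='mytest-1.0.tar.gz'):   # hypothetical tarball name
            # One-off: extract and build the source under src/.
            tarball = utils.unmap_url(self.bindir, tarball, self.tmpdir)
            utils.extract_tarball_to_dir(tarball, self.srcdir)
            os.chdir(self.srcdir)
            utils.system('./configure')
            utils.system('make')

        def run_once(self):
            # Run the benchmark once and keep its output for postprocessing.
            cmd = os.path.join(self.srcdir, 'mytest')   # hypothetical binary name
            self.results.append(utils.system_output(cmd))

        def postprocess_iteration(self):
            # Parse this iteration's output and write key=value results.
            pass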
The test will result in a PASS unless you throw an exception, in which case it will FAIL (error.TestFail), WARN (error.TestWarn) or ERROR (anything else). Most things that go wrong in Python will throw an exception for you, so you don't normally have to worry about this much - you can check extra things and throw an exception yourself if you need to. Now let's look at those functions in some more detail.
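For example, if you wanted to fail explicitly on a bad result, you could raise one of those exceptions yourself from run_once(). This is a sketch - the threshold and the throughput variable are made up, and the import path may vary between autotest versions:

    from autotest_lib.client.common_lib import error

    if throughput < 100:    # hypothetical threshold and variable
        raise error.TestFail('throughput %.1f MB/s is below the expected minimum'
                             % throughput)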
setup() is the one-off setup function for this test. It won't run again unless you change the version number (so be sure to bump that if you change the source code). In this case it'll extract dbench-3.04.tar.gz into src/ and compile it for us. Look at the first few lines:
    # http://samba.org/ftp/tridge/dbench/dbench-3.04.tar.gz
    def setup(self, tarball='dbench-3.04.tar.gz'):
        tarball = utils.unmap_url(self.bindir, tarball, self.tmpdir)
There's a comment saying where we got the source from. The function header defines the default tarball to use for the source code (you could override this with a different dbench version from the control file, but that's highly unusual). Lastly there's some magic with unmap_url - that's just in case you overrode it with a URL; it'll download the file for you and return the local path. Just copy that bit.
    utils.extract_tarball_to_dir(tarball, self.srcdir)
    os.chdir(self.srcdir)
    utils.system('./configure')
    utils.system('make')
OK, so this just extracts the tarball into self.srcdir (which is pre-set for you to src/ under the test directory), cd's into that source dir, and runs ./configure; make, just as you would for most standard compilations.

Note that we use the local utils.system() wrapper, not os.system() - it will automatically throw an exception if the return code isn't 0, etc.
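Putting those pieces together, setup() looks roughly like this:

    # http://samba.org/ftp/tridge/dbench/dbench-3.04.tar.gz
    def setup(self, tarball='dbench-3.04.tar.gz'):
        # Resolve a URL override to a local file, or use the bundled tarball.
        tarball = utils.unmap_url(self.bindir, tarball, self.tmpdir)
        utils.extract_tarball_to_dir(tarball, self.srcdir)
        os.chdir(self.srcdir)
        utils.system('./configure')
        utils.system('make')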
That's all!
run_once() actually executes the test. The core of what it's doing is just:

    self.results.append(utils.system_output(cmd))
That says "run dbench and add the output to self.results". We need to record the output so that we can process it after each iteration in postprocess_iteration.
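A fuller run_once() along those lines might look like the sketch below. The nprocs/seconds parameters and the dbench command-line flags are assumptions for illustration, not necessarily what dbench.py actually uses:

    def run_once(self, nprocs=4, seconds=60):
        # Build the command from the binary compiled in setup().
        # Flags are illustrative - check `dbench --help` for the real ones.
        dbench = os.path.join(self.srcdir, 'dbench')
        cmd = '%s -t %d %d' % (dbench, seconds, nprocs)
        self.results.append(utils.system_output(cmd))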
For performance benchmarks, postprocess_iteration() should produce a keyval file of "key=value" pairs describing how well the benchmark ran. The key is just a string, and the value is a floating point or integer number. For dbench we produce just two performance metrics - "throughput" and "nprocs". The function is called once per iteration (except for the optional profiling iteration), and we end up with a file that looks like this:
    throughput = 217
    nprocs = 4

    throughput = 220
    nprocs = 4

    throughput = 215
    nprocs = 4
Note that the above was from a run with three iterations - we ran the benchmark three times, and so print three sets of results, with each set separated by a blank line.
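A postprocess_iteration() that produces those keyvals could look roughly like the sketch below. It assumes the base test class's write_perf_keyval() helper and a dbench summary line of the form "Throughput 217.217 MB/sec 4 procs" - check your autotest and dbench versions before copying it:

    # (needs `import re` at the top of the wrapper)
    def postprocess_iteration(self):
        # Pull the numbers out of the last iteration's output, e.g.
        #   Throughput 217.217 MB/sec  4 procs
        pattern = re.compile(r'Throughput (.*?) MB/sec (.*?) procs')
        throughput, procs = pattern.search(self.results[-1]).groups()
        self.write_perf_keyval({'throughput': float(throughput),
                                'nprocs': int(procs)})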
The remaining methods - warmup(), cleanup() and execute() - aren't implemented in the dbench test, but you can implement them if you need to take advantage of them.
warmup() is for performance tests that need to do some pre-test priming to make the results valid. It is called by job.run_test before running the test itself, but after all the setup.
cleanup() is used for any post-test cleanup. If the test may have left the machine in a broken state, or your initialize() made a large mess (e.g. used up most of the disk space creating test files) that could cause problems for subsequent tests, then it's probably a good idea to write a cleanup() that undoes this. It always gets called, regardless of the success or failure of the test execution.
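For example, if your initialize() created a large scratch file (dbench needs nothing of the sort - this is purely hypothetical), a cleanup() like this would give the space back:

    def initialize(self):
        self.results = []
        # Hypothetical: create a 1GB scratch file for the test to work on.
        self.scratch = os.path.join(self.tmpdir, 'scratch.img')
        utils.system('dd if=/dev/zero of=%s bs=1M count=1024' % self.scratch)

    def cleanup(self):
        # Always runs, pass or fail - remove the scratch file if it exists.
        scratch = getattr(self, 'scratch', None)
        if scratch and os.path.exists(scratch):
            os.remove(scratch)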
execute() drives the test by calling warmup, run_once and postprocess. The base test class provides an implementation that already supports profilers and multiple test iterations, but if you need to change this behaviour you can override the default implementation with your own. Note that if you do, you must provide the support for multi-iteration tests and/or profiling runs yourself.
Now just create a new subdirectory under tests, and add your own control file, source code, and wrapper. It's probably easiest to just copy dbench.py to mytest.py and edit it - remember to change the name of the class at the top, though.
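For a new test called mytest, you'd end up with a layout something like this (the tarball is only needed if your test builds source, and its name is just an example):

    client/tests/mytest/
        control               # contains: job.run_test('mytest')
        mytest.py             # defines class mytest(test.test)
        mytest-1.0.tar.gz     # extracted into src/ by setup()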
If you have any problems or questions, drop an email to the mailing list (autotest@…) and we'll help you out.