UnitTests
The unit tests and the test data are bundled together in the MDAnalysisTests package. In order to run the tests, this package must be installed in addition to MDAnalysis.
The tests also rely on the nose and numpy packages, and require both to run.
Install MDAnalysisTests via
pip install --upgrade MDAnalysisTests
or download the tar file, unpack, and run python setup.py install.
Find the path to your MDAnalysisTests install and run the tests with
path_to_MDAnalysisTests/mda_nosetests -v --parallel-processes=4 --process-timeout=120
(You can increase the number of parallel processes depending on the number of cores available; with 12 cores, the test suite runs within ~90 seconds.)
The mda_nosetests test runner can be found under the MDAnalysisTests install directory. You can find it by running python -c 'import MDAnalysisTests; print MDAnalysisTests.__path__'.
nose's own nosetests can be used instead of mda_nosetests, albeit with limited functionality.
To run in serial mode (takes almost 30 minutes):
path_to_MDAnalysisTests/mda_nosetests -v
All tests should pass (i.e. no FAIL, ERROR, or MEMLEAK); SKIPPED or KNOWNFAILURE are OK. If something fails or gives an error, ask on the user mailing list or raise an issue; if the failure is in code you are developing, fix your code (or raise an issue).
Do not push code that fails to the development branch. Instead, push it to a feature or issue branch; Travis CI will automatically run the unit tests on it, and it will be available for comment and discussion by other developers.
Use the tests from the git source repository, which are located in the testsuite/MDAnalysisTests directory:
cd testsuite/MDAnalysisTests
./mda_nosetests -v --parallel-processes=4 --process-timeout=120
(Try increasing the number of processes; with 24 processes on 12 cores (+hyperthreading) this took ~40 seconds; in serial it takes ~30 min).
You can install MDAnalysisTests and then run the tests anywhere. Extra functionality, afforded by our nose plugins, is added only if the tests are run through the mda_nosetests script, or by directly invoking MDAnalysis.tests.test() (which is what the 3-line mda_nosetests script does under the hood).
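For reference, a wrapper in that spirit could be as short as the following; this is only a sketch of the idea, not the shipped script verbatim:

```python
#!/usr/bin/env python
# Sketch of a minimal test-runner wrapper in the spirit of mda_nosetests
# (illustrative only; the script shipped with MDAnalysisTests may differ).
import sys

from MDAnalysis.tests import test

# forward the command-line flags to the wrapper function
test(argv=sys.argv)
```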
You can run all tests in serial from the command line, like this:
python -c 'from MDAnalysis.tests import test; test(argv=["--exe", "-v"])'
or, equivalently (assuming mda_nosetests is in your path):
mda_nosetests --exe -v
or from within the Python interpreter: start python or ipython and type (the >>> is the prompt and should not be typed!):
>>> import MDAnalysis.tests
>>> MDAnalysis.tests.test(argv=['--exe', '-v'])
nose's nosetests script can also be used (just make sure you are running the right version):
nosetests --exe -v MDAnalysisTests
but you'll miss out on neat knownfailure output, stderr silencing, and the ability to test memleaks. See below for a detailed comparison of nosetests and mda_nosetests. Any flags accepted by nosetests can also be passed to mda_nosetests or to the argv argument of MDAnalysis.tests.test().
(The flag --exe, or argv=['--exe'], ensures that the tests also run on Linux; see below for details.) The tests take a few minutes. Check that you only get OK (shown as a dot, ".") or known failures (letter "K"). A "DeprecationWarning" or a "RuntimeWarning" is not a problem. Failures (letter "F"), errors (letter "E"), or memleaks (letter "M") are bad. If you cannot figure out for yourself where the problems come from, ask a question on the discussion group, including your error output and notes on which version of MDAnalysis and operating system you're using.
For more details see below.
Examples of output in various modes follow. Note that the "not verbose" mode is mostly used here. For debugging, verbose mode is more useful, as one can identify failing tests while they are running.
For example, a successful test run might look like the following:
>>> MDAnalysis.tests.test()
......S...S............................................................................................................................................................K.KK...........................................................................................................................................................................K...............................................................................................................................................................................................................K..............................................................K............................................................................................................................................................................................................................................................................................................................................................................K......K....................K.....................K...................K....................K...........................K...................K...................K.............................................................K...................K........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
----------------------------------------------------------------------
Ran 1772 tests in 266.341s
OK (KNOWNFAIL=17, SKIP=2)
Running tests in parallel is much faster, especially on a multi-core machine. As an example for a 12-core machine:
>>> import MDAnalysis.tests
>>> MDAnalysis.tests.test(argv=["--processes=12", "--process-timeout=120"])
S.S...........................................................................................................................................................................................................................................................................................................................................K.........................................................................................K..........................................................................K.....................................................................................................................................................................................................................................................K.KK...................................................................................................................................K..................K.....................K...................K..........................K............................K..................K.............................K.................................................................................K...................K.................................................................................................................K................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
----------------------------------------------------------------------
Ran 1791 tests in 44.828s
OK (KNOWNFAIL=17, SKIP=2)
Beware that parallel unit tests tend to fail due to timeouts unless the --process-timeout flag is used.
See also: official docs for the process timeout flag
We test code coverage of the unit tests with the coverage plugin of nose. Currently, this is done automatically as part of the Travis job on Python 2.7 and is viewable on coveralls.
If you want to generate a coverage report manually you can run
cd testsuite
rm -f .coverage .noseids testing.log
./mda_nosetests -v --with-id \
--with-coverage --cover-erase --cover-html-dir=htmlcov --cover-html --cover-package=MDAnalysis \
MDAnalysisTests/test_*.py \
2>&1 | tee testing.log
We are borrowing some of NumPy's testing framework; thus, numpy must be installed for the tests to run at all. The tests require at least numpy 1.5.
Run all the tests with
import MDAnalysis.tests
MDAnalysis.tests.test()
Some tests can take a few seconds; in order to skip the slow tests, run
MDAnalysis.tests.test(label='fast')
Additional information is displayed at a higher verbosity level (the default is 0):
MDAnalysis.tests.test(label='fast', argv=['--verbosity=1'])
Note that if no tests are being run, then one might have to run the tests with the --exe flag:
MDAnalysis.tests.test(label='fast', argv=['--exe'])
(This happens when Python files are installed with the executable bit set. By default the nose testing framework refuses to use those files and must be encouraged to do so with the --exe switch.)
See nose commandline options for additional options that can be used; for instance, code coverage can also be checked:
MDAnalysis.tests.test(argv=['--exe', '--with-coverage'])
Instead of running tests from within Python, one can also run them via the mda_nosetests script that ships with MDAnalysisTests. With version 0.11 the test subsystem was overhauled to allow the incorporation of customized nose plugins. In order for them to work, tests must be invoked via our own wrapper function MDAnalysis.tests.test(). This is what the mda_nosetests script does for you. Alternatively, you can call MDAnalysis.tests.test() from the interpreter. mda_nosetests strives to be compatible and interchangeable with nose's nosetests script, with added functionality.
Go into the tests directory (or the package root):
cd testsuite/MDAnalysisTests
and invoke ./mda_nosetests directly to run all tests on two processors in parallel (the "%" is the shell prompt and should not be typed):
% ./mda_nosetests --processes=2 --process-timeout=120
(When the -v flag is added, more verbose output is produced.)
The mda_nosetests script can be run from anywhere. It defaults to testing the MDAnalysisTests package if no other target is given.
When you have written a new unit test it is helpful to check that it passes without running the entire suite. For example, in order to test everything in, say, test_selections.py, run
% ./mda_nosetests test_selections
..............
----------------------------------------------------------------------
Ran 14 tests in 3.421s
OK
One can also test individual test classes. For instance, after working on the XYZReader one can check just the TestCompressedXYZReader tests with
% ./mda_nosetests test_coordinates:TestCompressedXYZReader
....
----------------------------------------------------------------------
Ran 4 tests in 0.486s
OK
where we are testing the class TestCompressedXYZReader, which can be found in the module (file) test_coordinates.py.
If you just installed the MDAnalysisTests package, you can also simply run
% path/to/MDAnalysisTests/mda_nosetests -v
Setuptools can also use nose directly (and it takes care of having all the libraries in place):
python setup.py nosetests
If you have the coverage package installed, you can also check code coverage of the tests:
python setup.py nosetests --with-coverage --cover-package=MDAnalysis --cover-erase --cover-tests
mda_nosetests and MDAnalysis.tests.test() were designed to blend as much as possible with the standard use of nose's nosetests script. Any flags accepted by nosetests can also be passed to mda_nosetests or to the argv argument of MDAnalysis.tests.test().
Extra flags are available for the plugins:
- --with-memleak: enable test-by-test memleak checking;
- --no-errorcapture: disable stderr silencing;
- --no-knownfail: disable special treatment of KnownFailureTest exceptions, which will then be reported as regular failures.
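For example, to run the suite with memleak checking enabled and stderr silencing turned off, combine the flags above with the usual verbose flag:
mda_nosetests --with-memleak --no-errorcapture -v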
Additionally, MDAnalysis.tests.test() no longer calls numpy's test wrapper, the default of which was to request all stdout to be printed (resulting in quite noisy tests). To enable stdout output with mda_nosetests, use the standard nose flag -s.
Tests can still be run with nose's nosetests. In this case the above plugins are disabled (knownfailure will then default to skipping a test; it won't make it fail). Other than that lack of functionality and output, testing should behave the same.
Finally, the default behavior of mda_nosetests when called without a target package/test is to test the MDAnalysisTests package. This, of course, differs from the behavior of nosetests.
Up to MDAnalysisTests version 0.11, numpy's wrapper was used to run tests when invoked through MDAnalysis.tests.test() (but not through nosetests). This has now been replaced by our own wrapper.
The main differences are that numpy-specific arguments to MDAnalysis.tests.test() are now either emulated or simply not implemented. Here's a list of the ones most commonly used with MDAnalysisTests:
- label: allows the selection of tests based on whether or not they were decorated with the @dec.slow decorator. Current behavior recognizes only label='fast'. Any other label (or its absence) defaults to running all tests.
- verbose: this argument is no longer accepted. Pass one of -v, --verbose, or --verbosity=n in argv.
- extra_argv: allows an extra list of flags to be passed to nose. It is still accepted, but deprecated in favor of nose's identical argv argument.
Any other numpy-specific arguments will not be accepted and will cause the test run to fail.
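As an illustration of the emulated arguments, the following two calls should be equivalent under the new wrapper; the extra_argv form still works but is deprecated:

```python
import MDAnalysis.tests

# deprecated numpy-style keyword (still accepted for now):
MDAnalysis.tests.test(label='fast', extra_argv=['-v'])

# preferred form: pass all nose flags through argv
MDAnalysis.tests.test(label='fast', argv=['-v'])
```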
Under numpy the behavior was not to silence any test stdout output. The behavior now is to silence it; this can be reversed with the -s flag.
The simulation data used in tests are all released under the same license as MDAnalysis or are in the Public Domain (such as PDBs from the Protein Databank). An incomplete list of sources:
- from Beckstein et al. (2009) (adk.psf, adk_dims.dcd)
  - adk_dims: trajectory of a macromolecular transition of the enzyme adenylate kinase between a closed and an open conformation. The simulation was run in CHARMM c35a1.
- unpublished simulations (O. Beckstein)
  - adk_oplsaa: ten frames from the first 1 ns of an equilibrium trajectory of AdK in water with Na+ counter ions. The OPLS/AA forcefield is used with the TIP4P water model. The simulation was run with Gromacs 4.0.2.
- contributions from developers and users
- Protein Databank
- O. Beckstein, E.J. Denning, J.R. Perilla and T.B. Woolf, Zipping and Unzipping of Adenylate Kinase: Atomistic Insights into the Ensemble of Open-Closed Transitions. J Mol Biol 394 (2009), 160--176, doi:10.1016/j.jmb.2009.09.009
The tests are in a separate package, together with any data files required for running the tests (see Issue 87 for details). Whenever you add a new feature to the code you should also add a test case (ideally, in the same git commit so that the code and the test case are treated as one unit).
The unit tests use the unittest module together with nose. See the examples in the MDAnalysisTests package.
The SciPy testing guidelines are a good howto for writing test cases, especially as we are directly using this framework (imported from numpy).
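As a hypothetical illustration (the module name test_example.py and the class TestExampleUniverse are made up for this sketch; real tests draw their inputs from MDAnalysisTests.datafiles), a minimal test module in this style could look like:

```python
# test_example.py -- hypothetical sketch of a nose/unittest-style test module.
from numpy.testing import TestCase, assert_equal

import MDAnalysis
from MDAnalysisTests.datafiles import PSF, DCD  # the AdK test files

class TestExampleUniverse(TestCase):
    def setUp(self):
        # build a fresh Universe for every test method
        self.universe = MDAnalysis.Universe(PSF, DCD)

    def tearDown(self):
        del self.universe

    def test_n_atoms(self):
        # the AdK test system contains 3341 atoms
        assert_equal(len(self.universe.atoms), 3341)

    def test_n_frames(self):
        # adk_dims.dcd holds 98 frames
        assert_equal(len(self.universe.trajectory), 98)
```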
Conventions for MDAnalysis
- Relative import statements are now banned from unit testing modules (see Issue #189 for details)
- Test input data is stored in MDAnalysisTests/data.
- Keep files small if possible; for trajectories 10 frames or less are sufficient.
- Add the file name of test data files to MDAnalysisTests/datafiles.py (see the code for details).
- Add the file(s) or a glob pattern to package_data in setup.py; otherwise the file will not be included in the Python package.
- If you use data from a published paper, then add a reference to this wiki page and the doc string in MDAnalysisTests/__init__.py.
- Tests are currently organized by top-level module. Each file containing tests must start with test_ by convention (this is how nose/unittest discovers them). Tests themselves also have to follow the appropriate naming conventions. See the docs above or the source.
- Tests that take longer than 3 seconds to run should be marked @slow (see e.g. the XTC tests in MDAnalysisTests/test_coordinates.py); a minimal sketch follows this list. They will only be run if label="full" is given as an argument to the test() function.
- Add a test for:
  - new functionality
  - fixed issues (typically named test_IssueXX or referencing the issue in the doc string) to avoid regressions
  - anything you think worthwhile – the more the better!
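As referenced in the list above, here is a minimal sketch of marking a slow test with the @dec.slow decorator from numpy's testing framework; the test body is just a stand-in for genuinely expensive work:

```python
# Sketch of marking a slow test (hypothetical test body).
from numpy.testing import dec, assert_equal

@dec.slow
def test_expensive_computation():
    # stand-in for work that takes longer than ~3 seconds
    total = sum(range(10 ** 7))
    assert_equal(total, 49999995000000)
```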
The way we organized the unit tests changed between releases. The procedure for the current release is detailed at the very top of the page. The following list is for historical reference and in case you ever want to go back to a previous release.
- since 0.11.0: the testing subsystem was overhauled to allow the use of plugins external to nose. We also no longer use numpy's test() wrapper. mda_nosetests is now the preferred way to run the tests from the command line, in a mostly backward-compatible way with the usage of nosetests. Most numpy-specific arguments to test() are now deprecated in favor of nose flags.
- since 0.7.5: tests and data are together in package MDAnalysisTests. See Issue 87 for details.
- release 0.7.4: tests are in MDAnalysis and data is in MDAnalysisTestData (for MDAnalysis == 0.7.4). To install MDAnalysisTestData, download MDAnalysisTestData-0.7.4.tar.gz from the Download section or try easy_install http://mdanalysis.googlecode.com/files/MDAnalysisTestData-0.7.4.tar.gz
- release 0.6.1 to 0.7.3: tests and data were included with MDAnalysis
- release 0.4 to 0.6.0: no tests included