diff --git a/docs/concepts/testing/testing.rst b/docs/concepts/testing/testing.rst
index 9d448afd5bdc..70ef1115d284 100644
--- a/docs/concepts/testing/testing.rst
+++ b/docs/concepts/testing/testing.rst
@@ -1,4 +1,3 @@
-#######
 Testing
 #######

@@ -7,7 +6,7 @@ Testing
    :depth: 3

 Overview
-========
+********

 We maintain two kinds of tests: unit tests and integration
 tests.
@@ -26,10 +25,10 @@ tests.
 Most of our tests are unit tests or integration tests.

 Test Types
-----------
+==========

 Unit Tests
-~~~~~~~~~~
+----------

 - Each test case should be concise: setup, execute, check, and teardown.
   If you find yourself writing tests with many steps,
@@ -38,18 +37,18 @@ Unit Tests

 - As a rule of thumb, your unit tests should cover every code branch.

-- Mock or patch external dependencies. We use the voidspace `Mock Library`_.
+- Mock or patch external dependencies using `unittest.mock`_ functions (see
+  the sketch below).

 - We unit test Python code (using `unittest`_) and Javascript (using
   `Jasmine`_)

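+A minimal sketch of a concise test in this style (setup, execute, check),
+with the external dependency patched out via `unittest.mock`_; the
+``notify`` helper is hypothetical, invented for this example::
+
+   from unittest import TestCase
+   from unittest.mock import patch
+   import smtplib
+
+   def notify(address):
+       # External dependency: talks to a real SMTP server.
+       with smtplib.SMTP("localhost") as smtp:
+           smtp.sendmail("noreply@example.com", [address], "Welcome!")
+
+   class NotifyTest(TestCase):
+       def test_notify_sends_one_message(self):
+           # Setup: patch the SMTP class so no real connection is made.
+           with patch("smtplib.SMTP") as mock_smtp:
+               notify("student@example.com")  # Execute.
+               # Check: exactly one message was handed to the client.
+               client = mock_smtp.return_value.__enter__.return_value
+               client.sendmail.assert_called_once()
+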
-.. _Mock Library: http://www.voidspace.org.uk/python/mock/
+.. _unittest.mock: https://docs.python.org/3/library/unittest.mock.html
 .. _unittest: http://docs.python.org/2/library/unittest.html
 .. _Jasmine: http://jasmine.github.io/

 Integration Tests
-~~~~~~~~~~~~~~~~~
+-----------------

 - Test several units at the same time. Note that you can still mock or patch
   dependencies that are not under test! For example, you might test that
@@ -67,7 +66,7 @@ Integration Tests

 .. _Django test client: https://docs.djangoproject.com/en/dev/topics/testing/overview/

 Test Locations
---------------
+==============

 - Python unit and integration tests: Located in subpackages called
   ``tests``. For example, the tests for the ``capa`` package are
@@ -80,14 +79,29 @@ Test Locations
   the test for ``src/views/module.js`` should be written in
   ``spec/views/module_spec.js``.

-Running Tests
-=============
+Factories
+=========

-**Unless otherwise mentioned, all the following commands should be run from inside the lms docker container.**
+Many tests delegate set-up to a "factory" class. For example, there are
+factories for creating courses, problems, and users. This keeps set-up
+logic out of the tests themselves.
+
+Factories are often implemented using `FactoryBoy`_.
+
+In general, factories should be located close to the code they use. For
+example, the factory for creating problem XML definitions is located in
+``xmodule/capa/tests/response_xml_factory.py`` because the
+``capa`` package handles problem XML.
+
+.. _FactoryBoy: https://readthedocs.org/projects/factoryboy/
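+
+As an illustration, a factory for a plain ``User`` class might look like this
+(a generic sketch, not one of edx-platform's actual factories)::
+
+   import factory
+
+   class User:
+       """A stand-in model, defined here only for the example."""
+       def __init__(self, username, email):
+           self.username = username
+           self.email = email
+
+   class UserFactory(factory.Factory):
+       class Meta:
+           model = User
+
+       username = factory.Sequence(lambda n: f"user{n}")
+       email = factory.LazyAttribute(lambda o: f"{o.username}@example.com")
+
+   # In a test: get a fully populated object, overriding only what matters.
+   user = UserFactory(username="alice")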

 Running Python Unit tests
--------------------------
+*************************
+
+The following commands need to be run within a Python environment in
+which requirements/edx/testing.txt has been installed. If you are using a
+Docker-based Open edX distribution, then you will probably want to run these
+commands within the LMS and/or CMS Docker containers.

 We use `pytest`_ to run Python tests. Pytest is a testing framework for Python and should be your go-to for local Python unit testing.
@@ -97,16 +111,16 @@ Pytest (and all of the plugins we use with it) has a lot of options. Use `pytest

 Running Python Test Subsets
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
+===========================

 When developing tests, it is often helpful to be able to really just run one
 single test without the overhead of PIP installs, UX builds, etc.

 Various ways to run tests using pytest::

-   pytest path/test_m­odule.py # Run all tests in a module.
-   pytest path/test_m­odule.p­y:­:te­st_func # Run a specific test within a module.
-   pytest path/test_m­odule.p­y:­:Te­stC­las­s # Run all tests in a class
-   pytest path/test_m­odule.p­y:­:Te­stC­las­s::­tes­t_m­ethod # Run a specific method of a class.
+   pytest path/test_module.py # Run all tests in a module.
+   pytest path/test_module.py::test_func # Run a specific test within a module.
+   pytest path/test_module.py::TestClass # Run all tests in a class
+   pytest path/test_module.py::TestClass::test_method # Run a specific method of a class.
    pytest path/testing/ # Run all tests in a directory.
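+
+pytest can also select tests by name with its ``-k`` keyword filter; for
+example (the path and names here are illustrative)::
+
+   pytest path/test_module.py -k "test_func or TestClass"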

 For example, this command runs a single python unit test file::
@@ -114,7 +128,7 @@ For example, this command runs a single python unit test file::

     pytest xmodule/tests/test_stringify.py

 Note -
-edx-platorm has multiple services (lms, cms) in it. The environment for each service is different enough that we run some tests in both environments in Github Actions.
+edx-platform has multiple services (lms, cms) in it. The environment for each service is different enough that we run some tests in both environments in GitHub Actions. To test in each of these environments (especially for tests in "common" and "xmodule" directories), you will need to test in each separately.

 To specify that the tests are run with the relevant service as root, add the ``--rootdir`` flag at the end of your pytest call and specify the env to test in::
@@ -139,7 +153,7 @@ Various tools like ddt create tests with very complex names, rather than figurin

     pytest xmodule/tests/test_stringify.py --collectonly

 Testing with migrations
-***********************
+=======================

 For the sake of speed, by default the python unit test database tables
 are created directly from apps' models. If you want to run the tests
@@ -149,7 +163,7 @@ against a database created by applying the migrations instead, use the

     pytest test --create-db --migrations

 Debugging a test
-~~~~~~~~~~~~~~~~
+================

 There are various ways to debug tests in Python and more specifically with pytest:
@@ -173,7 +187,7 @@ There are various ways to debug tests in Python and more specifically with pytes

 How to output coverage locally
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+==============================

 These are examples of how to run a single test and get coverage::
@@ -221,233 +235,82 @@ run one of these commands::

 Debugging Unittest Flakiness
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-As we move over to running our unittests with Jenkins Pipelines and pytest-xdist,
-there are new ways for tests to flake, which can sometimes be difficult to debug.
-If you run into flakiness, check (and feel free to contribute to) this
-`confluence document `__ for help.
-
-Running Javascript Unit Tests
------------------------------
-
-Before running Javascript unit tests, you will need to be running Firefox or Chrome in a place visible to edx-platform. If running this in devstack, you can run ``make dev.up.firefox`` or ``make dev.up.chrome``. Firefox is the default browser for the tests, so if you decide to use Chrome, you will need to prefix the test command with ``SELENIUM_BROWSER=chrome SELENIUM_HOST=edx.devstack.chrome`` (if using devstack).
-
-We use Jasmine to run JavaScript unit tests. To run all the JavaScript
-tests::
-
-   paver test_js
-
-To run a specific set of JavaScript tests and print the results to the
-console, run these commands::
-
-   paver test_js_run -s lms
-   paver test_js_run -s cms
-   paver test_js_run -s cms-squire
-   paver test_js_run -s xmodule
-   paver test_js_run -s xmodule-webpack
-   paver test_js_run -s common
-   paver test_js_run -s common-requirejs
-
-To run JavaScript tests in a browser, run these commands::
-
-   paver test_js_dev -s lms
-   paver test_js_dev -s cms
-   paver test_js_dev -s cms-squire
-   paver test_js_dev -s xmodule
-   paver test_js_dev -s xmodule-webpack
-   paver test_js_dev -s common
-   paver test_js_dev -s common-requirejs
-
-Debugging Specific Javascript Tests
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The best way to debug individual tests is to run the test suite in the browser and
-use your browser's Javascript debugger. The debug page will allow you to select
-an individual test and only view the results of that test.
-
-
-Debugging Tests in a Browser
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-To debug these tests on devstack in a local browser:
-
-* first run the appropriate test_js_dev command from above
-* open http://localhost:19876/debug.html in your host system's browser of choice
-* this will run all the tests and show you the results including details of any failures
-* you can click on an individually failing test and/or suite to re-run it by itself
-* you can now use the browser's developer tools to debug as you would any other JavaScript code
-
-Note: the port is also output to the console that you ran the tests from if you find that easier.
-
-These paver commands call through to Karma. For more
-info, see `karma-runner.github.io `__.
-
-Testing internationalization with dummy translations
-----------------------------------------------------
-
-Any text you add to the platform should be internationalized. To generate translations for your new strings, run the following command::
-
-   paver i18n_dummy
-
-This command generates dummy translations for each dummy language in the
-platform and puts the dummy strings in the appropriate language files.
-You can then preview the dummy languages on your local machine and also in your sandbox, if and when you create one.
-
-The dummy language files that are generated during this process can be
-found in the following locations::
-
-   conf/locale/{LANG_CODE}
-
-There are a few JavaScript files that are generated from this process. You can find those in the following locations::
-
-   lms/static/js/i18n/{LANG_CODE}
-   cms/static/js/i18n/{LANG_CODE}
-
-Do not commit the ``.po``, ``.mo``, ``.js`` files that are generated
-in the above locations during the dummy translation process!
+============================

-Test Coverage and Quality
--------------------------
+See this `confluence document `_.

-Viewing Test Coverage
-~~~~~~~~~~~~~~~~~~~~~
+Running JavaScript Unit Tests
+*****************************

-We currently collect test coverage information for Python
-unit/integration tests.
+Before running JavaScript unit tests, you will need to be running Firefox or Chrome in a place visible to edx-platform.
+If you are using Tutor Dev to run edx-platform, then you can do so by installing and enabling the
+``test-legacy-js`` plugin from `openedx-tutor-plugins`_, and then rebuilding
+the ``openedx-dev`` image::

-To view test coverage:
+   tutor plugins install https://github.com/openedx/openedx-tutor-plugins/tree/main/plugins/tutor-contrib-test-legacy-js
+   tutor plugins enable test-legacy-js
+   tutor images build openedx-dev

-1. Run the test suite with this command::
+.. _openedx-tutor-plugins: https://github.com/openedx/openedx-tutor-plugins/
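+
+Once the image is rebuilt, one way to run the JavaScript test commands below
+is through ``tutor dev run`` (a sketch; the exact invocation depends on your
+Tutor setup)::
+
+   tutor dev run lms npm run test-karma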

-   paver test
+We use Jasmine (via Karma) to run most JavaScript unit tests. We use Jest to
+run a small handful of additional JS unit tests. You can use the ``npm run
+test*`` commands to run them::

-2. Generate reports with this command::
+   npm run test-karma # Run all Jasmine+Karma tests.
+   npm run test-jest # Run all Jest tests.
+   npm run test # Run both of the above.

-   paver coverage
+The Karma tests are further broken down into three types depending on how the
+JavaScript under test is built::

-3. Reports are located in the ``reports`` folder. The command generates
-   HTML and XML (Cobertura format) reports.
+   npm run test-karma-vanilla # Our very oldest JS, which doesn't even use RequireJS
+   npm run test-karma-require # Old JS that uses RequireJS
+   npm run test-karma-webpack # Slightly "newer" JS which is built with Webpack

-Python Code Style Quality
-~~~~~~~~~~~~~~~~~~~~~~~~~
+Unfortunately, at the time of writing, the build for the ``test-karma-webpack``
+tests is broken. The tests are excluded from ``npm run test-karma`` so as not
+to fail CI. We `may fix this one day`_.

-To view Python code style quality (including PEP 8 and pylint violations) run this command::
+.. _may fix this one day: https://github.com/openedx/edx-platform/issues/35956

-   paver run_quality
+To run all Karma+Jasmine tests for a particular top-level edx-platform folder,
+you can run::

-More specific options are below.
+   npm run test-cms
+   npm run test-lms
+   npm run test-xmodule
+   npm run test-common

-- These commands run a particular quality report::
-
-   paver run_pep8
-   paver run_pylint
-
-- This command runs a report, and sets it to fail if it exceeds a given number
-  of violations::
-
-   paver run_pep8 --limit=800
-
-- The ``run_quality`` uses the underlying diff-quality tool (which is packaged
-  with `diff-cover`_). With that, the command can be set to fail if a certain
-  diff threshold is not met. For example, to cause the process to fail if
-  quality expectations are less than 100% when compared to master (or in other
-  words, if style quality is worse than what is already on master)::
-
-   paver run_quality --percentage=100
-
-- Note that 'fixme' violations are not counted with run\_quality. To
-  see all 'TODO' lines, use this command::
-
-   paver find_fixme --system=lms
-
-  ``system`` is an optional argument here. It defaults to
-  ``cms,lms,common``.
-
-.. _diff-cover: https://github.com/Bachmann1234/diff-cover
-
-
-JavaScript Code Style Quality
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-To view JavaScript code style quality run this command::
-
-   paver run_eslint
-
-- This command also comes with a ``--limit`` switch, this is an example of that switch::
-
-   paver run_eslint --limit=50000
-
-
-Code Complexity Tools
-=====================
-
-Tool(s) available for evaluating complexity of edx-platform code:
-
-
-- `plato `__ for JavaScript code
-  complexity. Several options are available on the command line; see
-  documentation. Below, the following command will produce an HTML report in a
-  subdirectory called "jscomplexity"::
-
-   plato -q -x common/static/js/vendor/ -t common -e .eslintrc.json -r -d jscomplexity common/static/js/
-
-Other Testing Tips
-==================
-
-Connecting to Browser
----------------------
-
-If you want to see the browser being automated for JavaScript,
-you can connect to the container running it via VNC.
-
-+------------------------+----------------------+
-| Browser                | VNC connection       |
-+========================+======================+
-| Firefox (Default)      | vnc://0.0.0.0:25900  |
-+------------------------+----------------------+
-| Chrome (via Selenium)  | vnc://0.0.0.0:15900  |
-+------------------------+----------------------+
-
-On macOS, enter the VNC connection string in Safari to connect via VNC. The VNC
-passwords for both browsers are randomly generated and logged at container
-startup, and can be found by running ``make vnc-passwords``.
-
-Most tests are run in Firefox by default. To use Chrome for tests that normally
-use Firefox instead, prefix the test command with
-``SELENIUM_BROWSER=chrome SELENIUM_HOST=edx.devstack.chrome``
-
-Factories
----------
-
-Many tests delegate set-up to a "factory" class. For example, there are
-factories for creating courses, problems, and users. This encapsulates
-set-up logic from tests.
-
-Factories are often implemented using `FactoryBoy`_.
-
-In general, factories should be located close to the code they use. For
-example, the factory for creating problem XML definitions is located in
-``xmodule/capa/tests/response_xml_factory.py`` because the
-``capa`` package handles problem XML.
-
-.. _FactoryBoy: https://readthedocs.org/projects/factoryboy/
+Finally, if you want to pass any options to the underlying ``node`` invocation
+for Karma+Jasmine tests, you can run one of these specific commands, and put
+your arguments after the ``--`` separator::

-Running Tests on Paver Scripts
-------------------------------
+   npm run test-cms-vanilla -- --your --args --here
+   npm run test-cms-require -- --your --args --here
+   npm run test-cms-webpack -- --your --args --here
+   npm run test-lms-webpack -- --your --args --here
+   npm run test-xmodule-vanilla -- --your --args --here
+   npm run test-xmodule-webpack -- --your --args --here
+   npm run test-common-vanilla -- --your --args --here
+   npm run test-common-require -- --your --args --here

-To run tests on the scripts that power the various Paver commands, use the following command::

-   pytest pavelib
+Code Quality
+************

-Testing using queue servers
----------------------------
+We use several tools to analyze code quality. The full set of them is::

-When testing problems that use a queue server on AWS (e.g.
-sandbox-xqueue.edx.org), you'll need to run your server on your public IP, like so::
+   mypy $PATHS...
+   pycodestyle $PATHS...
+   pylint $PATHS...
+   lint-imports
+   scripts/verify-dunder-init.sh
+   make xsslint
+   make pii_check
+   make check_keywords
+   npm run lint

-   ./manage.py lms runserver 0.0.0.0:8000
+Where ``$PATHS...`` is a list of folders and files to analyze, or nothing if
+you would like to analyze the entire codebase (which can take a while).

-When you connect to the LMS, you need to use the public ip. Use
-``ifconfig`` to figure out the number, and connect e.g. to
-``http://18.3.4.5:8000/``
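+
+For example, to analyze a single package rather than the whole codebase (the
+path here is illustrative)::
+
+   mypy xmodule/capa/
+   pycodestyle xmodule/capa/
+   pylint xmodule/capa/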