A collection of tests that aim to check for consistency across API versions
NB: To check which version of Python `python` points to by default, run:

```shell
foo@bar:~$ python --version
```

If the output is `Python 2.x.x`, please adjust your commands accordingly:

- `python` => `python3`
- `pip` => `pip3`
```shell
# Navigate to the test dir
foo@bar:~$ cd qa-backend-tests

# Create a new virtual env
foo@bar:~$ virtualenv qa-ledger
foo@bar:~$ source qa-ledger/bin/activate

# Install requirements
foo@bar:~$ pip3 install -r requirements.txt

# Make the Robot libraries visible to Python
foo@bar:~$ export PYTHONPATH="${PYTHONPATH}:{/path/to/your/directory}/qa-backend-tests/RobotLibraries"
```
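The `export` line works because directories listed in `PYTHONPATH` are added to Python's import search path (`sys.path`) when the interpreter starts, which is how Robot Framework finds the custom libraries. A minimal sketch of that mechanism, reusing the placeholder path from the command above:

```python
import os

# Placeholder path -- substitute your actual checkout location, as in
# the export command above.
libs_dir = "/path/to/your/directory/qa-backend-tests/RobotLibraries"

# Append the library directory to PYTHONPATH, preserving any existing
# entries (this mirrors what the shell export does).
current = os.environ.get("PYTHONPATH", "")
os.environ["PYTHONPATH"] = current + os.pathsep + libs_dir if current else libs_dir

# Any Python process started from this environment (including robot)
# will now see libs_dir on its import path.
visible = libs_dir in os.environ["PYTHONPATH"].split(os.pathsep)
print("RobotLibraries visible:", visible)
```

Note that the export only affects the current shell session; add it to your shell profile if you want it to persist.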
Create a directory for the test reports:

```shell
foo@bar:~$ cd qa-backend-tests && mkdir reports
```
Run the tests with the following command:

```shell
foo@bar:~$ robot -T -d reports -n noncritical .
```
- `-T`: short for `--timestampoutputs`. Creates reports, logs, etc. with the current timestamp so we don't overwrite existing ones upon execution.
- `-d`: short for `--outputdir`. Tells the framework where to create the report files.
- `-n`: short for `--noncritical`. Tells Robot Framework which tag indicates a non-critical test (I've standardized on `noncritical` to reduce ambiguity).
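For reference, a minimal sketch of how the `noncritical` tag is applied inside a test file (the test name and log message here are hypothetical, not taken from this suite):

```robotframework
*** Test Cases ***
Deprecated Endpoint Still Responds
    [Tags]    noncritical
    Log    Tagged noncritical: a failure here will not fail the critical run.
```

Any test carrying this tag is reported separately and does not affect the overall pass/fail status of the critical test run.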