
Enable automated testing for Public Dashboard #171

Open
iantei opened this issue Oct 17, 2024 · 5 comments

Comments

@iantei
Contributor

iantei commented Oct 17, 2024

Currently, we are testing the changes for the Public Dashboard in the following way:

  1. Load the dataset into MongoDB using the load_mongodump.sh script.
  2. Execute docker-compose:
docker-compose -f docker-compose.yml build
docker-compose -f docker-compose.yml up
  3. Once the docker containers are up, enter the server container:
docker exec -it em-public-dashboard-notebook-server-1 /bin/bash
source setup/activate.sh && cd saved-notebooks
  4. Execute the generate_plots.py script, passing the notebooks as args:
python bin/generate_plots.py generic_metrics.ipynb default
python bin/generate_plots.py generic_metrics_sensed.ipynb default
...
  5. Launch http://localhost:3274 and test the changes there.
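The steps above could be collected into a single wrapper script. A minimal sketch, not the project's actual tooling: it assumes it is run from the em-public-dashboard checkout and that load_mongodump.sh takes the dump path as its first argument, and it exits early when docker-compose is unavailable so it is safe to dry-run.

```shell
#!/usr/bin/env bash
# Hypothetical wrapper for the manual testing steps above (sketch only).
set -euo pipefail

if [ ! -f docker-compose.yml ] || ! command -v docker-compose >/dev/null 2>&1; then
    echo "docker-compose.yml or docker-compose not found; nothing to do"
    exit 0
fi

./load_mongodump.sh "${1:-dump.tar.gz}"       # step 1: load the dataset into MongoDB
docker-compose -f docker-compose.yml build    # step 2: build the images
docker-compose -f docker-compose.yml up -d    #         start the containers detached
# steps 3-4: generate the plots inside the notebook-server container
docker exec em-public-dashboard-notebook-server-1 /bin/bash -c \
    "source setup/activate.sh && cd saved-notebooks && \
     python bin/generate_plots.py generic_metrics.ipynb default && \
     python bin/generate_plots.py generic_metrics_sensed.ipynb default"
echo "done; open http://localhost:3274 to check the results"   # step 5 stays manual
```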
@iantei
Contributor Author

iantei commented Oct 18, 2024

Explored other repositories, primarily e-mission-server, to see how testing is executed automatically there.

  • Use of GitHub Actions

The following approach is used to set up automated testing:

  • Starts with the .github/workflows/test_with_docker.yml
  • This runs: docker compose -f setup/docker-compose.tests.yml up --exit-code-from web-server
  • The docker-compose.tests.yml has the following configuration:
services:
  web-server:
    # builds from tests/Dockerfile
    build: tests
  • The Dockerfile has the following:
COPY start_script.sh /start_script.sh

CMD ["/bin/bash", "/start_script.sh"]

This runs start_script.sh, which has the following:

source setup/setup_tests.sh
source setup/activate_tests.sh
./runAllTests.sh

runAllTests.sh has the following script:

PYTHONPATH=. python -m unittest discover -s emission/tests -p Test*;

This finds all the unit test files whose names start with Test, and executes them.
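As a concrete illustration, a file the discover command above would pick up could look like this. The helper function is hypothetical, not from the actual codebase; the file would live at e.g. emission/tests/TestExample.py.

```python
# Hypothetical emission/tests/TestExample.py: `unittest discover -p Test*`
# imports this module and runs every TestCase it finds. The helper below is
# illustrative only, not a real e-mission-server function.
import unittest

def normalize_mode(mode: str) -> str:
    """Illustrative helper: canonicalize a travel-mode label."""
    return mode.strip().lower().replace(" ", "_")

class TestNormalizeMode(unittest.TestCase):
    def test_strips_and_lowercases(self):
        self.assertEqual(normalize_mode("  Bike Share "), "bike_share")

    def test_idempotent(self):
        self.assertEqual(normalize_mode("bike_share"), "bike_share")
```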

@iantei
Contributor Author

iantei commented Oct 18, 2024

e-mission-server makes use of GitHub Actions and the unittest unit-testing framework to execute its test cases automatically.

@iantei
Contributor Author

iantei commented Oct 18, 2024

For em-public-dashboard, we make extensive use of Jupyter (.ipynb) Python notebooks, along with a few Python .py files.

  • We should write unit test cases too, but I want to explore whether unittest is the right framework to use when we have to test Jupyter notebooks.
  • As I do not have prior experience with unit testing in Python, I will need to explore and evaluate a few options in detail; at a high level, I have come across a few other testing frameworks, such as testbook.
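For reference, testbook drives a notebook from an ordinary test file that a unittest or pytest runner can execute. A sketch: the notebook path and the `mode_labels` variable are assumptions for illustration, and the import is guarded so the sketch degrades gracefully where testbook is not installed.

```python
# Sketch of notebook testing with testbook (third-party: pip install testbook).
# The notebook path and `mode_labels` are assumptions, not actual project names.
try:
    from testbook import testbook
except ImportError:
    testbook = None  # library not installed; sketch only

if testbook is not None:
    # execute=True runs the whole notebook once; tb.ref() then exposes
    # objects defined inside it to assertions in this file.
    @testbook("saved-notebooks/generic_metrics.ipynb", execute=True)
    def test_mode_labels_present(tb):
        labels = tb.ref("mode_labels")
        assert len(labels) > 0
```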

@iantei
Contributor Author

iantei commented Oct 18, 2024

While following the above-mentioned steps from e-mission-server would enable unit testing the em-public-dashboard code, I am also considering how we can automate this process:

Accounting for both cases (automating the current manual process end-to-end, and adopting automated unit testing like on the server side), I would like to propose two tasks at a very high level to accomplish "Enable automated testing for Public Dashboard":

  • Write a script that calls load_mongodump.sh, launches the docker containers, executes generate_plots.py <notebook_names> for every notebook, and logs the success/failure of each notebook. Developers could then easily test different program/study configurations. However, for code changes where, for example, a mode is split in two or merged into Other, I am not sure how we could verify the result by comparing images automatically; such cases might still need manual evaluation.
  • For the evaluation of the alt_text and plots for each chart (in most cases), I think we could write unit tests that check their generation against expected results (I still need to explore how to produce the expected results). Choosing the right testing framework, one that lets us work intuitively with Jupyter notebooks, would be ideal.
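The first task could be sketched as a small driver like the following. The generate_plots.py invocation mirrors the manual steps earlier in this issue; everything else (function names, the injectable runner) is illustrative, and the runner injection lets the loop itself be exercised without Docker or MongoDB.

```python
# Hypothetical sketch of the proposed driver for task 1 (not actual tooling).
import subprocess

NOTEBOOKS = [
    "generic_metrics.ipynb",
    "generic_metrics_sensed.ipynb",
    # ... remaining notebooks
]

def run_notebook(notebook: str, label: str = "default") -> bool:
    """Run generate_plots.py for one notebook; True iff it exited cleanly."""
    result = subprocess.run(["python", "bin/generate_plots.py", notebook, label])
    return result.returncode == 0

def run_all(runner=run_notebook) -> dict:
    """Run every notebook, logging and returning per-notebook success."""
    results = {nb: runner(nb) for nb in NOTEBOOKS}
    for nb, ok in results.items():
        print(f"{'PASS' if ok else 'FAIL'}: {nb}")
    return results
```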

@shankari @JGreenlee Please let me know about your thoughts on this.

@iantei iantei moved this to Questions for Shankari in OpenPATH Tasks Overview Oct 18, 2024
@JGreenlee
Contributor

JGreenlee commented Oct 22, 2024

I think the first step here is to just write unit tests for the functions in scaffolding.py / plots.py

I do think pytest seems like a good idea because it can run the unit tests and also test individual notebooks. It's also simpler and cleaner for writing tests, and anecdotally it seems to be preferred as the "modern" Python unit-testing library

Another note: e-mission-common uses pytest and I've had good luck with it. It's backwards compatible with unittest so you could even have a mix of unittest test files and pytest test files.
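A pytest-style test for such a helper is just plain functions and assert statements. A sketch, where format_pct is a hypothetical stand-in for a function in plots.py / scaffolding.py; pytest would discover any test_*.py file containing these, and would also run unittest-style TestCases alongside them.

```python
# Sketch of pytest-style tests. `format_pct` is a hypothetical stand-in for a
# helper in plots.py / scaffolding.py, not a real project function.
def format_pct(value: float, total: float) -> str:
    """Illustrative helper: render value as a percentage of total."""
    if total == 0:
        return "0%"
    return f"{100 * value / total:.1f}%"

def test_format_pct_basic():
    assert format_pct(25, 100) == "25.0%"

def test_format_pct_zero_total():
    assert format_pct(5, 0) == "0%"
```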
