From 1150a57397e5560a74b1e5e81f817e94884b5fdc Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Micha=C5=82=20Godek?= Date: Sun, 10 Nov 2024 17:37:24 +0100 Subject: [PATCH] basic gathering of docs --- .github/workflows/docs.yml | 8 - docs/backend/for_developers.md | 248 ++++++ docs/backend/ghcr_packages.md | 38 + docs/backend/index.md | 12 +- docs/backend/mkdocs.md | 94 +++ docs/backend/openapi.yaml | 1290 +++++++++++++++++++++++++++++++ docs/backend/persistency.md | 349 +++++++++ docs/backend/states.md | 38 + docs/backend/swagger.md | 1 + docs/backend/using_docker.md | 51 ++ docs/converter/index.md | 5 + docs/converter/readme.md | 51 ++ docs/documentation/index.md | 96 +++ docs/frontend/authentication.md | 74 ++ docs/frontend/examples.md | 18 + docs/frontend/for_developers.md | 126 +++ docs/frontend/index.md | 10 +- docs/other/index.md | 1 - mkdocs.yml | 15 +- 19 files changed, 2512 insertions(+), 13 deletions(-) create mode 100644 docs/backend/for_developers.md create mode 100644 docs/backend/ghcr_packages.md create mode 100644 docs/backend/mkdocs.md create mode 100644 docs/backend/openapi.yaml create mode 100644 docs/backend/persistency.md create mode 100644 docs/backend/states.md create mode 100644 docs/backend/swagger.md create mode 100644 docs/backend/using_docker.md create mode 100644 docs/converter/index.md create mode 100644 docs/converter/readme.md create mode 100644 docs/documentation/index.md create mode 100644 docs/frontend/authentication.md create mode 100644 docs/frontend/examples.md create mode 100644 docs/frontend/for_developers.md delete mode 100644 docs/other/index.md diff --git a/.github/workflows/docs.yml b/.github/workflows/docs.yml index d026e69..023b031 100644 --- a/.github/workflows/docs.yml +++ b/.github/workflows/docs.yml @@ -4,16 +4,8 @@ on: # Runs on pushes targeting the default branch push: branches: ["main"] - paths: - - 'docs/**' - - 'pyproject.toml' - - 'mkdocs.yml' pull_request: branches: [main] - paths: - - 'docs/**' - - 'pyproject.toml' - 
- 'mkdocs.yml'

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
diff --git a/docs/backend/for_developers.md b/docs/backend/for_developers.md
new file mode 100644
index 0000000..17ffa65
--- /dev/null
+++ b/docs/backend/for_developers.md
@@ -0,0 +1,248 @@
+# For developers
+
+The project uses Poetry for dependency management. If you do not have it installed, check the official [poetry installation guide](https://python-poetry.org/docs/).
+The project is configured to create a virtual environment for you, so you do not need to worry about it.
+The virtual environment is created in the `.venv` folder in the root of the project.
+
+## Installing dependencies
+
+To install all dependencies, run:
+
+```bash
+poetry install
+```
+
+This will install all the dependencies, including the `test` and `docs` ones.
+If you want to test the app, you do not need the `docs` dependencies; you can skip them by using:
+
+```bash
+poetry install --without docs
+```
+
+If you want to install only the main and test dependencies, you can use:
+
+```bash
+poetry install --only main,test
+```
+(There must be no space after the comma in the command above.)
+
+## Building and running the app
+
+The application consists of multiple components. The following instructions will guide you through setting up and running the application.
+
+Here is a flowchart that shows the dependencies between the different components of the application.
+
+```mermaid
+flowchart LR
+    id0[Redis]-->id1[Celery simulation worker]-->id2[Flask app]
+    id2-->id0
+```
+
+1. Download the SHIELD-HIT12A simulator
+
+    Currently, we store the simulator binaries on an S3 filesystem. The SHIELD-HIT12A (full version) and Fluka files are encrypted.
+
+    To start the download, run the following commands:
+
+    === "Linux"
+
+        ```bash
+        poetry run yaptide/admin/simulators.py download-shieldhit --dir bin
+        ```
+
+    === "Windows (PowerShell)"
+
+        ```powershell
+        poetry run yaptide\admin\simulators.py download-shieldhit --dir bin
+        ```
+
+    To see the full usage instructions for this command, type:
+
+    === "Linux"
+
+        ```bash
+        poetry run yaptide/admin/simulators.py
+        ```
+
+    === "Windows (PowerShell)"
+
+        ```powershell
+        poetry run yaptide\admin\simulators.py
+        ```
+
+
+2. Get Redis
+
+    If you already use it, just start it on port `6379`.
+
+    If not, a good solution is Docker; run the following command:
+
+    ```bash
+    docker run --detach --publish 6379:6379 --name yaptide_redis redis:7-alpine
+    ```
+
+    To remove this container, use:
+
+    ```bash
+    docker rm -f yaptide_redis
+    ```
+
+3. Run the Celery simulation worker
+
+    You can reuse the same terminal as for Redis, since Docker sends the Redis process to the background.
+
+    === "Linux"
+
+        ```bash
+        PATH=$PATH:bin BACKEND_INTERNAL_URL=http://127.0.0.1:5000 CELERY_BROKER_URL=redis://127.0.0.1:6379/0 CELERY_RESULT_BACKEND=redis://127.0.0.1:6379/0 poetry run celery --app yaptide.celery.simulation_worker worker --events -P eventlet --hostname yaptide-simulation-worker --queues simulations --loglevel=debug
+        ```
+
+    === "Windows (PowerShell)"
+
+        ```powershell
+        $Env:PATH += ";" + (Join-Path -Path (Get-Location) -ChildPath "bin"); $env:BACKEND_INTERNAL_URL="http://127.0.0.1:5000"; $env:CELERY_BROKER_URL="redis://127.0.0.1:6379/0"; $env:CELERY_RESULT_BACKEND="redis://127.0.0.1:6379/0"; poetry run celery --app yaptide.celery.simulation_worker worker --events -P eventlet --hostname yaptide-simulation-worker --queues simulations --loglevel=debug
+        ```
+
+
+4.
Run the Celery helper worker
+
+    === "Linux"
+
+        ```bash
+        PATH=$PATH:bin FLASK_SQLALCHEMY_DATABASE_URI=sqlite:///db.sqlite BACKEND_INTERNAL_URL=http://127.0.0.1:5000 CELERY_BROKER_URL=redis://127.0.0.1:6379/0 CELERY_RESULT_BACKEND=redis://127.0.0.1:6379/0 poetry run celery --app yaptide.utils.helper_worker worker --events --hostname yaptide-helper-worker --queues helper --loglevel=debug
+        ```
+
+    === "Windows (PowerShell)"
+        ```powershell
+        $Env:PATH += ";" + (Join-Path -Path (Get-Location) -ChildPath "bin"); $env:FLASK_SQLALCHEMY_DATABASE_URI="sqlite:///db.sqlite"; $env:BACKEND_INTERNAL_URL="http://127.0.0.1:5000"; $env:CELERY_BROKER_URL="redis://127.0.0.1:6379/0"; $env:CELERY_RESULT_BACKEND="redis://127.0.0.1:6379/0"; poetry run celery --app yaptide.utils.helper_worker worker --events --hostname yaptide-helper-worker --queues helper --loglevel=debug
+        ```
+
+
+5. Run the app
+
+    === "Linux"
+
+        ```bash
+        FLASK_USE_CORS=True FLASK_SQLALCHEMY_DATABASE_URI="sqlite:///db.sqlite" CELERY_BROKER_URL=redis://127.0.0.1:6379/0 CELERY_RESULT_BACKEND=redis://127.0.0.1:6379/0 poetry run flask --app yaptide.application run
+        ```
+
+    === "Windows (PowerShell)"
+
+        ```powershell
+        $env:FLASK_USE_CORS="True"; $env:FLASK_SQLALCHEMY_DATABASE_URI="sqlite:///db.sqlite"; $env:CELERY_BROKER_URL="redis://127.0.0.1:6379/0"; $env:CELERY_RESULT_BACKEND="redis://127.0.0.1:6379/0"; poetry run flask --app yaptide.application run
+        ```
+
+
+    This command will create `db.sqlite` inside the `./instance` folder. This is the [default Flask behavior](https://flask.palletsprojects.com/en/3.0.x/config/#instance-folders).
+
+    To get more debugging information, you can also force SQLAlchemy to use `echo` mode by setting the `FLASK_SQLALCHEMY_ECHO` environment variable to `True`.
+
+    === "Linux"
+
+        ```bash
+        FLASK_SQLALCHEMY_ECHO=True FLASK_USE_CORS=True FLASK_SQLALCHEMY_DATABASE_URI="sqlite:///db.sqlite" CELERY_BROKER_URL=redis://127.0.0.1:6379/0 CELERY_RESULT_BACKEND=redis://127.0.0.1:6379/0 poetry run flask --app yaptide.application run
+        ```
+
+    === "Windows (PowerShell)"
+
+        ```powershell
+        $env:FLASK_SQLALCHEMY_ECHO="True"; $env:FLASK_USE_CORS="True"; $env:FLASK_SQLALCHEMY_DATABASE_URI="sqlite:///db.sqlite"; $env:CELERY_BROKER_URL="redis://127.0.0.1:6379/0"; $env:CELERY_RESULT_BACKEND="redis://127.0.0.1:6379/0"; poetry run flask --app yaptide.application run
+        ```
+
+    To include debugging messages from Flask, add the `--debug` option to the command.
+
+    While running the backend and frontend, a developer may encounter [Cross-Origin Resource Sharing (CORS)](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) errors in the web browser's console that prevent communication with the server. To resolve these CORS issues, set `FLASK_USE_CORS=True` in the `.env` file (note that it is already included in the commands above). Also pay attention to whether your frontend runs on `http://127.0.0.1:3000` or `http://localhost:3000`, because right now `cors_config` in `application.py` specifies these URLs.
+
+
+## Database
+
+To add a user, run:
+
+=== "Linux"
+
+    ```bash
+    FLASK_SQLALCHEMY_DATABASE_URI="sqlite:///instance/db.sqlite" poetry run yaptide/admin/db_manage.py add-user admin --password password
+    ```
+
+=== "Windows (PowerShell)"
+
+    ```powershell
+    $env:FLASK_SQLALCHEMY_DATABASE_URI="sqlite:///instance/db.sqlite"; poetry run yaptide\admin\db_manage.py add-user admin --password password
+    ```
+
+You can use the following command to get more information:
+
+=== "Linux"
+
+    ```bash
+    FLASK_SQLALCHEMY_DATABASE_URI="sqlite:///instance/db.sqlite" poetry run yaptide/admin/db_manage.py --help
+    ```
+
+=== "Windows (PowerShell)"
+
+    ```powershell
+    $env:FLASK_SQLALCHEMY_DATABASE_URI="sqlite:///instance/db.sqlite"; poetry run yaptide\admin\db_manage.py --help
+    ```
+
+## Testing
+
+To run the tests, use:
+
+=== "Linux"
+
+    ```shell
+    poetry run pytest
+    ```
+
+=== "Windows (PowerShell)"
+    On Windows you need to run the tests one by one:
+
+    ```shell
+    Get-ChildItem -Path "tests" -Filter "test_*.py" -Recurse | foreach { poetry run pytest $_.FullName }
+    ```
+
+
+## Development
+
+To maintain code quality, we use yapf.
+To avoid running it manually, we strongly recommend using pre-commit hooks. To install them, run:
+
+```shell
+poetry run pre-commit install
+```
+
+### Pre-commit Use Cases
+
+- **Commit Changes**: Commit your changes using `git commit` in the terminal or using a GUI Git client in your IDE.
+
+### Case 1: All Hooks Pass Successfully
+
+- **Pre-commit Hooks Run**: Before the commit is finalized, pre-commit will automatically run all configured hooks. If all hooks pass without any issues, the commit proceeds as usual.
+
+### Case 2: Some Hooks Fail
+
+- **Pre-commit Hooks Run**: Before the commit is finalized, pre-commit will automatically run all configured hooks. If one or more hooks fail, pre-commit will abort the commit process.
+
+    - **terminal** - all issues will be listed in the terminal with a `Failed` flag
+    - **VS Code** - you will get an error popup; click on `show command output` and all issues will be presented in the same way as they would appear in the terminal.
+
+- **Fix Issues**: Address the issues reported by the failed hooks. Some hooks automatically format code, so you may not have to change anything. Once the issues are fixed, commit once more.
+
+### YAPF
+
+Our main use of pre-commit is yapf, a Python code formatter that automatically formats Python code according to predefined style guidelines. We can specify styles for yapf in the `[tool.yapf]` section of the `pyproject.toml` file. The goal of using yapf is to always produce code that follows the chosen style guidelines.
+
+### Running pre-commit manually
+
+To manually run all pre-commit hooks on the repository, use:
+```shell
+pre-commit run --all-files
+```
+If you want to run a specific hook, use:
+```shell
+pre-commit run <hook_id>
+```
+Each `hook_id` is specified in the `.pre-commit-config.yaml` file. It is recommended to use these commands after adding a new hook to your config, in order to check already existing files.
+
+### Custom hooks
+
+Pre-commit allows creating custom hooks by writing a script in any language supported by pre-commit and adding it to `.pre-commit-config.yaml`. In yaptide we use a custom hook which checks for non-empty env files. This hook prevents the user from committing and pushing secrets, such as passwords, to the repository.
diff --git a/docs/backend/ghcr_packages.md b/docs/backend/ghcr_packages.md
new file mode 100644
index 0000000..db2d5b3
--- /dev/null
+++ b/docs/backend/ghcr_packages.md
@@ -0,0 +1,38 @@
+# Docker images on GHCR
+
+GitHub Container Registry is an organisation-scoped place where Docker containers can be stored and then freely pulled in GitHub Actions and in solutions like gitpod.io or GitHub Codespaces. Yaptide's packages are private and can be accessed only by the organisation members.
+
+## Deployment
+
+Docker images for the backend (Flask) and the simulation worker can be automatically built and deployed to the ghcr.io registry. Building and deployment are handled by GitHub Actions. There are two methods:
+
+- an automatic action triggered after every commit to the master branch,
+- an on-demand action triggered by a `/deploy-flask` or `/deploy-simulation-worker` comment, typed by a user in the Pull Request discussion.
+
+Images from master provide a way to quickly deploy a stable version of the backend part of the yaptide platform. Images from pull requests allow for fast testing of new features proposed in the PR.
+
+## Usage
+
+All available packages are shown in the [Packages](https://github.com/orgs/yaptide/packages) section of the yaptide organisation in GitHub. The newest master branch image is available with the tag `master`. For pull requests it is the PR number prefixed with `pr-`, e.g. `pr-17`. The corresponding docker pull command can be read after clicking on the package. For this case it would be:
+```bash
+docker pull ghcr.io/yaptide/yaptide-flask:pr-17
+```
+
+Deployed packages can be accessed from gitpod.io or GitHub Codespaces to easily run and test them in pull requests or on the master branch. To pull the image in gitpod.io you might be asked to log in to ghcr.io via Docker using GitHub credentials:
+```bash
+docker login ghcr.io --username
+```
+Then you can pull the images. In GitHub Codespaces the above command is not required.
+
+## Retention policies
+
+GitHub Container Registry doesn't provide any retention mechanisms. It is required to use external solutions and define your own GitHub Actions for this purpose. Both flask and worker images are automatically cleaned up in the registry based on the custom retention policies defined in the `cleanup-closed-pr-packages` and `packages-retention` actions:
+
+- Outdated master packages are removed if they are older than 1 month.
+- A pull request's newest packages are removed when the PR is merged or closed.
+- Outdated pull request packages are removed if they are older than 2 weeks.
+- The latest pull request packages are removed if they are older than 2 months.
+
+It is also possible to run the latter two policies manually by dispatching the `packages-retention` GitHub action. Normally it is dispatched by a cron job every Monday at 04:30 AM.
+
+To delete packages from the ghcr.io registry, it is required to use a PAT token created by an organisation and repository admin with `read:packages` and `delete:packages` permissions. It should be placed in the organisation's secrets. It is not possible to use other kinds of tokens, e.g. the action-scoped `GITHUB_TOKEN` or a fine-grained token.
diff --git a/docs/backend/index.md b/docs/backend/index.md
index 797fb69..ae78464 100644
--- a/docs/backend/index.md
+++ b/docs/backend/index.md
@@ -1,7 +1,15 @@
-# YAPTIDE
+# Yaptide (backend)
+
+Github link: [https://github.com/yaptide/yaptide](https://github.com/yaptide/yaptide)
 
 Developer documentation of the yaptide project
 
 The documentation contains:
 
- * main [documentation for developers](index.md)
+ * [For developers](for_developers.md) - How to build the backend for developers
+ * [Using docker](using_docker.md) - How to build the backend for deployment using Docker
+ * [API reference](swagger.md) - auto-generated from the Swagger YAML (useful for frontend development)
+ * [Jobs and tasks](states.md) - Description of the states of jobs and tasks
+ * [Persistent storage](persistency.md) - Description of the database model
+ * [Docker images on GHCR](ghcr_packages.md) - GitHub Container Registry, deployed to the ghcr.io registry.
+
diff --git a/docs/backend/mkdocs.md b/docs/backend/mkdocs.md
new file mode 100644
index 0000000..19d4d8c
--- /dev/null
+++ b/docs/backend/mkdocs.md
@@ -0,0 +1,94 @@
+# Developer documentation
+
+The documentation intended for developers is located in the `docs` folder.
+We use [mkdocs](https://www.mkdocs.org) with the [material for mkdocs](https://squidfunk.github.io/mkdocs-material/) customisation to generate the documentation in HTML format.
+
+## Documentation structure
+
+### Technical documentation
+
+Technical documentation is written in markdown format and can be found in the [docs folder](https://github.com/yaptide/yaptide/tree/master/docs).
+
+### API reference
+
+The [API reference](swagger.md) is generated from the [swagger](https://swagger.io) yaml file.
+The [swagger.yaml](https://github.com/yaptide/yaptide/blob/master/yaptide/static/openapi.yaml) file is located in the [yaptide/static](https://github.com/yaptide/yaptide/tree/master/yaptide/static) folder. This is the location from which Flask serves it when the backend is deployed.
+
+The HTML API documentation is rendered using the [render_swagger](https://github.com/bharel/mkdocs-render-swagger-plugin) mkdocs plugin, installed as the [mkdocs-render-swagger-plugin](https://pypi.org/project/mkdocs-render-swagger-plugin/) pip package.
+It is a somewhat abandoned project, but it seems to be the only solution for generating static HTML from a swagger yaml file.
+The swagger documentation can be viewed locally by deploying the backend and connecting to the backend server via the `/api/docs` endpoint.
+By using the `mkdocs-render-swagger-plugin` we can serve the documentation statically on GitHub Pages.
+This way users may read the documentation without deploying the backend.
+
+The `mkdocs-render-swagger-plugin` expects the swagger yaml file to be located in the [docs folder](https://github.com/yaptide/yaptide/tree/master/docs). Therefore we modified the [docs/gen_ref_pages.py](https://github.com/yaptide/yaptide/blob/master/docs/gen_ref_pages.py) script to copy the swagger yaml file from the Flask static directory to the docs folder. The copy happens whenever the `mkdocs build` or `mkdocs serve` command is run.
+
+### Code reference
+
+The code reference is generated using the [mkdocs-gen-files](https://github.com/oprypin/mkdocs-gen-files) mkdocs plugin.
+We have a [docs/gen_ref_pages.py](https://github.com/yaptide/yaptide/blob/master/docs/gen_ref_pages.py) script that crawls through all Python files in the [yaptide folder](https://github.com/yaptide/yaptide/tree/master/yaptide). It then generates markdown documentation on the fly from the docstrings of each module, class and function. An on-the-fly `reference/SUMMARY.md` file is also generated, using the [mkdocs-literate-nav](https://github.com/oprypin/mkdocs-literate-nav) mkdocs plugin. This file serves as the left-side menu for the code reference.
+
+### Tests coverage
+
+The test coverage report is generated using the [mkdocs-coverage](https://github.com/pawamoy/mkdocs-coverage) mkdocs plugin. This plugin expects a pytest coverage report in the `htmlcov` directory.
+
+## GitHub Pages deployment of the documentation
+
+GitHub Pages deployment is done using the [GitHub Actions docs workflow](https://github.com/yaptide/yaptide/blob/master/.github/workflows/docs.yml).
+It deploys a new version of the documentation whenever a new commit is pushed to the `master` branch.
+The deployment includes generation of the test coverage report and the API reference documentation.
+
+## Local deployment of the documentation
+
+### Prerequisites
+
+First, the user needs to install [poetry](https://python-poetry.org).
+Then, the user needs to install the dependencies for the backend and the documentation:
+
+```bash
+poetry install --only main,docs
+```
+
+### Building the documentation
+
+To build the documentation, run the following command:
+
+```bash
+poetry run mkdocs build
+```
+
+This will generate the documentation in the `site` folder.
+
+To serve the documentation locally, run the following command:
+
+```bash
+poetry run mkdocs serve
+```
+
+This will start a local webserver on port 8000.
The documentation can be viewed by opening the following url in a browser: http://localhost:8000
+
+### Working with the technical documentation
+
+After modifying a markdown file, the documentation served via the `mkdocs serve` command will be updated automatically.
+
+### Working with the API reference
+
+After modifying the swagger yaml, one needs to stop the `mkdocs serve` command and run it again. This is required because, to re-generate the API reference documentation, mkdocs needs to copy the swagger yaml file from the Flask static directory to the docs folder.
+Please avoid modifying and committing the swagger yaml file in the docs folder, as it will be overwritten by the `mkdocs serve` command.
+
+### Working with the code reference
+
+After modifying the Python code, one needs to stop the `mkdocs serve` command and run it again.
+
+### Working with the tests coverage
+
+To regenerate the test coverage report, one needs to run the following command:
+
+```bash
+poetry run pytest --cov-report html:htmlcov --cov=yaptide
+```
+
+Note that this requires installation of the dependencies for the backend and the tests:
+
+```bash
+poetry install --only main,test
+```
diff --git a/docs/backend/openapi.yaml b/docs/backend/openapi.yaml
new file mode 100644
index 0000000..e8da740
--- /dev/null
+++ b/docs/backend/openapi.yaml
@@ -0,0 +1,1290 @@
+openapi: 3.0.3
+info:
+  title: Yaptide Project Api Documentation
+  version: 1.0.0
+  description: Yaptide Project Api Documentation
+servers:
+  - url: http://localhost:5000
+paths:
+  /:
+    get:
+      security:
+        - basicAuth: [ ]
+      summary: Allows to check if the server is alive
+      description: Allows to check if the server is alive.
If server is down it won't respond to this request + responses: + '200': + description: Successful operation + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + +# Authorisation Routes + /auth/register: + put: + summary: Allows registration of new users + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/RegisterLoginRequest' + responses: + '201': + description: Returns message - User created + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '400': + description: Bad Request - Missing payload or keys in payload + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '403': + description: Forbidden - User already exists + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '500': + description: Internal Server Error + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + /auth/login: + post: + summary: Allows to login the user - server sets refresh and access tokens in cookies + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/RegisterLoginRequest' + responses: + '202': + description: Includes expiration time in milliseconds of new access tokens with message - Successfully logged in + content: + application/json: + schema: + type: object + properties: + message: + type: string + access_exp: + type: integer + refresh_exp: + type: integer + '400': + description: Bad Request - Missing payload or keys in payload + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '401': + description: Forbidden - Invalid login or password + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '500': + description: Internal Server Error + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + /auth/refresh: + get: + 
summary: Allows to refresh access token - server sets new access token in cookies + responses: + '200': + description: Includes expiration time in milliseconds of new access token + content: + application/json: + schema: + type: object + properties: + message: + type: string + access_exp: + type: integer + '401': + description: Unauthorized - No token provided + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '403': + description: Forbidden - User not found or log in required + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '500': + description: Internal Server Error + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + /auth/status: + get: + summary: Allows to retrieve logged in user information + responses: + '200': + description: Includes logged in user information, shown in example response below + content: + application/json: + schema: + type: object + properties: + message: + type: string + username: + type: string + '401': + description: Unauthorized - Invalid credentials + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '403': + description: Forbidden - User not found, log in required or token refresh required + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '500': + description: Internal Server Error + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + /auth/logout: + delete: + summary: Allows to logout the user - server removes access and refresh tokens from cookies. 
If nobody was logged in then just nothing happens + responses: + '200': + description: User was successfully logged out + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + +# Keycloak auth Routes + /auth/keycloak: + post: + summary: Allows to login the user with keycloak credentials - server sets refresh and access tokens in cookies + description: At current state this endpoint works only with server running on C3 + parameters: + - in: header + name: Authorization + schema: + type: string + required: true + description: keycloak token + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/KeycloakRequest' + responses: + '202': + description: Includes expiration time in milliseconds of new access token + content: + application/json: + schema: + type: object + properties: + message: + type: string + access_exp: + type: integer + '400': + description: Bad Request - Missing payload or keys in payload + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '401': + description: Forbidden - No token provided + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '500': + description: Internal Server Error + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + +# User Routes + /user/simulations: + get: + summary: Allows for getting user's simulations + parameters: + - in: query + name: page_size + schema: + type: integer + description: Specifies the page size from range [0,100] - incorrect or + non provided value will result in sending page with size 10 + - in: query + name: page_idx + schema: + type: integer + description: Specifies the page index to be send - incorrect or non + provided value will result in sending page with index 0 + - in: query + name: order_by + schema: + type: string + enum: [start_time, end_time] + description: Specifies the parameter by which pages are sorted, 
available are start_time or end_time + - incorrect or non provided value will result in sending page sorted by start_time + - in: query + name: order_type + schema: + type: string + enum: [ascend, descend] + description: Specifies the order in which pages are sorted, available are ascend or descend + - incorrect or non provided value will result in sending page sorted by ascend + responses: + '200': + description: Includes list of user's simulations + content: + application/json: + schema: + type: object + properties: + message: + type: string + page_count: + type: integer + description: returns the number of available pages + simulations_count: + type: integer + description: returns the number of owned simulations + simulations: + description: is a list of simulations returned in requested page + type: array + items: + $ref: '#/components/schemas/Simulation' + + '400': + description: Bad Request - Missing parameters + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '401': + description: Unauthorized - Invalid credentials + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '403': + description: Forbidden - User not found, log in required or token refresh required + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '500': + description: Internal Server Error + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + delete: + summary: Allows for deleting simulation. 
+ parameters: + - in: query + name: job_id + schema: + type: string + responses: + '200': + description: Simulation was succesfully deleted + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '400': + description: Bad Request - Missing parameters + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '401': + description: Unauthorized - User does not have permission to delete this simulation + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '403': + description: Forbidden - The simulation is currently running, needs to be canceled or completed before deletion + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '404': + description: Not Found - The simulation with the provided job_id does not exist + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '500': + description: Internal Server Error + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + /user/clusters: + description: Returns available clusters + get: + summary: Allows for getting available clusters + responses: + '200': + description: Includes list of available clusters + content: + application/json: + schema: + type: object + properties: + message: + type: string + clusters: + type: array + items: + type: object + properties: + cluster_name: + type: string + '400': + description: Bad Request - Missing parameters + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '401': + description: Unauthorized - Invalid credentials + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '403': + description: Forbidden - User not found, log in required or token refresh required + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '500': + description: Internal Server Error + content: + application/json: + 
schema: + $ref: '#/components/schemas/BasicResponse' + +# Jobs Routes + /jobs: + get: + summary: Allows for getting job status from database. + description: Endpoint designed for getting periodical job status from database. + parameters: + - in: query + name: job_id + schema: + type: string + responses: + '200': + description: Includes status of the job + content: + application/json: + schema: + type: object + properties: + message: + type: string + job_state: + type: string + enum: [PENDING, RUNNING, COMPLETED, FAILED] + description: job state + job_task_status: + type: array + items: + $ref: '#/components/schemas/JobTaskStatus' + '400': + description: Bad Request - Missing parameters + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '401': + description: Unauthorized - Invalid credentials + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '403': + description: Forbidden - User not found, log in required, token refresh required or job with provided ID does not belong to the user + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '404': + description: Job with provided ID does not exist + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '408': + description: Timeout + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '500': + description: Internal Server Error + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + post: + summary: For updating simulation object in database and uploading error logs + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/UpdateJobs' + responses: + '202': + description: Simulation updated + '501': + description: Error updating simulation + /jobs/direct: + post: + summary: Allows for submitting jobs to run directly on server. 
+ requestBody: + required: true + content: + application/json: + schema: + allOf: + - $ref: '#/components/schemas/JobsRequest' + - $ref: '#/components/schemas/Input' + responses: + '202': + description: Includes job_id and additional information about submitted job + content: + application/json: + schema: + type: object + properties: + job_id: + type: string + message: + type: string + '400': + description: Bad Request - Missing payload or keys in payload + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '401': + description: Unauthorized - Invalid credentials + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '403': + description: Forbidden - User not found, log in required or token refresh required + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '500': + description: Internal Server Error + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + get: + summary: Allows for getting job status and additional job info. Available only for jobs run directly. + description: At the current stage of development it does not provide any additional features beyond the GET method of the '/jobs' route - it does the same thing, but is available only for jobs run directly.
+ parameters: + - in: query + name: job_id + schema: + type: string + responses: + '200': + description: Includes status and additional information about the job + content: + application/json: + schema: + type: object + properties: + message: + type: string + job_state: + type: string + enum: [PENDING, RUNNING, COMPLETED, FAILED] + description: job state + job_task_status: + type: array + items: + $ref: '#/components/schemas/JobTaskStatus' + '400': + description: Bad Request - Missing parameters + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '401': + description: Unauthorized - Invalid credentials + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '403': + description: Forbidden - User not found, log in required, token refresh required or job with provided ID does not belong to the user + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '404': + description: Job with provided ID does not exist + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '408': + description: Timeout + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '500': + description: Internal Server Error + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + delete: + summary: Allows for job cancelation. Available only for jobs run directly. 
+ parameters: + - in: query + name: job_id + schema: + type: string + responses: + '200': + description: Returns information about main job and subtasks' cancelation + content: + application/json: + schema: + $ref: '#/components/schemas/JobsDirectDeleteResponse' + '400': + description: Bad Request - Missing parameters + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '401': + description: Unauthorized - Invalid credentials + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '403': + description: Forbidden - User not found, log in required, token refresh required or job with provided ID does not belong to the user + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '404': + description: Job with provided ID does not exist + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '408': + description: Timeout + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '500': + description: Internal Server Error + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + /jobs/batch: + post: + summary: Allows for submitting jobs to run on SLURM. 
+ description: This endpoint can be used only by Keycloak-authenticated users + requestBody: + required: true + content: + application/json: + schema: + allOf: + - $ref: '#/components/schemas/JobsRequest' + - $ref: '#/components/schemas/Input' + - $ref: '#/components/schemas/BatchOptions' + responses: + '202': + description: Includes job_id and additional information about submitted job + content: + application/json: + schema: + type: object + properties: + job_id: + type: string + message: + type: string + sh_files: + type: object + description: contains sh files used by the job (additional info in response from '/jobs/batch') + properties: + submit: + type: string + description: submit is a file which prepares the environment and starts array and collect + array: + type: string + description: array is array job script which runs the simulation + collect: + type: string + description: collect is a script collecting results generated by simulation + submit_stdout: + type: string + description: submit_stdout is output generated by submit.sh (additional info in response from '/jobs/batch') + '400': + description: Bad Request - Missing payload or keys in payload + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '401': + description: Unauthorized - Invalid credentials + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '403': + description: Forbidden - User not found, log in required or token refresh required + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '500': + description: Internal Server Error + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + get: + summary: Allows for getting job status and additional job info. Available only for jobs run on SLURM. + description: This endpoint can be used only by Keycloak-authenticated users.
It is recommended not to use it very often because of the high load it puts on the SLURM system. + parameters: + - in: query + name: job_id + schema: + type: string + responses: + '200': + description: Includes status and additional information about the job + content: + application/json: + schema: + type: object + properties: + message: + type: string + job_state: + type: string + enum: [PENDING, RUNNING, COMPLETED, FAILED] + description: job state + job_task_status: + type: array + items: + $ref: '#/components/schemas/JobTaskStatus' + sh_files: + type: object + description: contains sh files used by the job (additional info in response from '/jobs/batch') + properties: + submit: + type: string + description: submit is a file which prepares the environment and starts array and collect + array: + type: string + description: array is array job script which runs the simulation + collect: + type: string + description: collect is a script collecting results generated by simulation + submit_stdout: + type: string + description: submit_stdout is output generated by submit.sh (additional info in response from '/jobs/batch') + '400': + description: Bad Request - Missing parameters + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '401': + description: Unauthorized - Invalid credentials + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '403': + description: Forbidden - User not found, log in required, token refresh required or job with provided ID does not belong to the user + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '404': + description: Job with provided ID does not exist + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '408': + description: Timeout + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '500': + description: Internal Server Error + content: + application/json: + 
schema: + $ref: '#/components/schemas/BasicResponse' + delete: + summary: Allows for job cancelation. Available only for jobs run on SLURM. + description: This endpoint can be used only by Keycloak-authenticated users + parameters: + - in: query + name: job_id + schema: + type: string + responses: + '200': + description: Job was successfully canceled + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '400': + description: Bad Request - Missing parameters + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '401': + description: Unauthorized - Invalid credentials + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '403': + description: Forbidden - User not found, log in required, token refresh required or job with provided ID does not belong to the user + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '404': + description: Job with provided ID does not exist + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '408': + description: Timeout + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '500': + description: Internal Server Error + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + +# Inputs Route + /inputs: + get: + summary: Allows for retrieving the input used to run the simulation + parameters: + - in: query + name: job_id + schema: + type: string + responses: + '200': + description: Includes the input used to run the simulation + content: + application/json: + schema: + type: object + properties: + message: + type: string + input: + type: object + properties: + input_json: + type: object + description: available only if simulation was run with JSON input + input_files: + type: object + description: files used by simulation + input_type: + type: string + enum: [editor, files] + description: simulation input type + 
number_of_all_primaries: + type: integer + description: requested number of all primaries + '400': + description: Bad Request - Missing parameters + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '401': + description: Unauthorized - Invalid credentials + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '403': + description: Forbidden - User not found, log in required, token refresh required or job with provided ID does not belong to the user + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '404': + description: Job with provided ID does not exist + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '500': + description: Internal Server Error + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + +# Result Routes + /results: + description: Currently this works properly only for simulations run directly. Fetching results for batch simulations will always return a response with a 400 status code.
+ get: + summary: Allows getting results of the simulation + parameters: + - in: query + name: job_id + schema: + type: string + responses: + '200': + description: Includes results of the simulation + content: + application/json: + schema: + type: object + properties: + message: + type: string + estimators: + type: array + items: + $ref: '#/components/schemas/Estimator' + description: list of resulting estimators + '400': + description: Bad Request - Missing parameters + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '401': + description: Unauthorized - Invalid credentials + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '403': + description: Forbidden - User not found, log in required, token refresh required or job with provided ID does not belong to the user + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '404': + description: Job with provided ID does not exist + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '500': + description: Internal Server Error + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + +# Task Routes + /tasks: + description: Route dedicated to backend-internal communication + post: + summary: Updates task state + description: Used by tasks to update their state + requestBody: + required: true + content: + application/json: + schema: + type: object + properties: + simulation_id: + type: integer + description: id of the task's parent simulation + task_id: + type: string + description: id of the task to update + update_key: + type: string + description: authentication key provided to tasks + update_dict: + type: object + description: dict containing update data + required: + - simulation_id + - task_id + - update_key + - update_dict + responses: + default: + description: Response JSON for all codes + content: + application/json: + schema: + $ref: 
'#/components/schemas/BasicResponse' + +# Logfiles Routes + /logfiles: + post: + summary: Allows simulation tasks to upload their logfiles + description: At the current stage of development incoming logfiles overwrite existing ones + requestBody: + required: true + content: + application/json: + schema: + type: object + properties: + simulation_id: + type: integer + description: id of the task's parent simulation + update_key: + type: string + description: authentication key provided to tasks + logfiles: + type: object + description: dict containing log files + required: + - simulation_id + - update_key + - logfiles + responses: + default: + description: Response JSON for all codes + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + get: + summary: Allows getting logfiles of the simulation + parameters: + - in: query + name: job_id + schema: + type: string + responses: + '200': + description: Includes logfiles of the simulation + content: + application/json: + schema: + type: object + properties: + message: + type: string + logfiles: + type: object + description: dict of log files with names as keys and content as values. 
+ '400': + description: Bad Request - Missing parameters + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '401': + description: Unauthorized - Invalid credentials + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '403': + description: Forbidden - User not found, log in required, token refresh required or job with provided ID does not belong to the user + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '404': + description: Job with provided ID does not exist + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + '500': + description: Internal Server Error + content: + application/json: + schema: + $ref: '#/components/schemas/BasicResponse' + +# Components +components: + securitySchemes: + basicAuth: # <-- arbitrary name for the security scheme + type: http + scheme: basic + schemas: + BasicResponse: + type: object + properties: + message: + type: string + description: body message + KeycloakRequest: + type: object + properties: + username: + type: string + required: + - username + RegisterLoginRequest: + type: object + properties: + username: + type: string + password: + type: string + required: + - username + - password + Metadata: + type: object + description: static additional information + properties: + platform: + type: string + description: specifies platform on which simulation is running, can be DIRECT or BATCH + server: + type: string + description: specifies platform on which simulation is running, can be DIRECT or BATCH + input_type: + type: string + description: specifies input which was used to run simulation, can be YAPTIDE_PROJECT or INPUT_FILES + sim_time: + type: string + description: specifies simulator which was used to run simulation, can be SHIELDHIT or DUMMY + Simulation: + type: object + properties: + title: + type: string + description: custom title set for this simulation + job_id: + type: 
string + description: id of the job + start_time: + type: string + description: starting time of the simulation + end_time: + type: string + description: ending time of the simulation - if it is still running the value is NULL + metadata: + $ref: '#/components/schemas/Metadata' + Input: + oneOf: + - type: object + properties: + input_json: + type: object + description: specifies input json + required: + - input_json + - type: object + properties: + input_files: + type: object + properties: + beam.dat: + type: string + detect.dat: + type: string + geo.dat: + type: string + mat.dat: + type: string + description: specifies input files + required: + - input_files + JobsRequest: + type: object + properties: + ntasks: + type: integer + description: specifies number of parallel tasks to be run, default is maximum available for '/jobs/direct' and 1 for '/jobs/batch' + sim_type: + type: string + enum: [shieldhit, dummy] + description: specifies simulator type + title: + type: string + description: custom title set for this simulation, default is workspace + input_type: + type: string + enum: [editor, files] + description: specifies input type + # oneOf: + # - input_files: + # type: array + # items: + # type: object + # properties: + # beam.dat: + # type: string + # detect.dat: + # type: string + # geo.dat: + # type: string + # mat.dat: + # type: string + # description: is required if input_type is files + # - input_json: + # type: object + # description: input_json is required if input_type is editor + required: + - ntasks + - input_type + - sim_type + BatchOptions: + type: object + properties: + batch_options: + type: object + description: available options can be found here: https://slurm.schedmd.com/sbatch.html. 
NOTE: if batch_options is not provided, SLURM will run both scripts with default settings + properties: + cluster_name: + type: string + description: stands for the cluster to be used to run the simulation; it has to be one of the clusters available to the user - check the '/user/clusters' endpoint description; if it is not provided or the provided cluster name is incorrect, the first available cluster will be used + array_options: + type: object + description: dictionary of command line options used while running files with sbatch. Pairs should be option name as key and option value as value. If the same parameter is specified in both the options and the corresponding header, the value from the header will be ignored because command line options take precedence in SLURM + array_header: + type: string + description: header for files run on SLURM with sbatch + collect_options: + type: object + description: dictionary of command line options used while running files with sbatch. Pairs should be option name as key and option value as value. If the same parameter is specified in both the options and the corresponding header, the value from the header will be ignored because command line options take precedence in SLURM + collect_header: + type: string + description: header for files run on SLURM with sbatch + JobTaskStatus: + type: object + description: error, logfiles, input_files, input_json and results are deprecated: error is now sent as message, logfiles - accessed via /logfiles endpoint, results - accessed via /results endpoint, input_files and input_json - accessed via /inputs endpoint. 
+ properties: + task_state: + type: string + enum: [PENDING, RUNNING, COMPLETED, FAILED] + description: task state + simulated_primaries: + type: integer + description: primaries already calculated by the task + requested_primaries: + type: integer + description: primaries to calculate for this task + last_update_time: + type: string + description: last time the task was updated + estimated_time: + type: object + description: returned only when task_state is RUNNING, and not always even then, because the estimate is prepared only after some period of time + properties: + hours: + type: integer + minutes: + type: integer + seconds: + type: integer + JobsDirectDeleteResponse: + type: object + properties: + message: + type: string + merge: + type: object + properties: + message: + type: string + job_state: + type: string + enum: [PENDING, RUNNING, COMPLETED, FAILED] + tasks: + type: array + items: + $ref: '#/components/schemas/CanceledTaskStatus' + CanceledTaskStatus: + type: object + properties: + message: + type: string + task_state: + type: string + enum: [PENDING, RUNNING, COMPLETED, FAILED] + Estimators: + type: object + description: dictionary of estimators with names as keys and values as dictionaries with estimator parameters + UpdateJobs: + type: object + required: + - sim_id + properties: + sim_id: + type: integer + description: integer, primary key of the simulation in the database + job_dir: + type: string + array_id: + type: integer + collect_id: + type: integer + task_state: + enum: [FAILED] + log: + type: object + description: dict that will be uploaded as an error log to the database +security: + - basicAuth: [] # <-- use the same name here diff --git a/docs/backend/persistency.md b/docs/backend/persistency.md new file mode 100644 index 0000000..20aecd3 --- /dev/null +++ b/docs/backend/persistency.md @@ -0,0 +1,349 @@ +# Persistent storage + +## Data model + +We have the following data model, implemented in `yaptide/persistence/models.py`: + +The Simulation model and its dependent classes: 
+```mermaid +classDiagram + class SimulationModel { + id: int + job_id: str + user_id: int + start_time: datetime + end_time: datetime + title: str + platform: str + input_type: str + sim_type: str + job_state: str + tasks + estimators + } + + class CelerySimulationModel { + id: int + merge_id: str + } + + class BatchSimulationModel { + id: int + cluster_id: int + job_dir: str + array_id: int + collect_id: int + } + + class TaskModel { + id: int + simulation_id: int + task_id: int + requested_primaries: int + simulated_primaries: int + task_state: str + estimated_time: int + start_time: datetime + end_time: datetime + platform: str + last_update_time: datetime + } + + class CeleryTaskModel { + id: int + celery_id: str + } + + class BatchTaskModel { + id: int + } + + class InputModel { + id: int + simulation_id: int + compressed_data: bytes + data + } + + class EstimatorModel { + id: int + simulation_id: int + name: str + compressed_data: bytes + data + } + + class PageModel { + id: int + estimator_id: int + page_number: int + compressed_data: bytes + data + } + + class LogfilesModel { + id: int + simulation_id: int + compressed_data: bytes + data + } + + SimulationModel <|-- CelerySimulationModel + SimulationModel <|-- BatchSimulationModel + TaskModel <|-- CeleryTaskModel + TaskModel <|-- BatchTaskModel + SimulationModel "1" *-- "0..*" TaskModel + SimulationModel "1" *-- "0..*" EstimatorModel + EstimatorModel "1" *-- "0..*" PageModel + SimulationModel "1" *-- "0..*" LogfilesModel + SimulationModel *-- InputModel +``` + +other classes we use are: + +```mermaid +classDiagram + class UserModel { + id: int + username: str + auth_provider: str + simulations + } + + class YaptideUserModel { + id: int + password_hash: str + } + + class KeycloakUserModel { + id: int + cert: str + private_key: str + } + + class ClusterModel { + id: int + cluster_name: str + simulations + } + + UserModel <|-- YaptideUserModel + UserModel <|-- KeycloakUserModel +``` + +We've been too lazy to 
write down the mermaid code for these diagrams, but ChatGPT nowadays does a good job of it. +Whenever you need to update the diagrams, just copy the code from the `yaptide/persistence/models.py` file and ask ChatGPT to generate the diagram for you. + +## Database + +The production version uses a PostgreSQL database, while the unit test suite uses an in-memory SQLite database. + +Sometimes it may be convenient to connect to the production DB from outside the container, e.g. to check the content of the database. +Then you can use the following command to get the DB URL. + +```shell +docker exec -it yaptide_flask bash -c "cd /usr/local/app && python -c 'from yaptide.application import create_app; app = create_app(); app.app_context().push() or print(app.extensions[\"sqlalchemy\"].engine.url.render_as_string(hide_password=False))'" +``` + +The code above is a handy one-liner that may look tricky, especially the `app.app_context().push() or` part. +The reason for that hack is simple: regular methods of getting the DB URL require the application context. This is usually achieved using the `with app.app_context():` construct, which is not possible in a one-liner. + +Knowing the DB URL, you can connect to the DB using any DB client, e.g. `psql` or `pgadmin`. You can also use the `db_manage.py` script from the `yaptide/admin` directory. 
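If you need the URL's individual components (for example, to assemble a `psql` invocation by hand), they can be split out with plain shell parameter expansion. This is only an illustrative sketch; the URL below uses the docker-compose default credentials, not real secrets:

```shell
# Split a DB URL of the form scheme://user:password@host:port/dbname
# into the pieces psql expects (illustrative sketch only).
url="postgresql+psycopg://yaptide_user:yaptide_password@localhost:5432/yaptide_db"

creds_host="${url#*://}"        # drop the scheme prefix
user="${creds_host%%:*}"        # everything up to the first ':'
host_part="${creds_host#*@}"    # everything after the '@'
host="${host_part%%:*}"         # host is before the ':'
port_db="${host_part#*:}"       # 'port/dbname' remainder
port="${port_db%%/*}"
dbname="${port_db#*/}"

echo "psql -U $user -h $host -p $port -d $dbname"
```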
For example, to list all users in the DB, you can use the following command from outside the container: + +```shell +FLASK_SQLALCHEMY_DATABASE_URI=postgresql+psycopg://yaptide_user:yaptide_password@localhost:5432/yaptide_db ./yaptide/admin/db_manage.py list-users +``` + +This is equivalent to the following command executed inside the container: + +```shell +docker exec -it yaptide_flask ./yaptide/admin/db_manage.py list-users +``` + +## Developing model + +In Yaptide, Flask-Migrate is responsible for modifying the database after each change to `models.py` and for keeping track of database versions (a new version is created after each modification of `models.py`). + +### Development steps +For development, running yaptide_postgres in Docker is required (Flask-Migrate can be used on the SQLite database we use in development, but it is the production PostgreSQL database that we want to migrate). It is recommended to do development on a local machine. + +1. Make sure all poetry dependencies are installed. Run `poetry shell` in the terminal. +2. Calling `flask db` commands will require the `FLASK_SQLALCHEMY_DATABASE_URI` variable to be defined before each execution: + + - The general pattern for `FLASK_SQLALCHEMY_DATABASE_URI` is taken from docker-compose (only `postgres` is changed to localhost or 127.0.0.1): + + `FLASK_SQLALCHEMY_DATABASE_URI=postgresql+psycopg://${POSTGRES_USER:-yaptide_user}:${POSTGRES_PASSWORD:-yaptide_password}@localhost:5432/${POSTGRES_DB:-yaptide_db}` + + e.g. for local development `FLASK_SQLALCHEMY_DATABASE_URI=postgresql+psycopg://yaptide_user:yaptide_password@localhost:5432/yaptide_db` will be put before each `flask db` call. For local development it can be exported as a variable, but this is not recommended for environments where the username and password are sensitive information. + + **From now on, each command in this documentation containing `flask db` should be called with `FLASK_SQLALCHEMY_DATABASE_URI`**. + +3. 
Now it's time to prepare the local/development database for development of models.py and creation of the migration script. + + - In `docker-compose.yml` edit the database service to use a volume with a different name; this will create a new volume and the old one won't get deleted. Run `scripts/start_with_docker.sh`. This will create a database that is guaranteed to reflect what's in models.py. Then, to mark the database with the version from migrations/versions, run `flask --app yaptide.application db stamp head`. This will save the id of the newest database version in the alembic_version table. **Be cautious, as this option is only for development on a local machine.** + +4. Do modifications in `models.py`. +5. Run `flask --app yaptide.application db migrate`. +6. A migration file will be generated in migrations/versions. The name of the newest file is displayed in the output of the above command. +7. **IMPORTANT!** Check the file carefully. For example, there might be some `None` values which need to be changed. + - a script that adds the CASCADE option to a foreign key constraint at first looks like this + ``` + def upgrade(): + # ### commands auto generated by Alembic - please adjust! ### + with op.batch_alter_table('Task', schema=None) as batch_op: + batch_op.drop_constraint('Task_simulation_id_fkey', type_='foreignkey') + batch_op.create_foreign_key(None, 'Simulation', ['simulation_id'], ['id'], ondelete='CASCADE') + ``` + In this case change None to 'Task_simulation_id_fkey'. +8. Run `flask --app yaptide.application db upgrade` to apply the migration script. +9. To undo the changes and go back to the previous version, run `flask --app yaptide.application db downgrade`. +10. Commit and push the migration script and the modifications to models.py. + +### Testing migration script with copy of production volume + +1. If there is a testing environment other than local - pull the changes. +2. Copy the volume ("data") of the production postgres database and save it under a different name. +3. In `docker-compose.yml` modify the database configuration to use this volume. +4. 
Run `scripts/start_with_docker_develop.sh` to run the backend and additionally the pgadminer tool (see the section "Using pgadminer"). +5. Again prepare `FLASK_SQLALCHEMY_DATABASE_URI` like above and use it together with each `flask db` command. +6. Run `flask --app yaptide.application db upgrade`. +7. Also check `flask --app yaptide.application db downgrade`, then do the upgrade again. Everything should execute without errors. +8. Do manual testing. Check functionality; some checks might be unnecessary depending on which part of `models.py` was changed: + - logging in and out + - loading simulation results, input files, logs + - submitting a new simulation + - operations contained in `admin/db_manage.py` + +### Migrating production +1. Run git pull on master. +2. Backup the production database + Before applying any migrations, create a backup of the live database: + - ` + pg_dump -U <user> -h <host> -d <db_name> > backup.sql + ` + + An alternative is making a copy of the volume: + - ` + docker run --rm -v yaptide_data:/var/lib/postgresql/data -v /home/ubuntu/backup:/backup busybox tar czf /backup/yaptide_data_backup.tar.gz -C /var/lib/postgresql/data . + ` +3. Again prepare `FLASK_SQLALCHEMY_DATABASE_URI` like above and use it together with each `flask db` command. +4. Applying the migration in production + There are two options for applying the migration: + + Option 1: Execute from outside the Docker container + + - ` + FLASK_SQLALCHEMY_DATABASE_URI=postgresql+psycopg://<user>:<password>@<host>:5432/<db_name> flask --app yaptide.application db upgrade + ` + + Option 2: Execute from inside the Flask container + + Access the container and run the upgrade: + - ` + docker exec -it <container_name> bash + ` + - ` + FLASK_SQLALCHEMY_DATABASE_URI=postgresql+psycopg://<user>:<password>@<host>:5432/<db_name> flask --app yaptide.application db upgrade + ` +5. 
Rollback strategy + In case of any issues, you can revert the changes by running: + ` + flask --app yaptide.application db downgrade + ` + Post-migration testing + Perform manual tests on the production system: + + - Verify logging in and out. + - Check simulation submissions. + - Ensure any functionality affected by the migration is working. + + +6. To restore the database from the backup, run: + - ` + psql -U <user> -h <host> -c "DROP DATABASE IF EXISTS <db_name>;" + ` + + - ` + psql -U <user> -h <host> -c "CREATE DATABASE <db_name>;" + ` + + - ` + psql -U <user> -h <host> -d <db_name> -f backup.sql + ` + + If a copy of the volume was made instead of backup.sql, run: + - ` + docker run --rm -v yaptide_data:/var/lib/postgresql/data -v /home/ubuntu/backup:/backup busybox tar xzf /backup/yaptide_data_backup.tar.gz -C /var/lib/postgresql/data + ` + + + +### Using pgadminer +Pgadminer is a tool that lets the user browse the database through a graphical interface. It can help with verification, testing and troubleshooting during migration. To run pgadminer alongside the other containers, run the script `scripts/start_with_docker_develop.sh`. If executed locally, it can be accessed from the browser at `localhost:9999`. When running remotely, a tunnel connection is required. Run: +``` +ssh -L 9999:localhost:9999 <user>@<remote_host> +``` +then open `localhost:9999` in the browser. Log in with the credentials set in the compose file. Right-click on Servers -> Register -> Server -> fill in the necessary fields in the General and Connection tabs. + +## Commands in db_manage.py + +The `db_manage.py` script provides several commands to manage the database. 
Below is a list of available commands along with their arguments and options: + +- **list_users** + - Printed columns: `username`, `auth_provider` + - Options: + - `-v`, `--verbose` + +- **add_user** + - Arguments: + - `name` + - Options: + - `--password` (default: '') + - `-v`, `--verbose` + +- **update_user** + - Arguments: + - `name` + - Options: + - `--password` (default: '') + - `-v`, `--verbose` + +- **remove_user** + - Arguments: + - `name` + - `auth_provider` + +- **list_tasks** + - Printed columns: `simulation_id`, `task_id`, `task_state`, `username` + - Options: + - `--user` + - `--auth-provider` + +- **remove_task** + - Arguments: + - `simulation_id` + - `task_id` + - Options: + - `-v`, `--verbose` + +- **list_simulations** + - Printed columns: `id`, `job_id`, `start_time`, `end_time`, `username` + - Options: + - `-v`, `--verbose` + - `--user` + - `--auth-provider` + +- **remove_simulation** + - Arguments: + - `simulation_id` + - Options: + - `-v`, `--verbose` + +- **add_cluster** + - Arguments: + - `cluster_name` + - Options: + - `-v`, `--verbose` + +- **list_clusters** + - Columns: `id`, `cluster_name` diff --git a/docs/backend/states.md b/docs/backend/states.md new file mode 100644 index 0000000..4e6bca4 --- /dev/null +++ b/docs/backend/states.md @@ -0,0 +1,38 @@ +# JOBS & TASKS + +Each simulation consists of one job and one or more tasks. + +## States + +Every job and task that is part of a simulation is in one of the following states: + + * `UNKNOWN` - used only for jobs which are not yet submitted but were already created in the database, so they can be fetched by the UI. A simulation with a job in this state has no tasks and cannot be canceled. + * `PENDING` - jobs and tasks in this state are successfully submitted and are waiting for execution. + * `RUNNING` - jobs and tasks in this state are currently executing. + * `COMPLETED` - jobs and tasks in this state are successfully completed and they cannot be canceled. 
+ * `FAILED` - jobs and tasks in this state failed and they cannot be canceled. + * `CANCELED` - jobs and tasks in this state are canceled. + + +The diagram below shows the possible state transitions. + +```mermaid +--- +title: Job states +--- +stateDiagram-v2 + [*] --> UNKNOWN + + UNKNOWN --> PENDING + PENDING --> RUNNING + RUNNING --> COMPLETED + COMPLETED --> [*] + + RUNNING --> FAILED + FAILED --> [*] + + PENDING --> CANCELED + RUNNING --> CANCELED + CANCELED --> [*] + +``` diff --git a/docs/backend/swagger.md b/docs/backend/swagger.md new file mode 100644 index 0000000..98cc7cf --- /dev/null +++ b/docs/backend/swagger.md @@ -0,0 +1 @@ +!!swagger openapi.yaml!! \ No newline at end of file diff --git a/docs/backend/using_docker.md b/docs/backend/using_docker.md new file mode 100644 index 0000000..59a9a78 --- /dev/null +++ b/docs/backend/using_docker.md @@ -0,0 +1,51 @@ +# Using Docker + +You can build and run the app using docker compose. A docker deploy is similar to a production deploy, so it's a good way to test the app. +To facilitate app development you can use the following scripts to deploy the app using docker containers: + +=== "Linux" + ```bash + scripts/start_with_docker.sh + ``` + +=== "Windows (PowerShell)" + ```powershell + scripts/start_with_docker.ps1 + ``` + +The script will build the app and run it in the background. The building is equivalent to running the following command: + +```shell +docker compose up --build --detach +``` + +If you have docker engine v25 (released 2024-01-19) or newer, you can benefit from faster startup times and use the following command: + +```shell +docker compose -f docker-compose.yml -f docker-compose.fast.yml up --build --detach +``` + +The script `scripts/start_with_docker.*` will use the fastest way to start the app. + +Once it's running, the app will be available at [http://localhost:5000](http://localhost:5000). If you get an error saying the container name is already in use, stop and remove the container and then try again. 
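As a sketch, the name conflict can usually be resolved like this. The container name below is an assumption — check the output of `docker ps -a` for the actual one in your setup:

```shell
# List running and stopped containers to find the one holding the name
docker ps -a --filter "name=yaptide"

# Stop and remove the conflicting container (hypothetical name), then start again
docker rm -f yaptide_flask
docker compose up --build --detach
```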
+ +When you're ready to stop the containers, use the following command: + +```shell +docker compose down +``` + +## Docker database + +Registering, updating and deleting users is possible with the `db_manage.py` script located in the `yaptide/admin` folder. + +Once docker compose is running, you can use the following command to get more information: +``` +docker exec -w /usr/local/app/ yaptide_flask ./yaptide/admin/db_manage.py --help +``` + +To add a user, run: + +```bash +docker exec -w /usr/local/app/ yaptide_flask ./yaptide/admin/db_manage.py add-user admin --password password +``` diff --git a/docs/converter/index.md b/docs/converter/index.md new file mode 100644 index 0000000..af23ad3 --- /dev/null +++ b/docs/converter/index.md @@ -0,0 +1,5 @@ +# Converter + +Git link: [https://github.com/yaptide/converter](https://github.com/yaptide/converter) + + * [Readme](readme.md) \ No newline at end of file diff --git a/docs/converter/readme.md b/docs/converter/readme.md new file mode 100644 index 0000000..89f056d --- /dev/null +++ b/docs/converter/readme.md @@ -0,0 +1,51 @@ +# Yet Another Particle Transport IDE - converter + +The converter transforms the project file (a JSON file generated by the frontend) into a set of input files for particle transport simulators: + +- SHIELD-HIT12A (beam.dat, mat.dat, geo.dat and detect.dat). +- Fluka + +## Installation + +The project makes use of poetry for dependency management. If you do not have it installed, check the official [poetry installation guide](https://python-poetry.org/docs/). +The project is configured to create a virtual environment for you, so you do not need to worry about it. +The virtual environment is created in the `.venv` folder in the root of the project. + +To install the project, clone the repository and run the following command in the project directory: + +```shell +poetry install --without=test +``` + +This will make the `yaptide-converter` command available inside the virtual environment. 
+It can be accessed outside the virtual environment by running `poetry run yaptide-converter`. +Alternatively, you can run `poetry shell` to enter the virtual environment, or check more examples in [Poetry documentation section: Activating the virtual environment](https://python-poetry.org/docs/basic-usage#activating-the-virtual-environment). + +## Usage + +The converter comes with a command line application. +It is capable of transforming the JSON project file (generated in the yaptide web interface) into a set of valid input files for SHIELD-HIT12A. + +To run the converter, use the following command: + +```bash +python converter/main.py tests/shieldhit/resources/project.json workspace +``` + +## Testing + +To run the unit tests, you need to install the test dependencies with: + +```shell +poetry install +``` + +Then you can run the tests with: + +```shell +poetry run pytest +``` + +## Credits + +This work was partially funded by EuroHPC PL Project, Smart Growth Operational Programme 4.2 diff --git a/docs/documentation/index.md b/docs/documentation/index.md new file mode 100644 index 0000000..3060ce7 --- /dev/null +++ b/docs/documentation/index.md @@ -0,0 +1,96 @@ +# Technical documentation of the project + +# Developer documentation + +The documentation intended for developers is located in the `docs` folder. +We use [mkdocs](https://www.mkdocs.org) with [material for mkdocs](https://squidfunk.github.io/mkdocs-material/) customisation to generate the documentation in the HTML format. + +## Documentation structure + +### Technical documentation + +Technical documentation is written in markdown format and can be found in the [docs folder](https://github.com/yaptide/yaptide/tree/master/docs). + +### API reference + +The [API reference](swagger.md) is generated from the [swagger](https://swagger.io) yaml file. 
+The [swagger.yaml](https://github.com/yaptide/yaptide/blob/master/yaptide/static/openapi.yaml) file is located in the [yaptide/static](https://github.com/yaptide/yaptide/tree/master/yaptide/static) folder. This is the location from which Flask serves it when the backend is deployed. + +The HTML API documentation is rendered using the [render_swagger](https://github.com/bharel/mkdocs-render-swagger-plugin) mkdocs plugin, installed as the [mkdocs-render-swagger-plugin](https://pypi.org/project/mkdocs-render-swagger-plugin/) pip package. +It is a somewhat abandoned project, but it seems to be the only solution for generating static HTML from a swagger yaml file. +The swagger documentation can be viewed locally by deploying the backend and connecting to it via the `/api/docs` endpoint. +By using the `mkdocs-render-swagger-plugin` we can serve the documentation statically on github pages. +This way users may read the documentation without deploying the backend. + +The `mkdocs-render-swagger-plugin` expects the swagger yaml file to be located in the [docs folder](https://github.com/yaptide/yaptide/tree/master/docs). Therefore, we modified the [docs/gen_ref_pages.py](https://github.com/yaptide/yaptide/blob/master/docs/gen_ref_pages.py) script to copy the swagger yaml file from the Flask static directory to the docs folder. The copy happens whenever the `mkdocs build` or `mkdocs serve` command is run. + +### Code reference + +The code reference is generated using the [mkdocs-gen-files](https://github.com/oprypin/mkdocs-gen-files) mkdocs plugin. +We have a [docs/gen_ref_pages.py](https://github.com/yaptide/yaptide/blob/master/docs/gen_ref_pages.py) script that crawls through all Python files in the [yaptide folder](https://github.com/yaptide/yaptide/tree/master/yaptide) directory. Then it generates markdown documentation on the fly from the docstrings of each module, class and function. 
An on-the-fly `reference/SUMMARY.md` file is also generated, using the [mkdocs-literate-nav](https://github.com/oprypin/mkdocs-literate-nav) mkdocs plugin. This file serves as the left-side menu for the code reference. + +### Tests coverage + +The test coverage report is generated using the [mkdocs-coverage](https://github.com/pawamoy/mkdocs-coverage) mkdocs plugin. This plugin expects a pytest coverage report in the `htmlcov` directory. + +## GitHub Pages deployment of the documentation + +GitHub Pages deployment is done using the [GitHub Actions docs workflow](https://github.com/yaptide/yaptide/blob/master/.github/workflows/docs.yml). +It deploys a new version of the documentation whenever a new commit is pushed to the `master` branch. +The deployment includes generation of the test coverage report and the API reference documentation. + +## Local deployment of the documentation + +### Prerequisites + +First, install [poetry](https://python-poetry.org). +Then, install the dependencies for the backend and the documentation: + +```bash +poetry install --only main,docs +``` + +### Building the documentation + +To build the documentation, run the following command: + +```bash +poetry run mkdocs build +``` + +This will generate the documentation in the `site` folder. + +To serve the documentation locally, run the following command: + +```bash +poetry run mkdocs serve +``` + +This will start a local webserver on port 8000. The documentation can be viewed by opening the following url in a browser: http://localhost:8000 + +### Working with the technical documentation + +After modifying a markdown file, the documentation served via the `mkdocs serve` command will be updated automatically. + +### Working with the API reference + +After modifying the swagger yaml, one needs to stop the `mkdocs serve` command and run it again. 
This is required because, to re-generate the API reference documentation, mkdocs needs to copy the swagger yaml file from the Flask static directory to the docs folder. +Please avoid modifying and committing the swagger yaml file in the docs folder, as it will be overwritten by the `mkdocs serve` command. + +### Working with the code reference + +After modifying the Python code, one needs to stop the `mkdocs serve` command and run it again. + +### Working with the tests coverage + +To regenerate the test coverage, run the following command: + +```bash +poetry run pytest --cov-report html:htmlcov --cov=yaptide +``` + +Note that this requires installation of the dependencies for the backend and the tests: + +```bash +poetry install --only main,test +``` diff --git a/docs/frontend/authentication.md b/docs/frontend/authentication.md new file mode 100644 index 0000000..203ba3c --- /dev/null +++ b/docs/frontend/authentication.md @@ -0,0 +1,74 @@ +# Sequence diagrams + +## Keycloak + +Overview of the login and logout process using Keycloak + +```mermaid +sequenceDiagram + autonumber + actor User + participant AuthService + participant Keycloak + participant Backend + + User ->> AuthService: Request login + AuthService ->> Keycloak: Redirect to keycloak login + User ->> Keycloak: Login with credentials + Keycloak ->> AuthService: Return authenticated token + AuthService ->> AuthService: Check token for access to yaptide + opt user has access + AuthService ->> Backend: Verify token with backend (POST /auth/keycloak) + Backend ->> Keycloak: Verify if token is correct + opt token verified + Keycloak ->> Backend: Signature verified + Backend ->> AuthService: Response with accessExp + AuthService ->> AuthService: Set token refresh interval based on accessExp + AuthService ->> User: Provide auth context + end + opt signature expired or invalid token or keycloak connection error + Backend ->> AuthService: Raise exception Forbidden (403) + end + end + opt user doesn't have 
access + AuthService ->> User: Message with access denied + end + loop Refresh backend connection every 3 minutes + AuthService ->> Backend: Refresh token (GET auth/refresh) + Backend ->> AuthService: Response with new backend access token in cookies + end + loop Refresh token every 1/3 of token's lifetime + AuthService ->> Keycloak: Refresh token + Keycloak ->> AuthService: Updated token + end + User ->> AuthService: Logout + AuthService ->> Backend: Invalidate session (DELETE /auth/logout) + Backend ->> AuthService: Response with cookies deleted + AuthService ->> Keycloak: Logout + AuthService ->> User: Clear user data +``` + +## Non-Keycloak + +Overview of the login and logout process in demo or dev modes + +```mermaid +sequenceDiagram + autonumber + participant User + participant AuthService + participant Backend + + User ->> AuthService: Request Login + AuthService ->> Backend: Validate Credentials (POST /auth/login) + Backend ->> AuthService: Response with accessExp and set access and refresh tokens in cookies + AuthService ->> User: Provide Auth Context + loop Refresh backend connection every 3 minutes + AuthService ->> Backend: Refresh token (GET auth/refresh) + Backend ->> AuthService: Response with new backend access token in cookies + end + User ->> AuthService: Logout + AuthService ->> Backend: Invalidate session (DELETE /auth/logout) + Backend ->> AuthService: Response with cookies deleted + AuthService ->> User: Clear User Data +``` diff --git a/docs/frontend/examples.md b/docs/frontend/examples.md new file mode 100644 index 0000000..e73323e --- /dev/null +++ b/docs/frontend/examples.md @@ -0,0 +1,18 @@ +# Examples directory + +This directory contains examples of simulation projects for the editor. To add a new example, create a new JSON file in this directory. The naming convention for the file is as follows: + +- ex{number of the example}.json + +Please note that the examples must be numbered consecutively. 
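The consecutive-numbering rule can be checked with a small script. This is only a sketch — the helper name and the directory argument are assumptions, not part of the project:

```shell
# check_examples: verify that ex1.json .. exN.json exist with no gaps
check_examples() {
  dir="$1"
  # count how many ex*.json files are present (0 if the glob matches nothing)
  count=$(ls "$dir"/ex*.json 2>/dev/null | wc -l | tr -d ' ')
  i=1
  while [ "$i" -le "$count" ]; do
    if [ ! -f "$dir/ex$i.json" ]; then
      echo "gap: ex$i.json is missing"
      return 1
    fi
    i=$((i + 1))
  done
  echo "OK: $count example file(s), numbered consecutively"
}
```

Running `check_examples src/examples` before committing a new example catches accidental gaps in the numbering.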
+ +After creating a new example file, add it to the `examplesMap.json` file located in +```bash +src/examples/examplesMap.json +``` +An example mapping looks as follows: + +- "Proton pencil beam in water" : "ex1" + +The key specifies how the example will be shown in the front-end app, and the value is the name of the corresponding example file. +Add the entry to the section of the simulator for which the example is created. \ No newline at end of file diff --git a/docs/frontend/for_developers.md b/docs/frontend/for_developers.md new file mode 100644 index 0000000..fa34161 --- /dev/null +++ b/docs/frontend/for_developers.md @@ -0,0 +1,126 @@ +# yaptide web interface + +## For users + +The development version is unstable, without many features and with a lot of bugs. +It is released automatically after every commit to the main branch of this repository and is available for testing here: + + +The stable version is not released yet; have patience. + +### Loading a project file with results from a URL + +You can load a project file with results from a URL by adding `?` to the end of the editor URL. + +```txt +https://? +``` + +Example: + +To see the results, you need to navigate to the `Results` tab in the main menu. + +## For developers + +Start by downloading submodules: + +```bash +git submodule update --init --recursive +``` + +Before starting the local version of the web interface, you need to install the necessary dependencies by typing the command: + +```bash +npm install +``` + +To run the app in the development mode, type: + +```bash +npm run start +``` + +Then open [http://localhost:3000/web_dev](http://localhost:3000/web_dev) to view it in the web browser. + +The page will reload if you make edits. + +### App configuration + +Currently, the app can be configured by setting the following environment variables in a `.env` file in the main project directory (the same one where `package.json` is located). + +#### Setting communication with backend +The UI can be deployed on a different machine (with a different IP) than the backend. 
During the build phase, the UI can be configured to talk to a given backend instance via the `REACT_APP_BACKEND_URL` environment variable. To adjust it, set this variable in the `.env` file. + +If the backend is deployed as a set of docker containers, then Flask listens on port **6000** for HTTP requests (HTTPS is supported only via the NGINX proxy) on a host called `yaptide_flask`. +Additionally, the main NGINX proxy server listens on port **5000** for plain HTTP and **8443** for HTTPS. The relevant configuration can be found in this [config file](https://github.com/yaptide/yaptide/blob/master/nginx.conf) of the backend. + +**Make sure that both the backend URL in `.env` and the URL typed in the browser's address bar contain the same domain part: either localhost (recommended) or 127.0.0.1. If, for example, the frontend is opened in the browser from localhost:3000 while REACT_APP_BACKEND_URL is set to http://127.0.0.1:5000, the difference in domains will cause the browser to block the access_token and refresh_token cookies returned from the backend as part of the response to the login request. This is because of the cookie option samesite='Lax' set in the backend. Without those cookies, each refresh request will fail.** + +**When opening yaptide running from docker in chromium-based browsers, set ```REACT_APP_BACKEND_URL=https://localhost:8443``` in `.env`. Otherwise the problem described above will appear. There are some differences in how each browser implements security policies, and those constantly change.** + + +#### Other configuration options are: +- `REACT_APP_TARGET` - if set to `demo`, the app will not require authentication and will be preloaded with demo results (this version is available at ) +- `REACT_APP_ALT_AUTH` - if set to `plg`, the app will use plgrid authentication +- `REACT_APP_DEPLOYMENT` - if set to `dev`, the configuration will be editable from the browser console. For example, you can change the backend URL by typing `window.BACKEND_URL="http://mynew.url"` in the browser console. 
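As a sketch, a minimal `.env` for local development might look like the following. The values are examples, not project defaults — adjust them to your deployment:

```shell
# .env — example values only
# Use https://localhost:8443 instead when opening the dockerized app in chromium-based browsers
REACT_APP_BACKEND_URL=http://localhost:5000
# Optional: demo mode, no authentication required
REACT_APP_TARGET=demo
# Optional: allow overriding configuration from the browser console
REACT_APP_DEPLOYMENT=dev
```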
+ +### Useful commands + +To easily configure the app, the `cross-env` package is used together with custom npm scripts for setting environment variables. + +| Command | Description | | --------------------- | --------------------------------------------------------------------------- | | `npm run start` | Runs the app in the development mode. | | `npm run build` | Builds the app for production to the `build` folder. | | `npm run start-demo` | Runs the app in the development mode with demo results. | | `npm run build-demo` | Builds the app for production to the `build` folder with demo results. | | `npm run start-plg` | Runs the app in the development mode with plgrid authentication. | | `npm run build-plg` | Builds the app for production to the `build` folder with plgrid authentication. | | `npm run format` | Runs the formatter. | | `npm run test` | Launches the test runner in the interactive watch mode. | + +For more commands, see `package.json`. + +### Building the app using the Dockerfile + +To build the docker image, type: + +```bash +docker build -t ui . +``` + +Then you can run the docker container named `ui` and serve the UI on port 80: + +```bash +docker run --rm -d -p 80:80/tcp --name ui ui +``` + +## Requirements + +- Node.js 20.x or higher +- Python 3.9+ +- pip and venv + +## Private docker image generated in the GHCR + +The docker image is generated automatically after every commit to the main branch of this repository. 
+The package is here + +The command below will run the docker container named `ui` and serve the UI on port 80: + +```bash +docker run --rm -d -p 80:80/tcp --name ui ghcr.io/yaptide/ui-web:master +``` + +## Credits + +This project adapts source code from the following libraries: + +- CSG javascript library + - parts of its code copied into `src/ThreeEditor/js/libs/csg/` + - adapted by adding types in a separate file +- ThreeJS Editor + - most of its code is copied from [mrdoob's GitHub repo](https://github.com/mrdoob/three.js/tree/r132/editor) into `src/ThreeEditor`, starting from v.132 + - the copied code is heavily adapted to "yaptide needs" + +This work was partially funded by EuroHPC PL Project, Smart Growth Operational Programme 4.2 diff --git a/docs/frontend/index.md b/docs/frontend/index.md index d6319a5..ae3c794 100644 --- a/docs/frontend/index.md +++ b/docs/frontend/index.md @@ -1 +1,9 @@ -# Technical documentation of the project \ No newline at end of file +# UI (frontend) + +GitHub link: [https://github.com/yaptide/ui](https://github.com/yaptide/ui) + +The documentation contains: + + * [For developers](for_developers.md) - Readme from the UI + * [Authentication](authentication.md) - Description of authentication mechanisms + * [Examples](examples.md) - Examples \ No newline at end of file diff --git a/docs/other/index.md b/docs/other/index.md deleted file mode 100644 index 3db9bed..0000000 --- a/docs/other/index.md +++ /dev/null @@ -1 +0,0 @@ -### Editing documentation diff --git a/mkdocs.yml b/mkdocs.yml index cc09496..5efb7d3 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -12,9 +12,22 @@ theme: nav: - Backend: - Overview: backend/index.md + - For developers: backend/for_developers.md + - Using docker: backend/using_docker.md + - API: backend/swagger.md + - Jobs and tasks: backend/states.md + - Persistent storage: backend/persistency.md + - Docker images on GHCR: backend/ghcr_packages.md - Frontend: - Overview: frontend/index.md -- Editing documentation: 
other/index.md + - For developers: frontend/for_developers.md + - Examples: frontend/examples.md + - Authentication: frontend/authentication.md +- Converter: + - Overview: converter/index.md + - Readme: converter/readme.md +- Editing documentation: documentation/index.md plugins: - search