
Commit

Start of Merge Conflict Resolution
sweep-support committed Jan 24, 2024
2 parents 78c3073 + 9f1c7bc commit 24eefe1
Showing 44 changed files with 16,451 additions and 6,291 deletions.
78 changes: 41 additions & 37 deletions .github/Architecture.md
@@ -1,47 +1,50 @@
# Architecture

## Backend Architecture
## Training Architecture

```
📦 backend
| |- 📂 ml:
📦 training
| |- 📂 training:
| | |- 📂 routes:
| | | |- 📂 datasets:
| | | | |- 📂 default:
| | | | | |- 📜 columns.py
| | | | | |- 📜 __init__.py
| | | | | |- 📜 schemas.py
| | | | |- 📜 __init__.py
| | | |- 📂 tabular:
| | | | |- 📜 __init__.py
| | | | |- 📜 tabular.py
| | | | |- 📜 schemas.py
| | | |- 📂 image:
| | | | |- 📜 image.py
| | | | |- 📜 __init__.py
| | | | |- 📜 schemas.py
| | | |- 📜 __init__.py
| | | |- 📜 schemas.py
| | |- 📂 core:
| | | |- 📜 dataset.py : read in the dataset through URL or file upload
| | | |- 📜 __init__.py
| | | |- 📜 authenticator.py
| | | |- 📜 optimizer.py : what optimizer to use (ie: SGD or Adam for now)
| | | |- 📜 trainer.py
| | | |- 📜 dl_model.py : torch model based on user specifications from drag and drop
| | | |- 📜 criterion.py
| | |- 📜 settings.py
| | |- 📜 asgi.py
| | |- 📜 wsgi.py
| | |- 📜 __init__.py
| | |- 📜 ml_model_parser.py
| | |- 📜 ml_trainer.py : train a classical machine learning model on the dataset
| |- 📂 common:
| | |- 📜 ai_drive.py
| | |- 📜 preprocessing.py
| | |- 📜 email_notifier.py : Endpoint to send email notification of training results via API Gateway + AWS SES
| | |- 📜 default_datasets.py : store logic to load in default datasets from scikit-learn
| | |- 📜 dataset.py : read in the dataset through URL or file upload
| | |- 📜 constants.py : list of helpful constants
| | |- 📜 utils.py : utility functions that could be helpful
| | |- 📜 __init__.py
| | |- 📜 loss_functions.py : loss function enum
| | |- 📜 kernel.py
| | |- 📜 optimizer.py : what optimizer to use (ie: SGD or Adam for now)
| |- 📂 dl:
| | |- 📜 detection.py
| | |- 📜 dl_model_parser.py : parse the user specified pytorch model
| | |- 📜 dl_eval.py : Evaluation functions for deep learning models in Pytorch (eg: accuracy, loss, etc)
| | |- 📜 __init__.py
| | |- 📜 dl_model.py : torch model based on user specifications from drag and drop
| | |- 📜 dl_trainer.py : train a deep learning model on the dataset
| |- 📂 aws_helpers:
| | |- 📂 dynamo_db_utils:
| | | |- 📜 trainspace_db.py
| | | |- 📜 userprogress_db.py
| | | |- 📜 constants.py : list of helpful constants
| | | |- 📜 DynamoUnitTests.md
| | | |- 📜 dynamo_db_utils.py
| | |- 📜 __init__.py
| |- 📜 app.py : run the backend (entrypoint script)
| | |- 📜 urls.py
| |- 📜 environment.yml
| |- 📜 poetry.lock
| |- 📜 middleware.py
| |- 📜 __init__.py
| |- 📜 data.csv : data csv file for use in the playground
| |- 📜 epoch_times.csv
| |- 📜 manage.py
| |- 📜 docker-compose.yml
| |- 📜 cli.py
| |- 📜 docker-compose.prod.yml
| |- 📜 pytest.ini
| |- 📜 pyproject.toml
| |- 📜 README.md
| |- 📜 Dockerfile
```

## Frontend Architecture
@@ -219,5 +222,6 @@
| |- 📜 tsconfig.json
| |- 📜 pnpm-lock.yaml
| |- 📜 jest.config.js
| |- 📜 yarn.lock
```

7 changes: 5 additions & 2 deletions .github/workflows/node.js.yml
@@ -9,8 +9,7 @@ on:
- "frontend/**"
pull_request:
paths:
- "frontend/**"

- "frontend/**"
jobs:
build:
runs-on: ubuntu-latest
@@ -47,5 +46,9 @@ jobs:
- name: Lint checks
run: pnpm lint
working-directory: ./frontend

- name: Run depcheck
run: npx depcheck --ignore-bin-package
working-directory: ./frontend

# future: add yarn build once build errors are resolved
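
This workflow's checks can also be run locally before pushing (a minimal sketch; assumes Node and pnpm are already installed):

```sh
cd frontend
pnpm lint                            # same lint check the workflow runs
npx depcheck --ignore-bin-package    # flag unused or missing dependencies
```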
42 changes: 21 additions & 21 deletions .pre-commit-config.yaml
@@ -1,22 +1,22 @@
repos:
- repo: https://github.com/python-poetry/poetry
rev: 1.5.1
hooks:
- id: poetry-check
entry: sh -c 'cd backend && poetry check'

# - id: poetry-lock
# entry: sh -c 'cd backend && poetry lock'

- id: poetry-check
entry: sh -c 'cd aws_write && poetry check'

# - id: poetry-lock
# entry: sh -c 'cd aws_write && poetry lock'

- repo: https://github.com/gitguardian/ggshield
rev: v1.14.2
hooks:
- id: ggshield
language_version: python3
repos:
- repo: https://github.com/python-poetry/poetry
rev: 1.5.1
hooks:
- id: poetry-check
entry: sh -c 'cd backend && poetry check'

# - id: poetry-lock
# entry: sh -c 'cd backend && poetry lock'

- id: poetry-check
entry: sh -c 'cd aws_write && poetry check'

# - id: poetry-lock
# entry: sh -c 'cd aws_write && poetry lock'

- repo: https://github.com/gitguardian/ggshield
rev: v1.14.2
hooks:
- id: ggshield
language_version: python3
stages: [commit]
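
These hooks can also be exercised locally without making a commit (a minimal sketch; assumes `pre-commit` is installed as described in the README):

```sh
pre-commit install          # register the hooks defined in this config
pre-commit run --all-files  # run poetry-check and ggshield against the whole repository
```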
42 changes: 21 additions & 21 deletions README.md
@@ -11,7 +11,7 @@ Web Application where people new to Machine Learning can input a dataset and exp
Have the following installed first:

1. [Node.js v20 via NVM](https://github.com/nvm-sh/nvm#installing-and-updating) (Install nvm first, and then install node & npm using nvm)
1. [Mamba](https://github.com/conda-forge/miniforge#miniforge) (Make sure to install using the Miniforge distribution. On windows, remember to check the box that says that it will add mamba to path)
1. [Mamba](https://github.com/conda-forge/miniforge#miniforge) (Make sure to install using the Miniforge distribution. On Windows, remember to check the box that says it will add mamba to PATH)
1. [pip](https://pip.pypa.io/en/stable/installation/) (pip is also installed automatically with Python via Python's installer; make sure this version of pip is installed globally)
1. [dlp-cli](https://github.com/DSGT-DLP/dlp-cli#readme) (We have our own cli!)
1. [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)
@@ -25,7 +25,7 @@ Have the following installed first:
1. [Postman](https://www.postman.com/downloads/) - Extremely helpful for testing REST APIs
1. [Chrome](https://www.google.com/chrome/) - For Chrome developer tools
1. [Redux Devtools](https://chrome.google.com/webstore/detail/redux-devtools/lmhkpmbekcpmknklioeibfkpmmfibljd) - Helpful for debugging Redux state
1. [Docker](https://www.docker.com/) - For Docker images
1. [Docker](https://docs.docker.com/engine/install/) - For Docker images
1. [go](https://go.dev/doc/install) - In case you ever need to contribute to the dlp-cli
1. VSCode Extensions:
1. [Github Copilot](https://marketplace.visualstudio.com/items?itemName=GitHub.copilot)
@@ -59,7 +59,21 @@ Run the following commands in the project directory (the root folder created aft
| Install/Update Frontend Packages | `dlp-cli frontend install` |
| Install/Update Backend Packages | `dlp-cli backend install` |

## 3. GitGuardian Pre-commit Check
## 3. To start on localhost

Run the following commands in the project directory (the root folder created after cloning):

| Action | Command |
| -------------------- | ------------------------ |
| Running the Frontend | `dlp-cli frontend start` |
| Running the Backend | `dlp-cli backend start` |

Make sure to run the above two commands in separate terminals. You should see these Terminal messages and be able to go to these URLs on success:

![](.github/readme_images/frontend_start.png)
![](.github/readme_images/backend_start.png)
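
For example (a sketch; run each command from the project root in its own terminal):

```sh
# terminal 1
dlp-cli frontend start

# terminal 2
dlp-cli backend start
```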

## 4. GitGuardian Pre-commit Check

To install the GitGuardian cli and pre-commit, run

@@ -74,7 +88,7 @@ To protect our secrets, we use the GitGuardian ggshield pre-commit check to ensu
pre-commit install
```

You should get output like "pre-commit installed at .git/hooks/pre-commit". Login to GitGuardian to activate the pre-commit hook using
You should get output like `pre-commit installed at .git/hooks/pre-commit`. Login to GitGuardian to activate the pre-commit hook using

```sh
ggshield auth login
@@ -88,24 +102,10 @@ Access the VSCode command palette via `Ctrl+Shift+P`. Press `Python: Select Inte

Select the Python Interpreter named `dlp`.

## 4. To start on localhost

Run the following commands in the project directory (the root folder created after cloning):

| Action | Command |
| -------------------- | ------------------------ |
| Running the Frontend | `dlp-cli frontend start` |
| Running the Backend | `dlp-cli backend start` |

Make sure to run the above two commands in separate terminals. You should see these Terminal messages and be able to go to these URLs on success:

![](.github/readme_images/frontend_start.png)
![](.github/readme_images/backend_start.png)

## 5. AWS Setup
If you will be working on tasks that interface with AWS resources/services, please follow the steps below (install the AWS CLI first using this [link](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)):

1. Request an AWS Account for Deep Learning Playground by messaging Faris, Karthik, or Daniel in the DLP Discord. Please include your Github username along with your personal email account
1. Request an AWS Account for Deep Learning Playground by messaging Faris, Karthik, Daniel, or a Project Lead (@Project Lead) in the DLP Discord. Please include your Github username along with your personal email account and full name
1. Once an AWS Account has been created, you will receive an email from AWS that will require you to setup a password
1. When you login, you should be seeing that the account you're added under is `Data Science Initiative Inc`
1. Click on the dropdown to expand the `Data Science Initiative Inc` entry and select the `Command Line or programmatic access button`
@@ -120,9 +120,9 @@ If you will be working on tasks that interface with AWS resources/services, plea
````
1. Make sure you follow the instructions in the terminal to ensure your credentials are set correctly (eg: allow botocore to access data should be selected as "yes")
1. Run `cat ~/.aws/config` to look for the sso profile configured.
1. Run `export AWS_PROFILE=<sso_profile_name from step 6>`
1. **IMPORTANT:** Run `export AWS_PROFILE=<sso_profile_name from the previous step>`, for Linux and Mac, or `setx AWS_PROFILE <sso_profile_name from the previous step>` for Windows. Note that when you close and reopen your terminal, you will **need** to rerun this export command
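
For example, on Linux or Mac (a sketch; the profile name below is hypothetical and should be replaced with the actual `sso_profile_name` from your `~/.aws/config`):

```sh
export AWS_PROFILE=dlp-sso       # hypothetical profile name taken from ~/.aws/config
aws sts get-caller-identity      # optional sanity check that the profile resolves
```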
Please message in the DLP Discord or view the [Bug Manual page](https://github.com/DSGT-DLP/Deep-Learning-Playground/wiki/Bug-Manual) if you have any difficulty/issue with these steps.
Please message in the DLP Discord or view the [Bug Manual page](https://github.com/DSGT-DLP/Deep-Learning-Playground/wiki/Bug-Manual) and [Documentation](https://www.notion.so/General-011ddb00fda146048ec1beb2d18c8abc) if you have any difficulty/issue with these steps.
# Architecture
1 change: 0 additions & 1 deletion aws_write/__init__.py

This file was deleted.

39 changes: 0 additions & 39 deletions aws_write/app.py

This file was deleted.

1 change: 0 additions & 1 deletion aws_write/aws_helpers/__init__.py

This file was deleted.

1 change: 0 additions & 1 deletion aws_write/aws_helpers/dynamo_db/__init__.py

This file was deleted.

12 changes: 0 additions & 12 deletions aws_write/aws_helpers/dynamo_db/constants.py

This file was deleted.

