Generated from Cloud-CV/EvalAI-Starters
Commit 4602c17 (0 parents): 46 changed files with 1,769 additions and 0 deletions.
@@ -0,0 +1,38 @@ (new file: GitHub Actions workflow)
```yaml
# This workflow will install Python dependencies, run tests and lint with a single version of Python
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions

name: evalai-challenge
on:
  push:
    branches: [challenge]
  pull_request:
    types: [opened, synchronize, reopened, edited]
    branches: [challenge]

jobs:
  build:
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: 3.7.5
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          if [ -f github/requirements.txt ]; then pip install -r github/requirements.txt; fi
      - name: Validate challenge
        run: |
          python3 github/challenge_processing_script.py
        env:
          IS_VALIDATION: 'True'
          GITHUB_CONTEXT: ${{ toJson(github) }}
          GITHUB_AUTH_TOKEN: ${{ secrets.AUTH_TOKEN }}
      - name: Create or update challenge
        run: |
          python3 github/challenge_processing_script.py
        if: ${{ github.event_name == 'push' && success() }}
        env:
          IS_VALIDATION: 'False'
          GITHUB_CONTEXT: ${{ toJson(github) }}
          GITHUB_AUTH_TOKEN: ${{ secrets.AUTH_TOKEN }}
```
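
Both the validate and the create/update steps run the same `github/challenge_processing_script.py`; only the `IS_VALIDATION` environment variable differs, and the create step additionally runs only on pushes. A minimal sketch of how a script might consume these variables (illustrative only, not the actual script from the template):

```python
# Illustrative only: how a processing script could read the env vars
# supplied by the workflow's `env:` blocks above.
import json
import os

is_validation = os.environ.get("IS_VALIDATION", "True") == "True"
github_context = json.loads(os.environ.get("GITHUB_CONTEXT", "{}"))  # event payload
auth_token = os.environ.get("GITHUB_AUTH_TOKEN", "")  # the AUTH_TOKEN secret

if is_validation:
    print("Dry run: validate the challenge config and report errors")
else:
    print("Push to the challenge branch: create or update the challenge on EvalAI")
```
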
@@ -0,0 +1,13 @@ (new file: .gitignore)
```
challenge_config.zip
evaluation_script.zip

# virtualenv
env/
venv/

# text-editor related files
.vscode/

# cache
__pycache__
*.pyc
```
@@ -0,0 +1,89 @@ (new file: README.md)
## How to create a challenge on EvalAI?

If you are looking for a simple challenge configuration that you can replicate to create a challenge on EvalAI, you are in the right place. Follow the instructions below to get started.

## Directory Structure

```
.
├── README.md
├── annotations                               # Contains the annotations for dataset splits
│   ├── test_annotations_devsplit.json        # Annotations for dev split
│   └── test_annotations_testsplit.json       # Annotations for test split
├── challenge_data                            # Contains scripts to test the evaluation script locally
│   ├── challenge_1                           # Contains evaluation script for the challenge
│   │   ├── __init__.py                       # Imports the main.py file for evaluation
│   │   └── main.py                           # Challenge evaluation script
│   └── __init__.py                           # Imports the modules which involve evaluation script loading
├── challenge_config.yaml                     # Configuration file to define challenge setup
├── evaluation_script                         # Contains the evaluation script
│   ├── __init__.py                           # Imports the modules that involve annotations loading etc.
│   └── main.py                               # Contains the main `evaluate()` method
├── logo.jpg                                  # Logo image of the challenge
├── submission.json                           # Sample submission file
├── run.sh                                    # Script to create the challenge configuration zip to be uploaded on the EvalAI website
├── templates                                 # Contains challenge related HTML templates
│   ├── challenge_phase_1_description.html    # Challenge Phase 1 description template
│   ├── challenge_phase_2_description.html    # Challenge Phase 2 description template
│   ├── description.html                      # Challenge description template
│   ├── evaluation_details.html               # Describes how submissions will be evaluated for each challenge phase
│   ├── submission_guidelines.html            # Describes how to make submissions to the challenge
│   └── terms_and_conditions.html             # Terms and conditions for the challenge
└── worker                                    # Contains the scripts to test the evaluation script locally
    ├── __init__.py                           # Imports the module that involves loading the evaluation script
    └── run.py                                # Contains the code to run the evaluation locally
```

## Create challenge using GitHub

1. Use this repository as a [template](https://docs.github.com/en/free-pro-team@latest/github/creating-cloning-and-archiving-repositories/creating-a-repository-from-a-template).

2. Generate your [GitHub personal access token](https://docs.github.com/en/free-pro-team@latest/github/authenticating-to-github/creating-a-personal-access-token) and copy it to your clipboard.

3. Add the personal access token to the forked repository's [secrets](https://docs.github.com/en/free-pro-team@latest/actions/reference/encrypted-secrets#creating-encrypted-secrets-for-a-repository) with the name `AUTH_TOKEN` (for a command-line alternative, see the first sketch after this list).

4. Now, go to [EvalAI](https://eval.ai) to fetch the following details -
   1. `evalai_user_auth_token` - Go to the [profile page](https://eval.ai/web/profile) after logging in and click on `Get your Auth Token` to copy your auth token.
   2. `host_team_pk` - Go to the [host team page](https://eval.ai/web/challenge-host-teams) and copy the `ID` of the team you want to use for challenge creation.
   3. `evalai_host_url` - Use `https://eval.ai` for the production server and `https://staging.eval.ai` for the staging server.

5. Create a branch named `challenge` in the forked repository from the `master` branch.
   <span style="color:purple">Note: Only changes in the `challenge` branch will be synchronized with the challenge on EvalAI.</span>

6. Add `evalai_user_auth_token` and `host_team_pk` in `github/host_config.json` (see the second sketch after this list for what the filled-in file roughly looks like).

7. Read the [EvalAI challenge creation documentation](https://evalai.readthedocs.io/en/latest/configuration.html) to decide how you want to structure your challenge. Once you are ready, make changes to the YAML file, HTML templates, and evaluation script according to your needs.

8. Commit the changes, push the `challenge` branch to the repository, and wait for the build to complete. View the [logs of your build](https://docs.github.com/en/free-pro-team@latest/actions/managing-workflow-runs/using-workflow-run-logs#viewing-logs-to-diagnose-failures).

9. If the challenge config contains errors, an issue listing them will be opened automatically in the repository; otherwise, the challenge will be created on EvalAI.

10. Go to [Hosted Challenges](https://eval.ai/web/hosted-challenges) to view your challenge. The challenge will be publicly available once the EvalAI admin approves it.

11. To update the challenge on EvalAI, make changes in the repository, push them to the `challenge` branch, and wait for the build to complete.

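If you prefer the command line to the web UI for step 3, the [GitHub CLI](https://cli.github.com/) can store the secret. This assumes `gh` is installed and authenticated against your fork:

```sh
# Store the personal access token as the AUTH_TOKEN secret on the current repository.
# gh prompts for the secret value; paste the token generated in step 2.
gh secret set AUTH_TOKEN
```

For step 6, `github/host_config.json` is a small JSON file; once filled in, it looks roughly like the sketch below. The exact key names here are an assumption based on the template defaults, so check the file in your fork for the authoritative schema:

```json
{
    "token": "<your evalai_user_auth_token>",
    "team_pk": "<your host_team_pk>",
    "evalai_host_url": "https://eval.ai"
}
```
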
## Create challenge using config

1. Fork this repository.

2. Read the [EvalAI challenge creation documentation](https://evalai.readthedocs.io/en/latest/configuration.html) to decide how you want to structure your challenge. Once you are ready, make changes to the YAML file, HTML templates, and evaluation script according to your needs.

3. Once you are done making changes, run the command `./run.sh` to generate `challenge_config.zip` (a sketch of what this script does follows this list).

4. Upload the `challenge_config.zip` on [EvalAI](https://eval.ai) to create a challenge. The challenge will be available publicly once the EvalAI admin approves it.

5. To update the challenge on EvalAI, use the UI to update the details.

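`run.sh` ships with the template; conceptually it bundles the evaluation script into `evaluation_script.zip` and then packages everything the config references into `challenge_config.zip`. A rough sketch under that assumption (the actual script in the repository is authoritative and may differ):

```sh
#!/bin/bash
# Sketch only: bundle the evaluation script, then the full challenge config.
cd evaluation_script && zip -r ../evaluation_script.zip . && cd ..
zip -r challenge_config.zip challenge_config.yaml evaluation_script.zip \
    templates/ annotations/ logo.jpg
```
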
## Test your evaluation script locally

In order to test the evaluation script locally before uploading it to the [EvalAI](https://eval.ai) server, please follow the instructions below -

1. Copy the evaluation script, i.e. `__init__.py`, `main.py`, and any other relevant files, from the `evaluation_script/` directory to the `challenge_data/challenge_1/` directory.

2. Edit the `challenge_phase` name, annotation file name, and submission file name in the `worker/run.py` file to the challenge phase codename you want to test, the corresponding annotation file in the `annotations/` folder, and the corresponding submission file, respectively (see the sketch after this list).

3. Run the command `python -m worker.run` from the directory where the `annotations/`, `challenge_data/`, and `worker/` directories are present. If the command runs successfully, the evaluation script works locally and should work on the server as well.

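The `worker/run.py` shipped with the template wires these names together. A simplified sketch of the flow (the names here are illustrative; edit the real file as step 2 describes):

```python
# Simplified sketch of what worker/run.py does: load the copied evaluation
# script and invoke evaluate() on local files.
from challenge_data.challenge_1 import main  # __init__.py imports main.py

# Edit these to match the phase and files you want to test.
challenge_phase = "dev"
annotation_file = "annotations/test_annotations_devsplit.json"
submission_file = "submission.json"

output = main.evaluate(
    annotation_file,
    submission_file,
    challenge_phase,
    submission_metadata={},  # the placeholder evaluate() just prints this
)
print(output)
```
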
## Facing problems in creating a challenge?

Please feel free to open an issue on our [GitHub Repository](https://github.com/Cloud-CV/EvalAI-Starter/issues) or contact us at [email protected].
@@ -0,0 +1,3 @@ (new file: placeholder annotation JSON)
```json
{
  "foo": "bar"
}
```
@@ -0,0 +1,3 @@ (new file: placeholder annotation JSON)
```json
{
  "foo": "bar"
}
```
@@ -0,0 +1,145 @@ (new file: challenge_config.yaml)
```yaml
# If you are not sure what all these fields mean, please refer to our documentation here:
# https://evalai.readthedocs.io/en/latest/configuration.html
title: Random Number Generator Challenge
short_description: Random number generation challenge for each submission
description: templates/description.html
evaluation_details: templates/evaluation_details.html
terms_and_conditions: templates/terms_and_conditions.html
image: logo.jpg
submission_guidelines: templates/submission_guidelines.html
leaderboard_description: Lorem ipsum dolor sit amet, consectetur adipiscing elit. Cras egestas a libero nec sagittis.
evaluation_script: evaluation_script.zip
remote_evaluation: False
is_docker_based: False
start_date: 2019-01-01 00:00:00
end_date: 2099-05-31 23:59:59
published: True

leaderboard:
  - id: 1
    schema:
      {
        "labels": ["Metric1", "Metric2", "Metric3", "Total"],
        "default_order_by": "Total",
        "metadata": {
          "Metric1": {
            "sort_ascending": True,
            "description": "Lorem ipsum dolor sit amet, consectetur adipiscing elit.",
          },
          "Metric2": {
            "sort_ascending": True,
            "description": "Lorem ipsum dolor sit amet, consectetur adipiscing elit.",
          }
        }
      }

challenge_phases:
  - id: 1
    name: Dev Phase
    description: templates/challenge_phase_1_description.html
    leaderboard_public: False
    is_public: True
    is_submission_public: True
    start_date: 2019-01-19 00:00:00
    end_date: 2099-04-25 23:59:59
    test_annotation_file: annotations/test_annotations_devsplit.json
    codename: dev
    max_submissions_per_day: 5
    max_submissions_per_month: 50
    max_submissions: 50
    default_submission_meta_attributes:
      - name: method_name
        is_visible: True
      - name: method_description
        is_visible: True
      - name: project_url
        is_visible: True
      - name: publication_url
        is_visible: True
    submission_meta_attributes:
      - name: TextAttribute
        description: Sample
        type: text
        required: False
      - name: SingleOptionAttribute
        description: Sample
        type: radio
        options: ["A", "B", "C"]
      - name: MultipleChoiceAttribute
        description: Sample
        type: checkbox
        options: ["alpha", "beta", "gamma"]
      - name: TrueFalseField
        description: Sample
        type: boolean
        required: True
    is_restricted_to_select_one_submission: False
    is_partial_submission_evaluation_enabled: False
    allowed_submission_file_types: ".json, .zip, .txt, .tsv, .gz, .csv, .h5, .npy, .npz"
  - id: 2
    name: Test Phase
    description: templates/challenge_phase_2_description.html
    leaderboard_public: True
    is_public: True
    is_submission_public: True
    start_date: 2019-01-01 00:00:00
    end_date: 2099-05-24 23:59:59
    test_annotation_file: annotations/test_annotations_testsplit.json
    codename: test
    max_submissions_per_day: 5
    max_submissions_per_month: 50
    max_submissions: 50
    default_submission_meta_attributes:
      - name: method_name
        is_visible: True
      - name: method_description
        is_visible: True
      - name: project_url
        is_visible: True
      - name: publication_url
        is_visible: True
    submission_meta_attributes:
      - name: TextAttribute
        description: Sample
        type: text
      - name: SingleOptionAttribute
        description: Sample
        type: radio
        options: ["A", "B", "C"]
      - name: MultipleChoiceAttribute
        description: Sample
        type: checkbox
        options: ["alpha", "beta", "gamma"]
      - name: TrueFalseField
        description: Sample
        type: boolean
    is_restricted_to_select_one_submission: False
    is_partial_submission_evaluation_enabled: False

dataset_splits:
  - id: 1
    name: Train Split
    codename: train_split
  - id: 2
    name: Test Split
    codename: test_split

challenge_phase_splits:
  - challenge_phase_id: 1
    leaderboard_id: 1
    dataset_split_id: 1
    visibility: 1
    leaderboard_decimal_precision: 2
    is_leaderboard_order_descending: True
  - challenge_phase_id: 2
    leaderboard_id: 1
    dataset_split_id: 1
    visibility: 3
    leaderboard_decimal_precision: 2
    is_leaderboard_order_descending: True
  - challenge_phase_id: 2
    leaderboard_id: 1
    dataset_split_id: 2
    visibility: 1
    leaderboard_decimal_precision: 2
    is_leaderboard_order_descending: True
```
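
One constraint worth noting: the metric names in the leaderboard `labels` above and the dataset split `codename`s must line up with what the evaluation script returns. Each entry in the script's `output["result"]` list is keyed by a split codename, and its inner dict is keyed by the leaderboard labels, e.g.:

```python
# Shape of one result entry the evaluation script must return for this config:
# outer key = dataset split codename, inner keys = leaderboard "labels".
result_entry = {
    "train_split": {"Metric1": 10, "Metric2": 20, "Metric3": 30, "Total": 60}
}
```
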
Empty file.
@@ -0,0 +1 @@ (new file: `__init__.py`)
```python
from .main import evaluate
```
@@ -0,0 +1,83 @@ (new file: evaluation script `main.py`)
```python
import random


def evaluate(test_annotation_file, user_submission_file, phase_codename, **kwargs):
    """
    Evaluates the submission for a particular challenge phase and returns score

    Arguments:
        `test_annotation_file`: Path to test_annotation_file on the server
        `user_submission_file`: Path to file submitted by the user
        `phase_codename`: Phase to which submission is made

        `**kwargs`: keyword arguments that contain additional submission
        metadata that challenge hosts can use to send Slack notifications.
        You can access the submission metadata
        with kwargs['submission_metadata']

        Example: A sample submission metadata can be accessed like this:
        >>> print(kwargs['submission_metadata'])
        {
            "status": u"running",
            "when_made_public": None,
            "participant_team": 5,
            "input_file": "https://abc.xyz/path/to/submission/file.json",
            "execution_time": u"123",
            "publication_url": u"ABC",
            "challenge_phase": 1,
            "created_by": u"ABC",
            "stdout_file": "https://abc.xyz/path/to/stdout/file.json",
            "method_name": u"Test",
            "stderr_file": "https://abc.xyz/path/to/stderr/file.json",
            "participant_team_name": u"Test Team",
            "project_url": u"http://foo.bar",
            "method_description": u"ABC",
            "is_public": False,
            "submission_result_file": "https://abc.xyz/path/result/file.json",
            "id": 123,
            "submitted_at": u"2017-03-20T19:22:03.880652Z",
        }
    """
    print("Starting Evaluation.....")
    print("Submission related metadata:")
    print(kwargs["submission_metadata"])

    output = {}
    if phase_codename == "dev":
        print("Evaluating for Dev Phase")
        output["result"] = [
            {
                "train_split": {
                    "Metric1": random.randint(0, 99),
                    "Metric2": random.randint(0, 99),
                    "Metric3": random.randint(0, 99),
                    "Total": random.randint(0, 99),
                }
            }
        ]
        # To display the results in the result file
        output["submission_result"] = output["result"][0]["train_split"]
        print("Completed evaluation for Dev Phase")
    elif phase_codename == "test":
        print("Evaluating for Test Phase")
        output["result"] = [
            {
                "train_split": {
                    "Metric1": random.randint(0, 99),
                    "Metric2": random.randint(0, 99),
                    "Metric3": random.randint(0, 99),
                    "Total": random.randint(0, 99),
                }
            },
            {
                "test_split": {
                    "Metric1": random.randint(0, 99),
                    "Metric2": random.randint(0, 99),
                    "Metric3": random.randint(0, 99),
                    "Total": random.randint(0, 99),
                }
            },
        ]
        # To display the results in the result file
        output["submission_result"] = output["result"][0]
        print("Completed evaluation for Test Phase")
    return output
```
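
Because the metrics are random placeholders, sanity-checking `evaluate()` locally is straightforward; for example, using the sample files from this repository and a stub metadata dict:

```python
# Quick local check: the placeholder evaluate() ignores file contents and
# just emits random metrics for the requested phase.
from evaluation_script.main import evaluate

output = evaluate(
    "annotations/test_annotations_devsplit.json",  # test_annotation_file
    "submission.json",                             # user_submission_file
    "dev",                                         # phase_codename
    submission_metadata={"status": "running", "participant_team_name": "Test Team"},
)
print(output["submission_result"])  # e.g. {'Metric1': 57, 'Metric2': 3, ...}
```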