A benchmark run on E2 (2 vCPU, 8GB RAM) with 10 min TO (#81)
siddharth-krishna authored Dec 11, 2024
1 parent c818986 commit 1c5b0a3
Showing 4 changed files with 1,958 additions and 1,943 deletions.
63 changes: 39 additions & 24 deletions README.md
@@ -10,7 +10,7 @@ Preferred use:
- python: 3.12.4
- pip: 24.1.2

We use Python virtual environments to manage the dependencies for each component of this project. This is how to create a virtual environment:
We use Python virtual environments to manage the dependencies for the website. This is how to create a virtual environment:
```shell
python -m venv venv
```
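On Linux/macOS the environment can then be activated with:
```shell
source venv/bin/activate
```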
@@ -29,7 +29,7 @@ And this is how to install the required dependencies once a `venv` is activated:
pip install -r website/requirements.txt
```

We also use the `conda` package manager to manage different solver versions, so please make sure it is installed before running the benchmark runner.
We also use the `conda` package manager to run benchmarks using different solver versions, so please make sure it is installed before running the benchmark runner.
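For example, you can check that `conda` is available with:
```shell
conda --version
```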

### Development

@@ -46,31 +46,46 @@ If you want to skip these pre-commit steps for a particular commit, you can run:
git commit --no-verify
```

## Run Project
## Generating / Fetching Benchmarks

1. **Run Benchmark Runner**
The benchmark runner script creates conda environments containing the solvers and other prerequisites, so a virtual environment is not needed.
```shell
./runner/benchmark_all.sh ./benchmarks/benchmark_config.yaml
```
The script will save the measured runtime and memory consumption into a CSV file in `results/` that the website will then read and display.
The script has other options that you can see with the `-h` flag.
1. The PyPSA benchmarks in `benchmarks/pypsa/` can be generated using the Dockerfile in that directory; see the [instructions](benchmarks/pypsa/README.md) for details.
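For example, building the image follows the standard Docker workflow (the image tag here is illustrative; the linked README is authoritative):
```shell
# Illustrative sketch; see benchmarks/pypsa/README.md for the exact commands
docker build -t pypsa-benchmarks benchmarks/pypsa/
```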

*Note: If you encounter a "permission denied" error, make sure to set the script as executable by running:*
```shell
chmod +x ./runner/benchmark_all.sh
```
1. The JuMP-HiGHS benchmarks in `benchmarks/jump_highs_platform/` contain only the metadata for the benchmarks present in https://github.com/jump-dev/open-energy-modeling-benchmarks/tree/main/instances. The benchmark files themselves are fetched automatically from GitHub by the benchmark runner.

1. **Run Website**
Remember to activate the virtual environment containing the website's requirements, and then run:
```shell
streamlit run website/app.py
```
The website will be running on: [http://localhost:8501](http://localhost:8501)

1. **Merge Metadata**
Run the script to generate a unified `metadata.yaml` file by executing:
1. The metadata of all benchmarks under `benchmarks/` is collected into a unified `results/metadata.yaml` file by the following script:
```shell
python benchmarks/merge_metadata.py
```
This will parse all `metadata*.yaml` files under `benchmarks/` and create `results/metadata.yaml`, containing metadata for all benchmarks.

1. The file `benchmarks/benchmark_config.yaml` specifies the names, sizes (instances), and URLs of the LP/MPS files for each benchmark. This is used by the benchmark runner.
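A hypothetical entry could look like the following (all names and URLs here are illustrative; consult the actual file for the real schema):
```yaml
# Illustrative sketch only -- not the actual contents of benchmark_config.yaml
benchmarks:
  pypsa-eur-elec:              # hypothetical benchmark name
    sizes:
      - name: 2-24h            # hypothetical instance (size) label
        url: https://example.com/pypsa-eur-elec-2-24h.lp
```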

## Running Benchmarks

The benchmark runner script creates conda environments containing the solvers and other prerequisites, so a virtual environment is not needed.
```shell
./runner/benchmark_all.sh ./benchmarks/benchmark_config.yaml
```
The script will save the measured runtime and memory consumption into a CSV file in `results/` that the website will then read and display.
The script has further options, e.g. running only particular years, which you can list with the `-h` flag:
```
Usage: ./runner/benchmark_all.sh [-a] [-y "<space separated years>"] <benchmarks yaml file>
Runs the solvers from the specified years (default all) on the benchmarks in the given file
Options:
-a Append to the results CSV file instead of overwriting. Default: overwrite
-y A space separated string of years to run. Default: 2020 2021 2022 2023 2024
```
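For example, to benchmark only the 2023 and 2024 solvers and append to the existing results CSV:
```shell
./runner/benchmark_all.sh -a -y "2023 2024" ./benchmarks/benchmark_config.yaml
```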

The `benchmark_all.sh` script activates the appropriate conda environment and then calls `python runner/run_benchmarks.py`.
This script can also be called directly, if required, but you must be in a conda environment that contains the solvers you want to benchmark.
For example:
```shell
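# First activate a conda environment that contains the solvers you want to
# benchmark (the environment name below is illustrative):
conda activate solvers-2024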
python runner/run_benchmarks.py benchmarks/benchmark_config.yaml 2024
```

## Running the Website

Remember to activate the virtual environment containing the website's requirements, and then run:
```shell
streamlit run website/app.py
```
The website will be running on: [http://localhost:8501](http://localhost:8501)