DEVELOP.rst
Run locally

  1. Install Python 3.8+, then create and activate a virtual environment.

    $ python -m venv my_venv
    $ source my_venv/bin/activate
    

    The steps below assume your virtual environment is activated. To deactivate the venv, just run deactivate at any time. Read the full Virtual Environment docs for details.
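    If you are unsure whether the environment is active, a quick sanity check (a standard-library sketch, not part of this repository) is to compare sys.prefix against sys.base_prefix:

    ```python
    import sys

    # Inside a venv, sys.prefix points at the environment directory while
    # sys.base_prefix still points at the system Python installation.
    in_venv = sys.prefix != sys.base_prefix
    print("virtualenv active:", in_venv)
    ```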

  2. Install Python dependencies.

    $ pip install -r requirements.txt
    
  3. Install Node.js v14.16.0+.

  4. Install Node.js dependencies.

    $ npm install
    
  5. Install the toolchains needed to compile the runner files (Go and TypeScript; see the compile step below).

  6. Install Docker and docker-compose

    Note: On macOS, Docker containers are run inside a virtual machine. This incurs significant overhead and can skew results unpredictably.

  7. Install Synth

  8. Configure simulated latency. The instructions for this vary by operating system. On Linux, this can be achieved with tc:

    sudo tc qdisc add dev br-webapp-bench root netem delay 1ms
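    One way to confirm the netem delay is in effect is to time TCP connections across the bridge. Below is a hedged sketch: the throwaway local listener exists only so the snippet is self-contained; in practice you would point the host and port at a container on the benchmark network.

    ```python
    import socket
    import threading
    import time

    def measure_connect_rtt(host, port, samples=5):
        """Average TCP connect time in milliseconds over several samples."""
        total = 0.0
        for _ in range(samples):
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=5):
                pass
            total += time.perf_counter() - start
        return total / samples * 1000

    # Throwaway listener on an ephemeral port, purely for demonstration.
    server = socket.socket()
    server.bind(("127.0.0.1", 0))
    server.listen()
    threading.Thread(
        target=lambda: [server.accept() for _ in range(5)], daemon=True
    ).start()

    rtt_ms = measure_connect_rtt("127.0.0.1", server.getsockname()[1])
    print(f"avg connect RTT: {rtt_ms:.3f} ms")
    ```

    With the 1ms netem delay applied to the bridge, connects to a container on that bridge should take noticeably longer than loopback connects.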
    
  9. Generate the dataset.

    $ make new-dataset
    
  10. Load the data into the test databases via $ make load. Alternatively, you can run only the loaders you care about:

    $ make load-django
    $ make load-edgedb
    $ make load-hasura
    $ make load-mongodb
    $ make load-postgres
    $ make load-prisma
    $ make load-sequelize
    $ make load-sqlalchemy
    $ make load-typeorm
    
  11. Compile the runner files (Go, TypeScript):

    $ make compile

  12. Run the JavaScript benchmarks

    First, run the following loaders:

    $ make load-typeorm
    $ make load-sequelize
    $ make load-postgres
    $ make load-prisma
    $ make load-edgedb
    

    Then run the benchmarks:

    $ make run-js
    

    The results will be generated into docs/js.html.

  13. Run the Python benchmarks

    First, run the following loaders:

    $ make load-postgres
    $ make load-django
    $ make load-sqlalchemy
    $ make load-edgedb
    

    Then run the benchmarks:

    $ make run-py
    

    The results will be generated into docs/py.html.

  14. Run the SQL benchmarks

    First, run the following loaders:

    $ make load-postgres
    $ make load-edgedb
    

    Then run the benchmarks:

    $ make run-sql
    

    The results will be generated into docs/sql.html.
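    The three run targets each write an HTML report into docs/. A small convenience sketch (assumed to be run from the repository root) that opens whichever reports exist in your default browser:

    ```python
    import pathlib
    import webbrowser

    # Look for the reports the make targets generate and open any that exist.
    reports = [pathlib.Path("docs") / f"{name}.html" for name in ("js", "py", "sql")]
    existing = [r for r in reports if r.exists()]
    for report in existing:
        webbrowser.open(report.resolve().as_uri())
    print(f"opened {len(existing)} report(s)")
    ```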

  15. [Optional] Run a custom benchmark

    The benchmarking system can be customized by running python bench.py directly.

    $ python bench.py \
        --html <path/to/file> \
        --json <path/to/file> \
        --concurrency <number> \
        --query <query_name> \
        [targets]

    The query_name must be one of the following options. To run multiple queries, pass the --query flag multiple times.

    • get_movie
    • get_person
    • get_user
    • update_movie
    • insert_user
    • insert_movie
    • insert_movie_plus

    Specify a custom set of targets with a space-separated list of the following options:

    • typeorm
    • sequelize
    • prisma
    • edgedb_js_qb
    • django
    • django_restfw
    • mongodb
    • sqlalchemy
    • edgedb_py_sync
    • edgedb_py_json
    • edgedb_py_json_async
    • edgedb_go
    • edgedb_go_json
    • edgedb_go_graphql
    • edgedb_go_http
    • edgedb_js
    • edgedb_js_json
    • postgres_asyncpg
    • postgres_psycopg
    • postgres_pq
    • postgres_pgx
    • postgres_pg
    • postgres_hasura_go
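    A repeated --query flag plus a positional list of targets maps naturally onto argparse's append action. The sketch below is purely illustrative of that interface shape, not bench.py's actual parser:

    ```python
    import argparse

    # Illustrative CLI: --query may be given multiple times; any remaining
    # positional arguments are treated as benchmark targets.
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--query",
        action="append",
        default=[],
        choices=[
            "get_movie", "get_person", "get_user", "update_movie",
            "insert_user", "insert_movie", "insert_movie_plus",
        ],
    )
    parser.add_argument("targets", nargs="*")

    args = parser.parse_args(
        ["--query", "get_movie", "--query", "get_user",
         "edgedb_py_json", "postgres_asyncpg"]
    )
    print(args.query)    # ['get_movie', 'get_user']
    print(args.targets)  # ['edgedb_py_json', 'postgres_asyncpg']
    ```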

    You can see the full list of options like so:

    $ python bench.py --help