Move px-bench-env.yml back up to root of repository, update README.md, and clean up examples structure #15

Merged · 5 commits · Nov 30, 2023
Changes from 2 commits
7 changes: 3 additions & 4 deletions README.md
@@ -11,15 +11,14 @@ docker push ...
```
* Create a namespace for your benchmarking, and set your context to it (or ensure that you are applying all YAML below to that namespace)
* TBD: Create a wrapper script that will create the namespace and apply all YAML in order and with correct timing!
-* Copy an example `px-bench-env.yml` from the examples directory. There are various examples for different clound enviroments.
* Create the px-bench namespace with `kubectl create ns px-bench`
-* Edit `px-bench-env.yml` to set the ConfigMap `env` to set desired values. If necessary, update `image:` to reflect the image you built.
-* `kubectl -n px-bench apply -f px-bench-env.yml` to start apply the configuration settings.
+* Edit `px-bench-env.yml` to set the ConfigMap `env` to set desired values. If necessary, update `image:` to reflect the image you built. NOTE: SET YOUR STORAGECLASSES IN THIS FILE!
* Do NOT edit `px-bench-main.yml` (unless you are attempting to change the behavior of the benchmark!)
* In order to consume most of the available RAM so it is not used for buffering, run `kubectl -n px-bench apply -f chewram.yml`.
* Wait for `kubectl -n px-bench get pod -n chewram` for all the pods to show as `1/1 Running`.
+* `kubectl -n px-bench apply -f px-bench-env.yml` to apply the configuration settings.
* `kubectl -n px-bench apply -f px-bench.yml` to start the run.
-* Monitor its progress with `kubectl logs -n px-bench -l px-bench=fio -f`. With the defaults, runtime is expected to be around 15 minutes.
+* Monitor progress with `kubectl logs -n px-bench -l px-bench=fio -f`. With the defaults, runtime is expected to be around 15 minutes.
* Wait for `kubectl get pod -n px-bench` for all the pods to show as Completed.

This will iterate through the combinations of `blocksize_list`, `readwrite_list`, and `storageclass_list` set in the ConfigMap, and run those as independent `fio` jobs. Output will go to the ConfigMap `fio-output`. Configurations will go to the ConfigMap `fio-config`.
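The steps above configure everything through the `env` ConfigMap in `px-bench-env.yml`. As a point of reference, a minimal sketch of what that ConfigMap might contain is shown below; only the three `*_list` keys come from the README itself, while the key/value format and the example values are assumptions rather than the repository's actual file.

```yaml
# Hypothetical sketch of the `env` ConfigMap in px-bench-env.yml.
# Only blocksize_list, readwrite_list, and storageclass_list are named in the
# README above; the value format and example values are assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: env
  namespace: px-bench
data:
  blocksize_list: "4k 64k 1m"                      # fio block sizes to iterate over
  readwrite_list: "randread randwrite read write"  # fio rw modes to iterate over
  storageclass_list: "px-csi-db ebs-csi"           # set YOUR storageclasses here
```

Each combination of the three lists is run as an independent `fio` job, with results collected in the `fio-output` ConfigMap as described above.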
10 changes: 10 additions & 0 deletions examples/aws/README_AWS.md
@@ -0,0 +1,10 @@
# README for AWS
## Suggestions and breadcrumbs for testing on AWS

See https://docs.aws.amazon.com/eks/latest/userguide/storage.html for links to various drivers available in EKS.

EBS: Since Portworx consumes EBS volumes as Portworx Clouddrives, "apples to apples" comparisons between AWS native storage and Portworx should use EBS volumes that are the same size as the Portworx clouddrive used for the pool.

Installation of the EBS CSI is documented here: https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html

Update `px-bench-env.yml` to your desired AWS storageclass. For example, benchmark the EBS native storageclass "ebs-csi" vs. the Portworx storageclass "px-csi-db".
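As a sketch of the AWS side of such a comparison, an `ebs-csi` StorageClass might look like the example below. The `ebs.csi.aws.com` provisioner comes from the EBS CSI driver linked above; the class name, gp3 volume type, and binding mode are illustrative assumptions, not something defined by this repository.

```yaml
# Hypothetical "ebs-csi" StorageClass backed by the AWS EBS CSI driver,
# to list in storageclass_list alongside a Portworx class such as px-csi-db.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-csi
provisioner: ebs.csi.aws.com            # AWS EBS CSI driver
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3                             # EBS volume type; pick one comparable to the Portworx pool's clouddrives
```

Volume size is set on the benchmark's PVCs rather than on the StorageClass, which is where the "same size as the Portworx clouddrive" guidance above would apply.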
61 changes: 0 additions & 61 deletions examples/aws/px-bench-env.yml

This file was deleted.

File renamed without changes.