Currently, the setup uses a fake webserver as the metrics emitter, which only supports standard metrics (e.g. the `debug/*` metrics and the process and Go metrics). Having variable/configurable metrics would help collect good benchmarking data. Hence, I suggest replacing the fake-webserver component with avalanche.
Avalanche provides several configurable options (e.g. metric count, label count, series count, metric name length, etc.). Although this may not represent production-like metrics (e.g. generated metric names such as `avalanche_metric_mmmmm_01` do not look like real workload metrics), being able to vary these configurations can help collect good data to benchmark and analyze Prometheus's performance.
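As a rough sketch, the fake webserver could be swapped for an avalanche container along these lines — the image name/tag and flag values here are illustrative assumptions (check the avalanche README for the current image and flag set), not the proposed final configuration:

```shell
# Run avalanche as a configurable metrics emitter.
# NOTE: image name/tag and flag values are assumptions for illustration.
# Roughly metric-count x series-count series are exposed (here ~5000),
# with series values regenerated every --value-interval seconds.
docker run -p 9001:9001 quay.io/prometheuscommunity/avalanche:main \
  --metric-count=500 \
  --series-count=10 \
  --label-count=10 \
  --value-interval=30 \
  --port=9001
# Prometheus can then scrape the generated metrics at :9001/metrics.
```

Varying `--metric-count`, `--series-count`, and `--label-count` across benchmark runs would give the different cardinality profiles mentioned above.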
Sample / Reference Commit: https://github.com/saketjajoo/test-infra/commit/eb8678bdfa2f5404bbf3997103f20cfb18ca6877