fix: improvements following review
* improved description of limitations due to HPA's design
* highlight importance of the `metrics-relist-interval` setting
* simplify config example to no longer use regex metric matches
* clarify example using HPA label selectors
lc525 committed Dec 3, 2024
1 parent ad54205 commit d8c818b
Showing 1 changed file with 59 additions and 22 deletions: docs-gb/kubernetes/hpa-rps-autoscaling.md
@@ -15,12 +15,24 @@ and servers (single-model serving). This will require:

{% hint style="warning" %}
The Core 2 HPA-based autoscaling has the following constraints/limitations:

* HPA scaling only targets single-model serving, where there is a 1:1 correspondence between
models and servers. Autoscaling for multi-model serving (MMS) is supported for specific models
and workloads via the Core 2 native features described [here](autoscaling.md).
Significant improvements to MMS autoscaling are planned for future releases.
* **Only custom metrics** from Prometheus are supported. Native Kubernetes
resource metrics such as CPU or memory are not. This limitation exists because of HPA's
design: in order to prevent multiple HPA CRs from issuing conflicting scaling instructions,
each HPA CR must exclusively control a set of pods that is disjoint from the pods
controlled by other HPA CRs. In Seldon Core 2, CPU/memory metrics can be used to scale the
number of Server replicas via HPA. However, this also means that the CPU/memory metrics
from the same set of pods can no longer be used to scale the number of model replicas. We
are working on improvements in Core 2 to allow both servers and models to be scaled based on
a single HPA manifest targeting the Model CR.
* Each Kubernetes cluster supports only one active custom metrics provider. If your cluster
already uses a custom metrics provider different from `prometheus-adapter`, it
will need to be removed before you can scale Core 2 models and servers via HPA. The
Kubernetes community is actively exploring solutions for allowing multiple custom metrics
providers to coexist.
{% endhint %}
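
To illustrate the second constraint: CPU/memory-based scaling of Server replicas is possible on
its own, but it then prevents a separate HPA from scaling Models based on metrics from the same
pods. A minimal sketch of such a Server-only HPA is shown below; the `mlops.seldon.io/v1alpha1`
API version and the `mlserver` Server name are assumptions for illustration, not part of the
RPS-based setup described in the rest of this page.

```yaml
# Hypothetical sketch: scale only Server replicas, based on CPU utilisation.
# Using the pods' CPU here means the same pods cannot also drive a Model-scaling HPA.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: mlserver-cpu-hpa
spec:
  scaleTargetRef:
    apiVersion: mlops.seldon.io/v1alpha1  # assumed Server CR API version
    kind: Server
    name: mlserver                        # illustrative Server name
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```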

## Installing and configuring the Prometheus Adapter
@@ -46,6 +58,14 @@ If you are running Prometheus on a different port than the default 9090, you can
prometheus.port=[custom_port]`. You may inspect all the options available as helm values by
running `helm show values prometheus-community/prometheus-adapter`.

{% hint style="warning" %}
Check that the `metricsRelistInterval` helm value (default: 1m) works well in your
setup, and update it if needed. This value needs to be larger than or equal to your Prometheus
scrape interval. The corresponding prometheus-adapter command-line argument is
`--metrics-relist-interval`. If the relist interval is set incorrectly, some of the custom
metrics will be intermittently reported as missing.
{% endhint %}
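
For reference, the same options can be set through a helm values file. A minimal sketch, using
value keys from the `prometheus-community/prometheus-adapter` chart (the Prometheus URL shown is
illustrative):

```yaml
# Illustrative prometheus-adapter values; adjust to match your Prometheus deployment.
prometheus:
  url: http://prometheus.monitoring.svc  # assumed in-cluster Prometheus service
  port: 9090
# Must be >= the Prometheus scrape interval, otherwise custom metrics may
# intermittently appear as missing.
metricsRelistInterval: 1m
```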

We now need to configure the adapter to look for the correct Prometheus metrics and compute
per-model RPS values. On install, the adapter has created a `ConfigMap` in the same namespace as
itself, named `[helm_release_name]-prometheus-adapter`. In our case, it will be
@@ -70,19 +90,16 @@ data:
"rules":
-
"seriesQuery": |
{__name__=~"^seldon_model.*_total",namespace!=""}
"seriesFilters":
- "isNot": "^seldon_.*_seconds_total"
- "isNot": "^seldon_.*_aggregate_.*"
{__name__="seldon_model_infer_total",namespace!=""}
"resources":
"overrides":
"model": {group: "mlops.seldon.io", resource: "model"}
"server": {group: "mlops.seldon.io", resource: "server"}
"pod": {resource: "pod"}
"namespace": {resource: "namespace"}
"name":
"matches": "^seldon_model_(.*)_total"
"as": "${1}_rps"
"matches": "seldon_model_infer_total"
"as": "infer_rps"
"metricsQuery": |
sum by (<<.GroupBy>>) (
rate (
@@ -106,10 +123,20 @@ The rule definition can be broken down in four parts:
* _Discovery_ (the `seriesQuery` and `seriesFilters` keys) controls what Prometheus
metrics are considered for exposure via the k8s custom metrics API.

As an alternative to the literal match in the example above, all the Seldon Prometheus metrics
of the form `seldon_model.*_total` could be considered, then filtered to exclude metrics
pre-aggregated across all models (`.*_aggregate_.*`) as well as the cumulative infer time per
model (`.*_seconds_total`):

```yaml
"seriesQuery": |
{__name__=~"^seldon_model.*_total",namespace!=""}
"seriesFilters":
- "isNot": "^seldon_.*_seconds_total"
- "isNot": "^seldon_.*_aggregate_.*"
...
```

For RPS, we are only interested in the model inference count (`seldon_model_infer_total`),
which is why the configuration above matches that metric directly.

* _Association_ (the `resources` key) controls the Kubernetes resources that a particular
metric can be attached to or aggregated over.
@@ -135,8 +162,14 @@ The rule definition can be broken down in four parts:
`seldon_model_infer_total` and expose custom metric endpoints named `infer_rps`, which when
called return the result of a query over the Prometheus metric.

Instead of a literal match, one could also use regex group capture expressions,
which can then be referenced in the custom metric name:

```yaml
"name":
"matches": "^seldon_model_(.*)_total"
"as": "${1}_rps"
```

* _Querying_ (the `metricsQuery` key) defines how a request for a specific k8s custom metric gets
converted into a Prometheus query.
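
The remainder of the `metricsQuery` in the example configuration is collapsed in this diff. For
reference, a typical prometheus-adapter RPS query template has the shape sketched below; the
`[2m]` rate window is illustrative and should cover at least a couple of Prometheus scrape
intervals:

```yaml
"metricsQuery": |
  sum by (<<.GroupBy>>) (
    rate (
      <<.Series>>{<<.LabelMatchers>>}[2m]
    )
  )
```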
@@ -431,7 +464,8 @@ inspecting the corresponding Server HPA CR, or by fetching the metric directly v

* Filtering metrics by additional labels on the Prometheus metric:

The Prometheus metric from which the model RPS is computed has the following labels managed
by Seldon Core 2:

```c-like
seldon_model_infer_total{
@@ -450,9 +484,11 @@ inspecting the corresponding Server HPA CR, or by fetching the metric directly v
}
```

If you want the scaling metric to be computed based on a subset of the Prometheus time
series with particular label values (labels either managed by Seldon Core 2 or added
automatically within your infrastructure), you can add a label selector to the HPA metric
config. This is shown in the following example, which scales based only on the RPS of REST
requests, as opposed to REST + gRPC:

```yaml
metrics:
@@ -471,6 +507,7 @@ inspecting the corresponding Server HPA CR, or by fetching the metric directly v
type: AverageValue
averageValue: "3"
```
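
The middle of the example manifest is collapsed in the diff above. For reference, a complete HPA
metric entry of this shape, with the label selector applied, might look like the sketch below
(the Model name `iris` and the `mlops.seldon.io/v1alpha1` API version are illustrative
assumptions):

```yaml
metrics:
  - type: Object
    object:
      metric:
        name: infer_rps
        # Only count inference requests arriving over REST.
        selector:
          matchLabels:
            method_type: rest
      describedObject:
        apiVersion: mlops.seldon.io/v1alpha1  # assumed Model CR API version
        kind: Model
        name: iris                            # illustrative model name
      target:
        type: AverageValue
        averageValue: "3"
```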

* Customize scale-up / scale-down rate & properties by using scaling policies as described in
the [HPA scaling policies docs](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#configurable-scaling-behavior)
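
For instance, a sketch of an HPA `behavior` section that slows down scale-down while allowing
rapid scale-up (the specific values are illustrative, not recommendations):

```yaml
behavior:
  scaleDown:
    # Require 5 minutes of consistently lower load before removing replicas,
    # then remove at most one replica per minute.
    stabilizationWindowSeconds: 300
    policies:
      - type: Pods
        value: 1
        periodSeconds: 60
  scaleUp:
    # Allow doubling the replica count every minute when load increases.
    policies:
      - type: Percent
        value: 100
        periodSeconds: 60
```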


