From 57ff152c8882914e88a7dced7370d43990d334fd Mon Sep 17 00:00:00 2001
From: Piotr Gwizdala <17101802+thampiotr@users.noreply.github.com>
Date: Thu, 4 Jan 2024 14:50:02 +0100
Subject: [PATCH] Fix incorrect CPU calculation

---
 docs/sources/flow/monitoring/agent-resource-usage.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/sources/flow/monitoring/agent-resource-usage.md b/docs/sources/flow/monitoring/agent-resource-usage.md
index f7a44c638555..21b16106d5a3 100644
--- a/docs/sources/flow/monitoring/agent-resource-usage.md
+++ b/docs/sources/flow/monitoring/agent-resource-usage.md
@@ -33,7 +33,7 @@ series that need to be scraped and the scrape interval.
 
 As a rule of thumb, **per each 1 million active series** and with the default
 scrape interval, you can expect to use approximately:
 
-* 1.5 CPU cores
+* 0.4 CPU cores
 * 11 GiB of memory
 * 1.5 MiB/s of total network bandwidth, send and receive
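
For readers applying the corrected rule of thumb at other scales, here is a minimal sketch (not part of the patch). It assumes the per-million figures scale roughly linearly with active series count, which the doc states only for the 1 million case; the 2.5M-series workload in `main` is a hypothetical example.

```go
// estimate.go: back-of-the-envelope sizing from the rule of thumb in the
// patched doc. Linear scaling with active series count is an assumption.
package main

import "fmt"

// Approximate usage per 1 million active series at the default scrape
// interval, using the corrected CPU figure from this patch.
const (
	cpuCoresPerMillion  = 0.4  // CPU cores
	memoryGiBPerMillion = 11.0 // GiB of memory
	bandwidthMiBPerSec  = 1.5  // MiB/s total network bandwidth, send and receive
)

func main() {
	activeSeriesMillions := 2.5 // hypothetical workload: 2.5 million active series

	fmt.Printf("CPU:       %.1f cores\n", cpuCoresPerMillion*activeSeriesMillions)
	fmt.Printf("Memory:    %.1f GiB\n", memoryGiBPerMillion*activeSeriesMillions)
	fmt.Printf("Bandwidth: %.2f MiB/s\n", bandwidthMiBPerSec*activeSeriesMillions)
}
```

With the pre-patch value of 1.5 CPU cores per million series, the same 2.5M-series workload would have been sized at 3.75 cores rather than 1.0, nearly a 4x overestimate, which is why the correction matters for capacity planning.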