From 900a0352d86d5f5e6890041516a2f2f9e20e89c0 Mon Sep 17 00:00:00 2001
From: Piotr <17101802+thampiotr@users.noreply.github.com>
Date: Thu, 4 Jan 2024 14:59:45 +0100
Subject: [PATCH] Fix incorrect CPU calculation (#6047)

---
 docs/sources/flow/monitoring/agent-resource-usage.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/sources/flow/monitoring/agent-resource-usage.md b/docs/sources/flow/monitoring/agent-resource-usage.md
index f7a44c638555..21b16106d5a3 100644
--- a/docs/sources/flow/monitoring/agent-resource-usage.md
+++ b/docs/sources/flow/monitoring/agent-resource-usage.md
@@ -33,7 +33,7 @@ series that need to be scraped and the scrape interval.
 
 As a rule of thumb, **per each 1 million active series** and with the default
 scrape interval, you can expect to use approximately:
 
-* 1.5 CPU cores
+* 0.4 CPU cores
 * 11 GiB of memory
 * 1.5 MiB/s of total network bandwidth, send and receive
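
The corrected rule of thumb scales linearly with the number of active series. A minimal sketch of that scaling, assuming the default scrape interval and using a hypothetical `estimate_resources` helper (not part of the agent or its docs):

```python
def estimate_resources(active_series: int) -> dict:
    """Scale the per-1-million-series rule of thumb to a workload size.

    Figures taken from the patched doc: per 1 million active series,
    roughly 0.4 CPU cores, 11 GiB of memory, and 1.5 MiB/s of total
    network bandwidth (send and receive).
    """
    millions = active_series / 1_000_000
    return {
        "cpu_cores": 0.4 * millions,
        "memory_gib": 11.0 * millions,
        "network_mib_per_s": 1.5 * millions,
    }

# Example: a workload with 2.5 million active series.
print(estimate_resources(2_500_000))
```

These are rough planning figures, not guarantees; actual usage depends on the scrape interval and the shape of the scraped series.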