diff --git a/docs/sources/data-collection.md b/docs/sources/data-collection.md
index da008ce32059..80fbd874cdcf 100644
--- a/docs/sources/data-collection.md
+++ b/docs/sources/data-collection.md
@@ -12,7 +12,7 @@ title: Grafana Agent data collection
weight: 500
---
-# Data collection
+# Grafana Agent data collection
By default, Grafana Agent sends anonymous but uniquely identifiable usage information from
your Grafana Agent instance to Grafana Labs. These statistics are sent to `stats.grafana.org`.
diff --git a/docs/sources/flow/_index.md b/docs/sources/flow/_index.md
index aa434950db5c..67db78b24cb7 100644
--- a/docs/sources/flow/_index.md
+++ b/docs/sources/flow/_index.md
@@ -9,12 +9,14 @@ description: Grafana Agent Flow is a component-based revision of Grafana Agent w
a focus on ease-of-use, debuggability, and adaptability
title: Flow mode
weight: 400
+cascade:
+ PRODUCT_NAME: Grafana Agent Flow
+ PRODUCT_ROOT_NAME: Grafana Agent
---
-# Flow mode
+# {{< param "PRODUCT_NAME" >}}
-The Flow mode of Grafana Agent (also called Grafana Agent Flow) is a
-_component-based_ revision of Grafana Agent with a focus on ease-of-use,
+{{< param "PRODUCT_NAME" >}} is a _component-based_ revision of {{< param "PRODUCT_ROOT_NAME" >}} with a focus on ease-of-use,
debuggability, and ability to adapt to the needs of power users.
Components allow for reusability, composability, and focus on a single task.
@@ -34,7 +36,7 @@ Components allow for reusability, composability, and focus on a single task.
## Example
```river
-// Discover Kubernetes pods to collect metrics from.
+// Discover Kubernetes pods to collect metrics from
discovery.kubernetes "pods" {
role = "pod"
}
@@ -65,19 +67,20 @@ prometheus.remote_write "default" {
}
```
-## Grafana Agent configuration generator
+## {{< param "PRODUCT_ROOT_NAME" >}} configuration generator
+
+The {{< param "PRODUCT_ROOT_NAME" >}} [configuration generator](https://grafana.github.io/agent-configurator/) helps you get a head start on creating {{< param "PRODUCT_NAME" >}} configurations.
-The [Grafana Agent configuration generator](https://grafana.github.io/agent-configurator/) will help you get a head start on creating flow code.
{{% admonition type="note" %}}
-This feature is experimental, and it does not support all River components.
+This feature is experimental, and it doesn't support all River components.
{{% /admonition %}}
## Next steps
-* [Install][] Grafana Agent in flow mode.
-* Learn about the core [Concepts][] of flow mode.
-* Follow our [Getting started][] guides for Grafana Agent in flow mode.
-* Follow our [Tutorials][] to get started with Grafana Agent in flow mode.
+* [Install][] {{< param "PRODUCT_NAME" >}}.
+* Learn about the core [Concepts][] of {{< param "PRODUCT_NAME" >}}.
+* Follow our [Getting started][] guides for {{< param "PRODUCT_NAME" >}}.
+* Follow our [Tutorials][] to get started with {{< param "PRODUCT_NAME" >}}.
* Learn how to use the [Configuration language][].
* Check out our [Reference][] documentation to find specific information you
might be looking for.
diff --git a/docs/sources/flow/concepts/_index.md b/docs/sources/flow/concepts/_index.md
index 5e6174c09845..da8e2ddf57da 100644
--- a/docs/sources/flow/concepts/_index.md
+++ b/docs/sources/flow/concepts/_index.md
@@ -6,13 +6,13 @@ aliases:
- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/
- /docs/grafana-cloud/send-data/agent/flow/concepts/
canonical: https://grafana.com/docs/agent/latest/flow/concepts/
-description: Learn about the Grafana Agent flow mode concepts
+description: Learn about the Grafana Agent Flow concepts
title: Concepts
weight: 100
---
# Concepts
-This section explains primary concepts of Grafana Agent Flow.
+This section explains primary concepts of {{< param "PRODUCT_NAME" >}}.
{{< section >}}
diff --git a/docs/sources/flow/concepts/clustering.md b/docs/sources/flow/concepts/clustering.md
index 2c6514c18841..0dd2139462d1 100644
--- a/docs/sources/flow/concepts/clustering.md
+++ b/docs/sources/flow/concepts/clustering.md
@@ -15,15 +15,15 @@ weight: 500
# Clustering (beta)
-Clustering enables a fleet of agents to work together for workload distribution
+Clustering enables a fleet of {{< param "PRODUCT_ROOT_NAME" >}}s to work together for workload distribution
and high availability. It helps create horizontally scalable deployments with
minimal resource and operational overhead.
-To achieve this, Grafana Agent makes use of an eventually consistent model that
-assumes all participating Agents are interchangeable and converge on using the
+To achieve this, {{< param "PRODUCT_NAME" >}} makes use of an eventually consistent model that
+assumes all participating {{< param "PRODUCT_ROOT_NAME" >}}s are interchangeable and converge on using the
same configuration file.
-The behavior of a standalone, non-clustered agent is the same as if it was a
+The behavior of a standalone, non-clustered {{< param "PRODUCT_ROOT_NAME" >}} is the same as if it was a
single-node cluster.
You configure clustering by passing `cluster` command-line flags to the [run][]
@@ -35,7 +35,7 @@ command.
Target auto-distribution is the most basic use case of clustering; it allows
scraping components running on all peers to distribute scrape load between
-themselves. For target auto-distribution to work correctly, all agents in the
+themselves. For target auto-distribution to work correctly, all {{< param "PRODUCT_ROOT_NAME" >}}s in the
same cluster must be able to reach the same service discovery APIs and must be
able to scrape the same targets.
@@ -53,13 +53,13 @@ prometheus.scrape "default" {
```
A cluster state change is detected when a new node joins or an existing node goes away. All participating components locally
-recalculate target ownership and rebalance the number of targets they’re
+recalculate target ownership and re-balance the number of targets they’re
scraping without explicitly communicating ownership over the network.
-Target auto-distribution allows you to dynamically scale the number of agents to distribute workload during peaks.
+Target auto-distribution allows you to dynamically scale the number of {{< param "PRODUCT_ROOT_NAME" >}}s to distribute workload during peaks.
It also provides resiliency because targets are automatically picked up by one of the node peers if a node goes away.
-Grafana Agent uses a fully-local consistent hashing algorithm to distribute
+{{< param "PRODUCT_NAME" >}} uses a fully-local consistent hashing algorithm to distribute
targets, meaning that, on average, only ~1/N of the targets are redistributed.
Refer to component reference documentation to discover whether it supports
@@ -72,7 +72,7 @@ clustering, such as:
## Cluster monitoring and troubleshooting
-To monitor your cluster status, you can check the Flow UI [clustering page][].
+To monitor your cluster status, you can check the {{< param "PRODUCT_NAME" >}} UI [clustering page][].
The [debugging][] topic contains some clues to help pin down probable
clustering issues.
diff --git a/docs/sources/flow/concepts/component_controller.md b/docs/sources/flow/concepts/component_controller.md
index 362bf9c1838a..e3896050c6c6 100644
--- a/docs/sources/flow/concepts/component_controller.md
+++ b/docs/sources/flow/concepts/component_controller.md
@@ -13,7 +13,7 @@ weight: 200
# Component controller
-The _component controller_ is the core part of Grafana Agent Flow which manages
+The _component controller_ is the core part of {{< param "PRODUCT_NAME" >}} which manages
components at runtime.
The component controller is responsible for:
@@ -29,8 +29,8 @@ As discussed in [Components][], a relationship between components is created
when an expression is used to set the argument of one component to an exported
field of another component.
-The set of all components and the relationships between them define a [directed
-acyclic graph][DAG] (DAG), which informs the component controller which
+The set of all components and the relationships between them define a [Directed
+Acyclic Graph][DAG] (DAG), which informs the component controller which
references are valid and in what order components must be evaluated.
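+
+For example, in the following minimal sketch (the file path, URL, and credentials are placeholders), the expression `local.file.api_key.content` creates a single edge in the graph, so the component controller evaluates `local.file.api_key` before `prometheus.remote_write.onprem`:
+
+```river
+// Exposes the file contents as the `content` export.
+local.file "api_key" {
+  filename  = "/var/data/my-api-key.txt"
+  is_secret = true
+}
+
+prometheus.remote_write "onprem" {
+  endpoint {
+    url = "http://localhost:9009/api/prom/push"
+
+    basic_auth {
+      username = "admin"
+      // This reference creates the dependency edge in the graph.
+      password = local.file.api_key.content
+    }
+  }
+}
+```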
For a configuration file to be valid, components must not reference themselves or
diff --git a/docs/sources/flow/concepts/components.md b/docs/sources/flow/concepts/components.md
index b66416d9d62c..223d76b4f4f8 100644
--- a/docs/sources/flow/concepts/components.md
+++ b/docs/sources/flow/concepts/components.md
@@ -13,7 +13,7 @@ weight: 100
# Components
-_Components_ are the building blocks of Grafana Agent Flow. Each component is
+_Components_ are the building blocks of {{< param "PRODUCT_NAME" >}}. Each component is
responsible for handling a single task, such as retrieving secrets or
collecting Prometheus metrics.
@@ -50,7 +50,7 @@ discovery.kubernetes "nodes" {
## Pipelines
-Most arguments for a component in a config file are constant values, such
+Most arguments for a component in a configuration file are constant values, such as
setting a `log_level` attribute to the quoted string `"debug"`:
```river
@@ -87,7 +87,7 @@ An example pipeline may look like this:
-The following config file represents the above pipeline:
+The following configuration file represents the pipeline:
```river
// Get our API key from disk.
diff --git a/docs/sources/flow/concepts/configuration_language.md b/docs/sources/flow/concepts/configuration_language.md
index 849951572eb1..bdd2d829b854 100644
--- a/docs/sources/flow/concepts/configuration_language.md
+++ b/docs/sources/flow/concepts/configuration_language.md
@@ -13,7 +13,7 @@ weight: 400
# Configuration language concepts
-The Grafana Agent Flow _configuration language_ refers to the language used in
+The {{< param "PRODUCT_NAME" >}} _configuration language_ refers to the language used in
configuration files which define and configure components to run.
The configuration language is called River, a Terraform/HCL-inspired language:
@@ -99,7 +99,7 @@ This file has two blocks:
## More information
River is documented in detail in [Configuration language][config-docs] section
-of the Grafana Agent Flow docs.
+of the {{< param "PRODUCT_NAME" >}} docs.
{{% docs/reference %}}
[config-docs]: "/docs/agent/ -> /docs/agent//flow/config-language"
diff --git a/docs/sources/flow/concepts/modules.md b/docs/sources/flow/concepts/modules.md
index ace8d7993731..2e5383fb6430 100644
--- a/docs/sources/flow/concepts/modules.md
+++ b/docs/sources/flow/concepts/modules.md
@@ -13,17 +13,17 @@ weight: 300
# Modules
-_Modules_ are a way to create Grafana Agent Flow configurations which can be
+_Modules_ are a way to create {{< param "PRODUCT_NAME" >}} configurations which can be
loaded as a component. Modules are a great way to parameterize a configuration
to create reusable pipelines.
-Modules are Grafana Agent Flow configurations which have:
+Modules are {{< param "PRODUCT_NAME" >}} configurations which have:
* Arguments: settings which configure a module.
* Exports: named values which a module exposes to the consumer of the module.
-* Components: Grafana Agent Flow Components to run when the module is running.
+* Components: {{< param "PRODUCT_NAME" >}} Components to run when the module is running.
-Modules are loaded into Grafana Agent Flow by using a [Module
+Modules are loaded into {{< param "PRODUCT_NAME" >}} by using a [Module
loader](#module-loaders).
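+
+For example, a module that discovers Kubernetes Pods might look like the following minimal sketch. The argument name, default namespace, and export name are illustrative choices, not required conventions:
+
+```river
+// Argument: a setting supplied by the consumer of the module.
+argument "namespace" {
+  optional = true
+  default  = "default"
+}
+
+// Component: runs while the module is loaded.
+discovery.kubernetes "pods" {
+  role = "pod"
+
+  namespaces {
+    names = [argument.namespace.value]
+  }
+}
+
+// Export: a named value exposed back to the consumer of the module.
+export "targets" {
+  value = discovery.kubernetes.pods.targets
+}
+```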
Refer to the documentation for the [argument block][] and [export block][] to
@@ -31,7 +31,7 @@ learn how to define arguments and exports for a module.
## Module loaders
-A _Module loader_ is a Grafana Agent Flow component which retrieves a module
+A _Module loader_ is a {{< param "PRODUCT_NAME" >}} component which retrieves a module
and runs the components defined inside of it.
Module loader components are responsible for:
@@ -42,7 +42,7 @@ Module loader components are responsible for:
* Exposing exports from the loaded module.
Module loaders typically are called `module.LOADER_NAME`. The list of module
-loader components can be found in the list of Grafana Agent Flow
+loader components can be found in the list of {{< param "PRODUCT_NAME" >}}
[Components][].
Some module loaders may not support running modules with arguments or exports.
diff --git a/docs/sources/flow/config-language/_index.md b/docs/sources/flow/config-language/_index.md
index 845005fe9f7f..1392678a5f21 100644
--- a/docs/sources/flow/config-language/_index.md
+++ b/docs/sources/flow/config-language/_index.md
@@ -13,7 +13,7 @@ weight: 400
# Configuration language
-Grafana Agent Flow contains a custom configuration language called River to
+{{< param "PRODUCT_NAME" >}} contains a custom configuration language called River to
dynamically configure and connect components.
River aims to reduce errors in configuration files by making configurations
@@ -21,7 +21,7 @@ easier to read and write. River configurations are done in blocks which can be
easily copied-and-pasted from documentation to help users get started as
quickly as possible.
-A River configuration file tells Grafana Agent Flow which components to launch
+A River configuration file tells {{< param "PRODUCT_NAME" >}} which components to launch
and how to bind them together into a pipeline.
The syntax of River is centered around blocks, attributes, and expressions:
@@ -82,6 +82,6 @@ To help you write configuration files in River, the following tools are availabl
* [river-mode](https://github.com/jdbaldry/river-mode) for Emacs
* Code formatting using the [`agent fmt` command]({{< relref "../reference/cli/fmt" >}})
-You can also start developing your own tooling using the agent repository as a
+You can also start developing your own tooling using the {{< param "PRODUCT_ROOT_NAME" >}} repository as a
go package or use the [tree-sitter
grammar](https://github.com/grafana/tree-sitter-river) with other programming languages.
diff --git a/docs/sources/flow/config-language/components.md b/docs/sources/flow/config-language/components.md
index be675572aa7e..986567fe2610 100644
--- a/docs/sources/flow/config-language/components.md
+++ b/docs/sources/flow/config-language/components.md
@@ -12,7 +12,7 @@ weight: 300
---
# Components configuration language
-Components are the defining feature of Grafana Agent Flow. They are small,
+Components are the defining feature of {{< param "PRODUCT_NAME" >}}. They are small,
reusable pieces of business logic that perform a single task (like retrieving
secrets or collecting Prometheus metrics) and can be wired together to form
programmable pipelines of telemetry data.
diff --git a/docs/sources/flow/config-language/syntax.md b/docs/sources/flow/config-language/syntax.md
index ee2eebed9b65..284c2e63079b 100644
--- a/docs/sources/flow/config-language/syntax.md
+++ b/docs/sources/flow/config-language/syntax.md
@@ -55,7 +55,7 @@ to represent or compute more complex attribute values.
### Blocks
-_Blocks_ are used to configure the Agent behavior as well as Flow components by
+_Blocks_ are used to configure the {{< param "PRODUCT_ROOT_NAME" >}}'s behavior as well as {{< param "PRODUCT_NAME" >}} components by
grouping any number of attributes or nested blocks using curly braces.
Blocks have a _name_, an optional _label_ and a body that contains any number
of arguments and nested unlabeled blocks.
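+
+For example, in the following sketch (the remote write URL is a placeholder), `logging` is an unlabeled block that configures {{< param "PRODUCT_NAME" >}} itself, while `prometheus.remote_write "default"` is a labeled block whose body contains a nested `endpoint` block:
+
+```river
+// Unlabeled block that configures the agent's own behavior.
+logging {
+  level  = "info"
+  format = "logfmt"
+}
+
+// Labeled block that defines a component, with a nested block in its body.
+prometheus.remote_write "default" {
+  endpoint {
+    url = "https://prometheus.example.com/api/v1/write"
+  }
+}
+```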
diff --git a/docs/sources/flow/getting-started/_index.md b/docs/sources/flow/getting-started/_index.md
index cbb1b02ea445..58238b81671d 100644
--- a/docs/sources/flow/getting-started/_index.md
+++ b/docs/sources/flow/getting-started/_index.md
@@ -6,14 +6,14 @@ aliases:
- /docs/grafana-cloud/send-data/agent/flow/getting-started/
- getting_started/
canonical: https://grafana.com/docs/agent/latest/flow/getting-started/
-description: Learn how to use Grafana Agent in flow mode
+description: Learn how to use Grafana Agent Flow
menuTitle: Get started
-title: Get started with Grafana Agent in flow mode
+title: Get started with Grafana Agent Flow
weight: 200
---
-# Get started with Grafana Agent in flow mode
+# Get started with {{< param "PRODUCT_NAME" >}}
-This section details guides for getting started with Grafana Agent in flow mode.
+This section details guides for getting started with {{< param "PRODUCT_NAME" >}}.
{{< section >}}
diff --git a/docs/sources/flow/getting-started/collect-opentelemetry-data.md b/docs/sources/flow/getting-started/collect-opentelemetry-data.md
index edb8b9cde67e..1fec48c7193a 100644
--- a/docs/sources/flow/getting-started/collect-opentelemetry-data.md
+++ b/docs/sources/flow/getting-started/collect-opentelemetry-data.md
@@ -12,7 +12,7 @@ weight: 300
# Collect OpenTelemetry data
-Grafana Agent Flow can be configured to collect [OpenTelemetry][]-compatible
+{{< param "PRODUCT_NAME" >}} can be configured to collect [OpenTelemetry][]-compatible
data and forward it to any OpenTelemetry-compatible endpoint.
This topic describes how to:
@@ -33,10 +33,9 @@ This topic describes how to:
* Ensure that you have basic familiarity with instrumenting applications with
OpenTelemetry.
-* Have a set of OpenTelemetry applications ready to push telemetry data to
- Grafana Agent Flow.
-* Identify where Grafana Agent Flow will write received telemetry data.
-* Be familiar with the concept of [Components][] in Grafana Agent Flow.
+* Have a set of OpenTelemetry applications ready to push telemetry data to {{< param "PRODUCT_NAME" >}}.
+* Identify where {{< param "PRODUCT_NAME" >}} will write received telemetry data.
+* Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.
## Configure an OpenTelemetry Protocol exporter
@@ -47,7 +46,7 @@ to an external system.
In this task, we will use the [otelcol.exporter.otlp][] component to send
OpenTelemetry data to a server using the OpenTelemetry Protocol (OTLP). Once an
-exporter component is defined, other Grafana Agent Flow components can be used
+exporter component is defined, other {{< param "PRODUCT_NAME" >}} components can be used
to forward data to it.
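+
+For example, a minimal exporter definition might look like the following sketch, where the endpoint address is a placeholder for your OTLP-compatible server:
+
+```river
+otelcol.exporter.otlp "default" {
+  client {
+    endpoint = "my-otlp-server:4317"
+  }
+}
+```
+
+Other components can then forward telemetry data to `otelcol.exporter.otlp.default.input`.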
> Refer to the list of available [Components][] for the full list of
@@ -153,7 +152,7 @@ Protocol, refer to [otelcol.exporter.otlp][].
## Configure batching
-Production-ready Grafana Agent Flow configurations should not send
+Production-ready {{< param "PRODUCT_NAME" >}} configurations should not send
OpenTelemetry data directly to an exporter for delivery. Instead, data is
usually sent to one or more _processor components_ that perform various
transformations on the data.
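+
+For example, a batch processor placed in front of an exporter labeled `default` (the label is an assumption carried over from the previous section) might look like this sketch:
+
+```river
+otelcol.processor.batch "default" {
+  // Forward batched telemetry to the exporter's input.
+  output {
+    metrics = [otelcol.exporter.otlp.default.input]
+    logs    = [otelcol.exporter.otlp.default.input]
+    traces  = [otelcol.exporter.otlp.default.input]
+  }
+}
+```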
@@ -238,14 +237,13 @@ For more information on configuring OpenTelemetry data batching, refer to
## Configure an OpenTelemetry Protocol receiver
-Grafana Agent Flow can be configured to receive OpenTelemetry metrics, logs,
+{{< param "PRODUCT_NAME" >}} can be configured to receive OpenTelemetry metrics, logs,
and traces. An OpenTelemetry _receiver_ component is responsible for receiving
OpenTelemetry data from an external system.
In this task, we will use the [otelcol.receiver.otlp][] component to receive
OpenTelemetry data over the network using the OpenTelemetry Protocol (OTLP). A
-receiver component can be configured to forward received data to other Grafana
-Agent Flow components.
+receiver component can be configured to forward received data to other {{< param "PRODUCT_NAME" >}} components.
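+
+For example, a receiver that listens on the default OTLP gRPC and HTTP ports and forwards data to a batch processor labeled `default` (an assumed label from the previous section) might look like this sketch:
+
+```river
+otelcol.receiver.otlp "default" {
+  grpc {}
+  http {}
+
+  output {
+    metrics = [otelcol.processor.batch.default.input]
+    logs    = [otelcol.processor.batch.default.input]
+    traces  = [otelcol.processor.batch.default.input]
+  }
+}
+```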
> Refer to the list of available [Components][] for the full list of
> `otelcol.receiver` components that can be used to receive
diff --git a/docs/sources/flow/getting-started/collect-prometheus-metrics.md b/docs/sources/flow/getting-started/collect-prometheus-metrics.md
index a13646d409cb..614e6d866592 100644
--- a/docs/sources/flow/getting-started/collect-prometheus-metrics.md
+++ b/docs/sources/flow/getting-started/collect-prometheus-metrics.md
@@ -12,7 +12,7 @@ weight: 200
# Collect and forward Prometheus metrics
-Grafana Agent Flow can be configured to collect [Prometheus][] metrics and
+{{< param "PRODUCT_NAME" >}} can be configured to collect [Prometheus][] metrics and
forward them to any Prometheus-compatible database.
This topic describes how to:
@@ -35,7 +35,7 @@ This topic describes how to:
* Identify where you will write collected metrics. Metrics may be written to
Prometheus or Prometheus-compatible endpoints such as Grafana Mimir, Grafana
Cloud, or Grafana Enterprise Metrics.
-* Be familiar with the concept of [Components][] in Grafana Agent Flow.
+* Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.
## Configure metrics delivery
@@ -44,7 +44,7 @@ responsible for writing those metrics somewhere.
The [prometheus.remote_write][] component is responsible for delivering
Prometheus metrics to one or Prometheus-compatible endpoints. Once a
-`prometheus.remote_write` component is defined, other Grafana Agent Flow
+`prometheus.remote_write` component is defined, other {{< param "PRODUCT_NAME" >}}
components can be used to forward metrics to it.
To configure a `prometheus.remote_write` component for metrics delivery,
@@ -108,7 +108,7 @@ prometheus.remote_write "default" {
}
prometheus.scrape "example" {
- // Collect metrics from Grafana Agent's default listen address.
+ // Collect metrics from the default listen address.
targets = [{
__address__ = "127.0.0.1:12345",
}]
@@ -122,7 +122,7 @@ For more information on configuring metrics delivery, refer to
## Collect metrics from Kubernetes Pods
-Grafana Agent Flow can be configured to collect metrics from Kubernetes Pods
+{{< param "PRODUCT_NAME" >}} can be configured to collect metrics from Kubernetes Pods
by:
1. Discovering Kubernetes Pods to collect metrics from.
@@ -161,8 +161,7 @@ To collect metrics from Kubernetes Pods, complete the following steps:
}
```
- 1. If you don't want to search for Pods in the Namespace Grafana
- Agent is running in, set `own_namespace` to `false`.
+ 1. If you don't want to search for Pods in the Namespace {{< param "PRODUCT_NAME" >}} is running in, set `own_namespace` to `false`.
2. Replace `NAMESPACE_NAMES` with a comma-delimited list of strings
representing Namespaces to search. Each string must be wrapped in
@@ -226,7 +225,7 @@ To collect metrics from Kubernetes Pods, complete the following steps:
3. Replace `REMOTE_WRITE_LABEL` with the label chosen for your existing
`prometheus.remote_write` component.
-The following example demonstrates configuring Grafana Agent to collect metrics
+The following example demonstrates configuring {{< param "PRODUCT_NAME" >}} to collect metrics
from running production Kubernetes Pods in the `default` Namespace:
```river
@@ -262,7 +261,7 @@ metrics, refer to [discovery.kubernetes][] and [prometheus.scrape][].
## Collect metrics from Kubernetes Services
-Grafana Agent Flow can be configured to collect metrics from Kubernetes Services
+{{< param "PRODUCT_NAME" >}} can be configured to collect metrics from Kubernetes Services
by:
1. Discovering Kubernetes Services to collect metrics from.
@@ -301,8 +300,7 @@ To collect metrics from Kubernetes Services, complete the following steps:
}
```
- 1. If you do not want to search for Services in the Namespace Grafana
- Agent is running in, set `own_namespace` to `false`.
+ 1. If you do not want to search for Services in the Namespace {{< param "PRODUCT_NAME" >}} is running in, set `own_namespace` to `false`.
2. Replace `NAMESPACE_NAMES` with a comma-delimited list of strings
representing Namespaces to search. Each string must be wrapped in
@@ -366,7 +364,7 @@ To collect metrics from Kubernetes Services, complete the following steps:
3. Replace `REMOTE_WRITE_LABEL` with the label chosen for your existing
`prometheus.remote_write` component.
-The following example demonstrates configuring Grafana Agent to collect metrics
+The following example demonstrates configuring {{< param "PRODUCT_NAME" >}} to collect metrics
from running production Kubernetes Services in the `default` Namespace:
```river
@@ -402,7 +400,7 @@ metrics, refer to [discovery.kubernetes][] and [prometheus.scrape][].
## Collect metrics from custom targets
-Grafana Agent Flow can be configured to collect metrics from a custom set of
+{{< param "PRODUCT_NAME" >}} can be configured to collect metrics from a custom set of
targets without the need for service discovery.
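+
+For example, scraping a single statically defined target and forwarding the metrics to an existing `prometheus.remote_write` component labeled `default` (the address and label are placeholders) might look like the following sketch; the procedure that follows describes each step in detail:
+
+```river
+prometheus.scrape "custom" {
+  targets = [{
+    __address__ = "application.example.com:9001",
+  }]
+  forward_to = [prometheus.remote_write.default.receiver]
+}
+```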
To collect metrics from a custom set of targets, complete the following steps:
diff --git a/docs/sources/flow/getting-started/configure-agent-clustering.md b/docs/sources/flow/getting-started/configure-agent-clustering.md
index 2a67fc29ba1b..1127ed25c268 100644
--- a/docs/sources/flow/getting-started/configure-agent-clustering.md
+++ b/docs/sources/flow/getting-started/configure-agent-clustering.md
@@ -11,10 +11,10 @@ title: Configure Grafana Agent clustering in an existing installation
weight: 400
---
-# Configure Grafana Agent clustering in an existing installation
+# Configure {{< param "PRODUCT_NAME" >}} clustering in an existing installation
-You can configure Grafana Agent to run with [clustering][] so that
-individual agents can work together for workload distribution and high
+You can configure {{< param "PRODUCT_NAME" >}} to run with [clustering][] so that
+individual {{< param "PRODUCT_ROOT_NAME" >}}s can work together for workload distribution and high
availability.
@@ -23,10 +23,10 @@ availability.
This topic describes how to add clustering to an existing installation.
-## Configure Grafana Agent clustering with Helm Chart
+## Configure {{< param "PRODUCT_NAME" >}} clustering with Helm Chart
-This section guides you through enabling clustering when Grafana Agent is
-installed on Kubernetes using the [Grafana Agent Helm chart][install-helm].
+This section guides you through enabling clustering when {{< param "PRODUCT_NAME" >}} is
+installed on Kubernetes using the {{< param "PRODUCT_ROOT_NAME" >}} [Helm chart][install-helm].
### Before you begin
@@ -55,7 +55,7 @@ To configure clustering:
Replace `RELEASE_NAME` with the name of the installation you chose when you
installed the Helm chart.
-1. Use the [Grafana Agent UI][UI] to verify the cluster status:
+1. Use the {{< param "PRODUCT_NAME" >}} [UI][] to verify the cluster status:
1. Click **Clustering** in the navigation bar.
diff --git a/docs/sources/flow/getting-started/distribute-prometheus-scrape-load.md b/docs/sources/flow/getting-started/distribute-prometheus-scrape-load.md
index e069640b4e5d..fab5f8763489 100644
--- a/docs/sources/flow/getting-started/distribute-prometheus-scrape-load.md
+++ b/docs/sources/flow/getting-started/distribute-prometheus-scrape-load.md
@@ -13,9 +13,9 @@ weight: 500
# Distribute Prometheus metrics scrape load
-A good predictor for the size of an agent deployment is the number of
-Prometheus targets each agent scrapes. [Clustering][] with target
-auto-distribution allows a fleet of agents to work together to dynamically
+A good predictor for the size of an {{< param "PRODUCT_NAME" >}} deployment is the number of
+Prometheus targets each {{< param "PRODUCT_ROOT_NAME" >}} scrapes. [Clustering][] with target
+auto-distribution allows a fleet of {{< param "PRODUCT_ROOT_NAME" >}}s to work together to dynamically
distribute their scrape load, providing high-availability.
> **Note:** Clustering is a [beta][] feature. Beta features are subject to breaking
@@ -23,10 +23,10 @@ distribute their scrape load, providing high-availability.
## Before you begin
-- Familiarize yourself with how to [configure existing Grafana Agent installations][configure-grafana-agent].
+- Familiarize yourself with how to [configure existing {{< param "PRODUCT_NAME" >}} installations][configure-grafana-agent].
- [Configure Prometheus metrics collection][].
-- [Configure clustering][] of agents.
-- Ensure that all of your clustered agents have the same configuration file.
+- [Configure clustering][].
+- Ensure that all of your clustered {{< param "PRODUCT_ROOT_NAME" >}}s have the same configuration file.
## Steps
@@ -41,15 +41,14 @@ To distribute Prometheus metrics scrape load with clustering:
}
```
-2. Restart or reload agents for them to use the new configuration.
+1. Restart or reload {{< param "PRODUCT_ROOT_NAME" >}}s for them to use the new configuration.
-3. Validate that auto-distribution is functioning:
+1. Validate that auto-distribution is functioning:
- 1. Using the [Grafana Agent UI][UI] on each agent, navigate to the details page for one of
+ 1. Using the {{< param "PRODUCT_ROOT_NAME" >}} [UI][] on each {{< param "PRODUCT_ROOT_NAME" >}}, navigate to the details page for one of
the `prometheus.scrape` components you modified.
- 2. Compare the Debug Info sections between two different agents to ensure
- that they're not scraping the same sets of targets.
+   1. Compare the Debug Info sections between two different {{< param "PRODUCT_ROOT_NAME" >}}s to ensure that they're not scraping the same sets of targets.
{{% docs/reference %}}
[Clustering]: "/docs/agent/ -> /docs/agent//flow/concepts/clustering.md"
diff --git a/docs/sources/flow/getting-started/migrating-from-operator.md b/docs/sources/flow/getting-started/migrating-from-operator.md
index c017651cd46d..33dd4fda3c97 100644
--- a/docs/sources/flow/getting-started/migrating-from-operator.md
+++ b/docs/sources/flow/getting-started/migrating-from-operator.md
@@ -9,21 +9,22 @@ title: Migrating from Grafana Agent Operator to Grafana Agent Flow
weight: 320
---
-# Migrating from Grafana Agent Operator to Grafana Agent Flow
+# Migrating from Grafana Agent Operator to {{< param "PRODUCT_NAME" >}}
-With the release of Flow, Grafana Agent Operator is no longer the recommended way to deploy Grafana Agent in Kubernetes. Some of the Operator functionality has been moved into Grafana Agent
-itself, and the remaining functionality has been replaced by our Helm Chart.
+With the release of {{< param "PRODUCT_NAME" >}}, Grafana Agent Operator is no longer the recommended way to deploy {{< param "PRODUCT_ROOT_NAME" >}} in Kubernetes.
+Some of the Operator functionality has been moved into {{< param "PRODUCT_NAME" >}} itself, and the remaining functionality has been replaced by our Helm Chart.
-- The Monitor types (`PodMonitor`, `ServiceMonitor`, `Probe`, and `LogsInstance`) are all supported natively by Grafana Agent in Flow mode. You are no longer
-required to use the Operator to consume those CRDs for dynamic monitoring in your cluster.
-- The parts of the Operator that deploy the Agent itself (`GrafanaAgent`, `MetricsInstance`, and `LogsInstance` CRDs) are deprecated. We now recommend
-operator users use the [Grafana Agent Helm Chart](https://grafana.com/docs/agent/latest/flow/setup/install/kubernetes/) to deploy the Agent directly to your clusters.
+- The Monitor types (`PodMonitor`, `ServiceMonitor`, `Probe`, and `LogsInstance`) are all supported natively by {{< param "PRODUCT_NAME" >}}.
+ You are no longer required to use the Operator to consume those CRDs for dynamic monitoring in your cluster.
+- The parts of the Operator that deploy the {{< param "PRODUCT_ROOT_NAME" >}} itself (`GrafanaAgent`, `MetricsInstance`, and `LogsInstance` CRDs) are deprecated.
+ We now recommend operator users use the {{< param "PRODUCT_ROOT_NAME" >}} [Helm Chart](https://grafana.com/docs/agent/latest/flow/setup/install/kubernetes/) to deploy {{< param "PRODUCT_ROOT_NAME" >}} directly to your clusters.
-This guide will provide some steps to get started with Grafana Agent for users coming from Grafana Agent Operator.
+This guide will provide some steps to get started with {{< param "PRODUCT_NAME" >}} for users coming from Grafana Agent Operator.
-## Deploy Grafana Agent with Helm
+## Deploy {{< param "PRODUCT_NAME" >}} with Helm
-1. You will need to create a `values.yaml` file, which contains options for deploying your Agent. You may start with the [default values](https://github.com/grafana/agent/blob/main/operations/helm/charts/grafana-agent/values.yaml) and customize as you see fit, or start with this snippet, which should be a good starting point for what the Operator does:
+1. You will need to create a `values.yaml` file, which contains options for deploying your {{< param "PRODUCT_ROOT_NAME" >}}.
+You may start with the [default values](https://github.com/grafana/agent/blob/main/operations/helm/charts/grafana-agent/values.yaml) and customize as you see fit, or start with this snippet, which should be a good starting point for what the Operator does:
```yaml
agent:
@@ -39,15 +40,15 @@ This guide will provide some steps to get started with Grafana Agent for users c
create: false
```
- This configuration will deploy Grafana Agent as a `StatefulSet` using the built-in [clustering](https://grafana.com/docs/agent/latest/flow/concepts/clustering/) functionality to allow distributing scrapes across all Agent Pods.
-
- This is not the only deployment mode possible. For example, you may want to use a `DaemonSet` to collect host-level logs or metrics. See [the Agent deployment guide](https://grafana.com/docs/agent/latest/flow/setup/deploy-agent/) for more details about different topologies.
+ This configuration will deploy {{< param "PRODUCT_NAME" >}} as a `StatefulSet` using the built-in [clustering](https://grafana.com/docs/agent/latest/flow/concepts/clustering/) functionality to allow distributing scrapes across all {{< param "PRODUCT_ROOT_NAME" >}} Pods.
-2. Create a Flow config file, `agent.river`.
+ This is not the only deployment mode possible. For example, you may want to use a `DaemonSet` to collect host-level logs or metrics. See the {{< param "PRODUCT_NAME" >}} [deployment guide](https://grafana.com/docs/agent/latest/flow/setup/deploy-agent/) for more details about different topologies.
- We will be adding to this config in the next step as we convert `MetricsInstances`. You can add any additional configuration to this file as you desire.
+2. Create a {{< param "PRODUCT_ROOT_NAME" >}} configuration file, `agent.river`.
-3. Install the grafana helm repository:
+ We will be adding to this configuration in the next step as we convert `MetricsInstances`. You can add any additional configuration to this file as you desire.
+
+3. Install the Grafana Helm repository:
```
helm repo add grafana https://grafana.github.io/helm-charts
@@ -66,10 +67,10 @@ This guide will provide some steps to get started with Grafana Agent for users c
A `MetricsInstance` resource primarily defines:
-- The remote endpoint(s) Grafana Agent should send metrics to.
-- Which `PodMonitor`, `ServiceMonitor`, and `Probe` resources this Agent should discover.
+- The remote endpoint(s) {{< param "PRODUCT_NAME" >}} should send metrics to.
+- Which `PodMonitor`, `ServiceMonitor`, and `Probe` resources this {{< param "PRODUCT_ROOT_NAME" >}} should discover.
-These functions can be done in Grafana Agent Flow with the `prometheus.remote_write`, `prometheus.operator.podmonitors`, `prometheus.operator.servicemonitors`, and `prometheus.operator.probes` components respectively.
+These functions can be done in {{< param "PRODUCT_NAME" >}} with the `prometheus.remote_write`, `prometheus.operator.podmonitors`, `prometheus.operator.servicemonitors`, and `prometheus.operator.probes` components respectively.
This is a River sample that is equivalent to the `MetricsInstance` from our [operator guide](https://grafana.com/docs/agent/latest/operator/deploy-agent-operator-resources/#deploy-a-metricsinstance-resource):
@@ -113,7 +114,7 @@ You will need to replace `PROMETHEUS_URL` with the actual endpoint you want to s
This configuration will discover all `PodMonitor`, `ServiceMonitor`, and `Probe` resources in your cluster that match our label selector `instance=primary`. It will then scrape metrics from their targets and forward them to your remote write endpoint.
-You may need to customize this configuration further if you use additional features in your `MetricsInstance` resources. Refer to the documentation for the relevant components for additional information:
+You may need to customize this configuration further if you use additional features in your `MetricsInstance` resources. Refer to the documentation for the relevant components for additional information:
- [remote.kubernetes.secret](https://grafana.com/docs/agent/latest/flow/reference/components/remote.kubernetes.secret)
- [prometheus.remote_write](https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.remote_write)
@@ -124,10 +125,10 @@ You may need to customize this configuration further if you use additional featu
## Collecting Logs
-Our current recommendation is to create an additional DaemonSet deployment of Grafana Agents to scrape logs.
+Our current recommendation is to create an additional DaemonSet deployment of {{< param "PRODUCT_ROOT_NAME" >}}s to scrape logs.
-> We have components that can scrape pod logs directly from the Kubernetes API without needing a DaemonSet deployment. These are
-> still considered experimental, but if you would like to try them, see the documentation for [loki.source.kubernetes](https://grafana.com/docs/agent/latest/flow/reference/components/loki.source.kubernetes/) and
+> We have components that can scrape pod logs directly from the Kubernetes API without needing a DaemonSet deployment. These are
+> still considered experimental, but if you would like to try them, see the documentation for [loki.source.kubernetes](https://grafana.com/docs/agent/latest/flow/reference/components/loki.source.kubernetes/) and
> [loki.source.podlogs](https://grafana.com/docs/agent/latest/flow/reference/components/loki.source.podlogs/).
These values are close to what the Operator currently deploys for logs:
@@ -262,4 +263,4 @@ and has many options for processing logs. For further details see the [component
## Integrations
-The `Integration` CRD is not supported with Grafana Agent Flow, however all static mode integrations have an equivalent component in the [`prometheus.exporter`](https://grafana.com/docs/agent/latest/flow/reference/components) namespace. The reference docs should help convert those integrations to their Flow equivalent.
+The `Integration` CRD is not supported with {{< param "PRODUCT_NAME" >}}; however, all static mode integrations have an equivalent component in the [`prometheus.exporter`](https://grafana.com/docs/agent/latest/flow/reference/components) namespace. The reference docs should help convert those integrations to their {{< param "PRODUCT_NAME" >}} equivalent.
diff --git a/docs/sources/flow/getting-started/migrating-from-prometheus.md b/docs/sources/flow/getting-started/migrating-from-prometheus.md
index d4f5b1616f77..bbeff0bf1d0c 100644
--- a/docs/sources/flow/getting-started/migrating-from-prometheus.md
+++ b/docs/sources/flow/getting-started/migrating-from-prometheus.md
@@ -11,14 +11,14 @@ title: Migrate from Prometheus to Grafana Agent Flow
weight: 320
---
-# Migrate from Prometheus to Grafana Agent Flow
+# Migrate from Prometheus to {{< param "PRODUCT_NAME" >}}
-The built-in Grafana Agent convert command can migrate your [Prometheus][] configuration to a Grafana Agent flow configuration.
+The built-in {{< param "PRODUCT_ROOT_NAME" >}} convert command can migrate your [Prometheus][] configuration to a {{< param "PRODUCT_NAME" >}} configuration.
This topic describes how to:
-* Convert a Prometheus configuration to a flow configuration.
-* Run a Prometheus configuration natively using Grafana Agent flow mode.
+* Convert a Prometheus configuration to a {{< param "PRODUCT_NAME" >}} configuration.
+* Run a Prometheus configuration natively using {{< param "PRODUCT_NAME" >}}.
## Components used in this topic
@@ -28,15 +28,15 @@ This topic describes how to:
## Before you begin
* You must have an existing Prometheus configuration.
-* You must have a set of Prometheus applications ready to push telemetry data to Grafana Agent.
-* You must be familiar with the concept of [Components][] in Grafana Agent flow mode.
+* You must have a set of Prometheus applications ready to push telemetry data to {{< param "PRODUCT_NAME" >}}.
+* You must be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.
## Convert a Prometheus configuration
-To fully migrate your configuration from [Prometheus] to Grafana Agent
-in flow mode, you must convert your Prometheus configuration into a Grafana Agent flow
-mode configuration. This conversion will enable you to take full advantage of the many
-additional features available in Grafana Agent flow mode.
+To fully migrate your configuration from [Prometheus] to {{< param "PRODUCT_NAME" >}},
+you must convert your Prometheus configuration into a {{< param "PRODUCT_NAME" >}} configuration.
+This conversion will enable you to take full advantage of the many
+additional features available in {{< param "PRODUCT_NAME" >}}.
> In this task, we will use the [convert][] CLI command to output a flow
> configuration from a Prometheus configuration.
@@ -57,9 +57,9 @@ additional features available in Grafana Agent flow mode.
Replace the following:
* `INPUT_CONFIG_PATH`: The full path to the Prometheus configuration.
- * `OUTPUT_CONFIG_PATH`: The full path to output the flow configuration.
+ * `OUTPUT_CONFIG_PATH`: The full path to output the {{< param "PRODUCT_NAME" >}} configuration.
-1. [Start the agent][] in flow mode using the new flow configuration from `OUTPUT_CONFIG_PATH`:
+1. [Start][] {{< param "PRODUCT_NAME" >}} using the new flow configuration from `OUTPUT_CONFIG_PATH`:
### Debugging
@@ -112,27 +112,26 @@ additional features available in Grafana Agent flow mode.
## Run a Prometheus configuration
-If you’re not ready to completely switch to a flow configuration, you can run Grafana Agent using your existing Prometheus configuration.
-The `--config.format=prometheus` flag tells Grafana Agent to convert your Prometheus configuration to flow mode and load it directly without saving the new configuration.
-This allows you to try flow mode without modifying your existing Prometheus configuration infrastructure.
+If you’re not ready to completely switch to a flow configuration, you can run {{< param "PRODUCT_ROOT_NAME" >}} using your existing Prometheus configuration.
+The `--config.format=prometheus` flag tells {{< param "PRODUCT_ROOT_NAME" >}} to convert your Prometheus configuration to a {{< param "PRODUCT_NAME" >}} configuration and load it directly without saving the new configuration.
+This allows you to try {{< param "PRODUCT_NAME" >}} without modifying your existing Prometheus configuration infrastructure.
-> In this task, we will use the [run][] CLI command to run Grafana Agent in flow
-> mode using a Prometheus configuration.
+> In this task, we will use the [run][] CLI command to run {{< param "PRODUCT_NAME" >}}
+> using a Prometheus configuration.
-[Start the agent][] in flow mode and include the command line flag
+[Start][] {{< param "PRODUCT_NAME" >}} and include the command line flag
`--config.format=prometheus`. Your configuration file must be a valid
- Prometheus configuration file rather than a flow mode configuration file.
+ Prometheus configuration file rather than a {{< param "PRODUCT_NAME" >}} configuration file.
### Debugging
1. You can follow the convert CLI command [debugging][] instructions to
generate a diagnostic report.
-1. Refer to the Grafana Agent [Flow Debugging][] for more information about a running Grafana
- Agent in flow mode.
+1. Refer to the {{< param "PRODUCT_NAME" >}} [Debugging][DebuggingUI] for more information about running {{< param "PRODUCT_NAME" >}}.
-1. If your Prometheus configuration cannot be converted and
- loaded directly into Grafana Agent, diagnostic information
+1. If your Prometheus configuration can't be converted and
+ loaded directly into {{< param "PRODUCT_NAME" >}}, diagnostic information
is sent to `stderr`. You can bypass any non-critical issues
and start the Agent by including the
`--config.bypass-conversion-errors` flag in addition to
@@ -145,7 +144,7 @@ This allows you to try flow mode without modifying your existing Prometheus conf
## Example
-This example demonstrates converting a Prometheus configuration file to a Grafana Agent flow mode configuration file.
+This example demonstrates converting a Prometheus configuration file to a {{< param "PRODUCT_NAME" >}} configuration file.
The following Prometheus configuration file provides the input for the conversion:
@@ -217,26 +216,26 @@ prometheus.remote_write "default" {
## Limitations
-Configuration conversion is done on a best-effort basis. The Agent will issue
+Configuration conversion is done on a best-effort basis. {{< param "PRODUCT_ROOT_NAME" >}} will issue
warnings or errors where the conversion cannot be performed.
Once the configuration is converted, we recommend that you review
-the Flow Mode configuration file created and verify that it is correct
+the {{< param "PRODUCT_NAME" >}} configuration file created and verify that it is correct
before starting to use it in a production environment.
Furthermore, we recommend that you review the following checklist:
-* The following configurations are not available for conversion to flow mode:
+* The following configurations are not available for conversion to {{< param "PRODUCT_NAME" >}}:
`rule_files`, `alerting`, `remote_read`, `storage`, and `tracing`. Any
additional unsupported features are returned as errors during conversion.
* Check if you are using any extra command line arguments with Prometheus that
are not present in your configuration file. For example, `--web.listen-address`.
-* Metamonitoring metrics exposed by the Flow Mode usually match Prometheus
+* Metamonitoring metrics exposed by {{< param "PRODUCT_NAME" >}} usually match Prometheus
metamonitoring metrics but will use a different name. Make sure that you use
the new metric names, for example, in your alerts and dashboards queries.
-* The logs produced by Grafana Agent will differ from those
+* The logs produced by {{< param "PRODUCT_NAME" >}} differ from those
produced by Prometheus.
-* Grafana Agent exposes the [Grafana Agent Flow UI][].
+* {{< param "PRODUCT_ROOT_NAME" >}} exposes the {{< param "PRODUCT_NAME" >}} [UI][].
[Prometheus]: https://prometheus.io/docs/prometheus/latest/configuration/configuration/
[debugging]: #debugging
@@ -252,12 +251,12 @@ Furthermore, we recommend that you review the following checklist:
[convert]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/reference/cli/convert.md"
[run]: "/docs/agent/ -> /docs/agent//flow/reference/cli/run.md"
[run]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/reference/cli/run.md"
-[Start the agent]: "/docs/agent/ -> /docs/agent//flow/setup/start-agent.md"
-[Start the agent]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/setup/start-agent.md"
-[Flow Debugging]: "/docs/agent/ -> /docs/agent//flow/monitoring/debugging.md"
-[Flow Debugging]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/monitoring/debugging.md"
+[Start]: "/docs/agent/ -> /docs/agent//flow/setup/start-agent.md"
+[Start]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/setup/start-agent.md"
+[DebuggingUI]: "/docs/agent/ -> /docs/agent//flow/monitoring/debugging.md"
+[DebuggingUI]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/monitoring/debugging.md"
[River]: "/docs/agent/ -> /docs/agent//flow/config-language/_index.md"
[River]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/config-language/_index.md"
-[Grafana Agent Flow UI]: "/docs/agent/ -> /docs/agent//flow/monitoring/debugging#grafana-agent-flow-ui"
-[Grafana Agent Flow UI]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/monitoring/debugging#grafana-agent-flow-ui"
+[UI]: "/docs/agent/ -> /docs/agent//flow/monitoring/debugging#grafana-agent-flow-ui"
+[UI]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/monitoring/debugging#grafana-agent-flow-ui"
{{% /docs/reference %}}
diff --git a/docs/sources/flow/getting-started/migrating-from-promtail.md b/docs/sources/flow/getting-started/migrating-from-promtail.md
index 178f82b1ae7b..b6ddcd68e570 100644
--- a/docs/sources/flow/getting-started/migrating-from-promtail.md
+++ b/docs/sources/flow/getting-started/migrating-from-promtail.md
@@ -11,15 +11,15 @@ title: Migrate from Promtail to Grafana Agent Flow
weight: 330
---
-# Migrate from Promtail to Grafana Agent Flow
+# Migrate from Promtail to {{< param "PRODUCT_NAME" >}}
-The built-in Grafana Agent convert command can migrate your [Promtail][]
-configuration to a Grafana Agent flow configuration.
+The built-in {{< param "PRODUCT_ROOT_NAME" >}} convert command can migrate your [Promtail][]
+configuration to a {{< param "PRODUCT_NAME" >}} configuration.
This topic describes how to:
-* Convert a Promtail configuration to a Flow Mode configuration.
-* Run a Promtail configuration natively using Grafana Agent Flow Mode.
+* Convert a Promtail configuration to a {{< param "PRODUCT_NAME" >}} configuration.
+* Run a Promtail configuration natively using {{< param "PRODUCT_NAME" >}}.
## Components used in this topic
@@ -30,14 +30,14 @@ This topic describes how to:
## Before you begin
* You must have an existing Promtail configuration.
-* You must be familiar with the concept of [Components][] in Grafana Agent Flow mode.
+* You must be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.
## Convert a Promtail configuration
-To fully migrate from [Promtail] to Grafana Agent Flow Mode, you must convert
-your Promtail configuration into a Grafana Agent Flow Mode configuration. This
+To fully migrate from [Promtail] to {{< param "PRODUCT_NAME" >}}, you must convert
+your Promtail configuration into a {{< param "PRODUCT_NAME" >}} configuration. This
conversion will enable you to take full advantage of the many additional
-features available in Grafana Agent Flow Mode.
+features available in {{< param "PRODUCT_NAME" >}}.
> In this task, we will use the [convert][] CLI command to output a flow
> configuration from a Promtail configuration.
@@ -61,7 +61,7 @@ features available in Grafana Agent Flow Mode.
* `INPUT_CONFIG_PATH`: The full path to the Promtail configuration.
* `OUTPUT_CONFIG_PATH`: The full path to output the flow configuration.
-1. [Start the Agent][] in Flow Mode using the new flow configuration
+1. [Start][] {{< param "PRODUCT_NAME" >}} using the new flow configuration
from `OUTPUT_CONFIG_PATH`:
### Debugging
@@ -108,23 +108,22 @@ features available in Grafana Agent Flow Mode.
report provides the following information:
```plaintext
- (Warning) If you have a tracing set up for Promtail, it cannot be migrated to Flow Mode automatically. Refer to the documentation on how to configure tracing in Flow Mode.
- (Warning) The Agent Flow Mode's metrics are different from the metrics emitted by Promtail. If you rely on Promtail's metrics, you must change your configuration, for example, your alerts and dashboards.
+ (Warning) If you have a tracing set up for Promtail, it cannot be migrated to {{< param "PRODUCT_NAME" >}} automatically. Refer to the documentation on how to configure tracing in {{< param "PRODUCT_NAME" >}}.
+ (Warning) The metrics from {{< param "PRODUCT_NAME" >}} are different from the metrics emitted by Promtail. If you rely on Promtail's metrics, you must change your configuration, for example, your alerts and dashboards.
```
## Run a Promtail configuration
If you’re not ready to completely switch to a flow configuration, you can run
-Grafana Agent using your existing Promtail configuration.
-The `--config.format=promtail` flag tells Grafana Agent to convert your Promtail
-configuration to Flow Mode and load it directly without saving the new
-configuration. This allows you to try Flow Mode without modifying your existing
+{{< param "PRODUCT_ROOT_NAME" >}} using your existing Promtail configuration.
+The `--config.format=promtail` flag tells {{< param "PRODUCT_ROOT_NAME" >}} to convert your Promtail
+configuration to a {{< param "PRODUCT_NAME" >}} configuration and load it directly without saving the new
+configuration. This allows you to try {{< param "PRODUCT_NAME" >}} without modifying your existing
Promtail configuration infrastructure.
-> In this task, we will use the [run][] CLI command to run Grafana Agent in Flow
-> mode using a Promtail configuration.
+> In this task, we will use the [run][] CLI command to run {{< param "PRODUCT_NAME" >}} using a Promtail configuration.
-[Start the Agent][] in Flow mode and include the command line flag
+[Start][] {{< param "PRODUCT_NAME" >}} and include the command line flag
`--config.format=promtail`. Your configuration file must be a valid Promtail
configuration file rather than a Flow mode configuration file.
@@ -133,12 +132,12 @@ configuration file rather than a Flow mode configuration file.
1. You can follow the convert CLI command [debugging][] instructions to generate
a diagnostic report.
-1. Refer to the Grafana Agent [Flow Debugging][] for more information about
- running Grafana Agent in Flow mode.
+1. Refer to the {{< param "PRODUCT_NAME" >}} [Debugging][DebuggingUI] for more information about
+ running {{< param "PRODUCT_NAME" >}}.
1. If your Promtail configuration can't be converted and loaded directly into
- Grafana Agent, diagnostic information is sent to `stderr`. You can bypass any
- non-critical issues and start the Agent by including the
+ {{< param "PRODUCT_ROOT_NAME" >}}, diagnostic information is sent to `stderr`. You can bypass any
+ non-critical issues and start {{< param "PRODUCT_ROOT_NAME" >}} by including the
`--config.bypass-conversion-errors` flag in addition to
`--config.format=promtail`.
@@ -149,8 +148,7 @@ configuration file rather than a Flow mode configuration file.
## Example
-This example demonstrates converting a Promtail configuration file to a Grafana
-Agent Flow mode configuration file.
+This example demonstrates converting a Promtail configuration file to a {{< param "PRODUCT_NAME" >}} configuration file.
The following Promtail configuration file provides the input for the conversion:
@@ -180,7 +178,7 @@ grafana-agent-flow convert --source-format=promtail --output=OUTPUT_CONFIG_PATH
{{< /code >}}
-The new Flow Mode configuration file looks like this:
+The new {{< param "PRODUCT_NAME" >}} configuration file looks like this:
```river
local.file_match "example" {
@@ -205,11 +203,11 @@ loki.write "default" {
## Limitations
-Configuration conversion is done on a best-effort basis. Grafana Agent will issue
+Configuration conversion is done on a best-effort basis. {{< param "PRODUCT_ROOT_NAME" >}} will issue
warnings or errors where the conversion can't be performed.
Once the configuration is converted, we recommend that you review
-the Flow Mode configuration file created, and verify that it's correct
+the {{< param "PRODUCT_NAME" >}} configuration file created, and verify that it's correct
before starting to use it in a production environment.
Furthermore, we recommend that you review the following checklist:
@@ -219,16 +217,16 @@ Furthermore, we recommend that you review the following checklist:
* Check if you are setting any environment variables,
whether [expanded in the config file][] itself or consumed directly by
Promtail, such as `JAEGER_AGENT_HOST`.
-* In Flow Mode, the positions file is saved at a different location.
+* In {{< param "PRODUCT_NAME" >}}, the positions file is saved at a different location.
Refer to the [loki.source.file][] documentation for more details. Check if you have any existing
setup, for example, a Kubernetes Persistent Volume, that you must update to use the new
positions file path.
* Metamonitoring metrics exposed by the Flow Mode usually match Promtail
metamonitoring metrics but will use a different name. Make sure that you
use the new metric names, for example, in your alerts and dashboards queries.
-* Note that the logs produced by the Agent will differ from those
+* Note that the logs produced by {{< param "PRODUCT_NAME" >}} will differ from those
produced by Promtail.
-* Note that the Agent exposes the [Grafana Agent Flow UI][], which differs
+* Note that {{< param "PRODUCT_NAME" >}} exposes the {{< param "PRODUCT_NAME" >}} [UI][], which differs
from Promtail's Web UI.
[Promtail]: https://www.grafana.com/docs/loki//clients/promtail/
@@ -248,12 +246,12 @@ Furthermore, we recommend that you review the following checklist:
[convert]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/reference/cli/convert.md"
[run]: "/docs/agent/ -> /docs/agent//flow/reference/cli/run.md"
[run]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/reference/cli/run.md"
-[Start the agent]: "/docs/agent/ -> /docs/agent//flow/setup/start-agent.md"
-[Start the agent]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/setup/start-agent.md"
-[Flow Debugging]: "/docs/agent/ -> /docs/agent//flow/monitoring/debugging.md"
-[Flow Debugging]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/monitoring/debugging.md"
+[Start]: "/docs/agent/ -> /docs/agent//flow/setup/start-agent.md"
+[Start]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/setup/start-agent.md"
+[DebuggingUI]: "/docs/agent/ -> /docs/agent//flow/monitoring/debugging.md"
+[DebuggingUI]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/monitoring/debugging.md"
[River]: "/docs/agent/ -> /docs/agent//flow/config-language/_index.md"
[River]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/config-language/_index.md"
-[Grafana Agent Flow UI]: "/docs/agent/ -> /docs/agent//flow/monitoring/debugging#grafana-agent-flow-ui"
-[Grafana Agent Flow UI]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/monitoring/debugging#grafana-agent-flow-ui"
+[UI]: "/docs/agent/ -> /docs/agent//flow/monitoring/debugging#grafana-agent-flow-ui"
+[UI]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/monitoring/debugging#grafana-agent-flow-ui"
{{% /docs/reference %}}
diff --git a/docs/sources/flow/getting-started/migrating-from-static.md b/docs/sources/flow/getting-started/migrating-from-static.md
index 6db39ede8673..f425c7bdda8f 100644
--- a/docs/sources/flow/getting-started/migrating-from-static.md
+++ b/docs/sources/flow/getting-started/migrating-from-static.md
@@ -5,22 +5,20 @@ aliases:
- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/getting-started/migrating-from-static/
- /docs/grafana-cloud/send-data/agent/flow/getting-started/migrating-from-static/
canonical: https://grafana.com/docs/agent/latest/flow/getting-started/migrating-from-static/
-description: Learn how to migrate your configuration from Grafana Agent Static mode
- to Flow mode
-menuTitle: Migrate from Static mode to Flow mode
-title: Migrate Grafana Agent from Static mode to Flow mode
+description: Learn how to migrate your configuration from Grafana Agent Static to Grafana Agent Flow
+menuTitle: Migrate from Static to Flow
+title: Migrate Grafana Agent Static to Grafana Agent Flow
weight: 340
---
-# Migrate Grafana Agent from Static mode to Flow mode
+# Migrate from {{< param "PRODUCT_ROOT_NAME" >}} Static to {{< param "PRODUCT_NAME" >}}
-The built-in Grafana Agent convert command can migrate your [Static][] mode
-configuration to a Flow mode configuration.
+The built-in {{< param "PRODUCT_ROOT_NAME" >}} convert command can migrate your [Static][] configuration to a {{< param "PRODUCT_NAME" >}} configuration.
This topic describes how to:
-* Convert a Grafana Agent Static mode configuration to a Flow mode configuration.
-* Run a Grafana Agent Static mode configuration natively using Grafana Agent Flow mode.
+* Convert a Grafana Agent Static configuration to a {{< param "PRODUCT_NAME" >}} configuration.
+* Run a Grafana Agent Static configuration natively using {{< param "PRODUCT_NAME" >}}.
## Components used in this topic
@@ -33,18 +31,18 @@ This topic describes how to:
## Before you begin
-* You must have an existing Grafana Agent Static mode configuration.
-* You must be familiar with the [Components][] concept in Grafana Agent Flow mode.
+* You must have an existing Grafana Agent Static configuration.
+* You must be familiar with the [Components][] concept in {{< param "PRODUCT_NAME" >}}.
-## Convert a Static mode configuration
+## Convert a Grafana Agent Static configuration
-To fully migrate Grafana Agent from [Static][] mode to Flow mode, you must convert
-your Static mode configuration into a Flow mode configuration.
+To fully migrate Grafana Agent [Static][] to {{< param "PRODUCT_NAME" >}}, you must convert
+your Static configuration into a {{< param "PRODUCT_NAME" >}} configuration.
This conversion will enable you to take full advantage of the many additional
-features available in Grafana Agent Flow mode.
+features available in {{< param "PRODUCT_NAME" >}}.
-> In this task, we will use the [convert][] CLI command to output a Flow mode
-> configuration from a Static mode configuration.
+> In this task, we will use the [convert][] CLI command to output a {{< param "PRODUCT_NAME" >}}
+> configuration from a Static configuration.
1. Open a terminal window and run the following command:
@@ -62,19 +60,21 @@ features available in Grafana Agent Flow mode.
Replace the following:
* `INPUT_CONFIG_PATH`: The full path to the [Static][] configuration.
- * `OUTPUT_CONFIG_PATH`: The full path to output the flow configuration.
+ * `OUTPUT_CONFIG_PATH`: The full path to output the {{< param "PRODUCT_NAME" >}} configuration.
-1. [Start the Agent][] in Flow mode using the new Flow mode configuration
+1. [Start][] {{< param "PRODUCT_NAME" >}} using the new {{< param "PRODUCT_NAME" >}} configuration
from `OUTPUT_CONFIG_PATH`:
### Debugging
-1. If the convert command cannot convert a [Static] mode configuration, diagnostic
- information is sent to `stderr`. You can use the `--bypass-errors` flag to
- bypass any non-critical issues and output the Flow mode configuration
+1. If the convert command cannot convert a [Static] configuration, diagnostic
+ information is sent to `stderr`. You can use the `--bypass-errors` flag to
+ bypass any non-critical issues and output the {{< param "PRODUCT_NAME" >}} configuration
using a best-effort conversion.
- {{% admonition type="caution" %}}If you bypass the errors, the behavior of the converted configuration may not match the original [Static] mode configuration. Make sure you fully test the converted configuration before using it in a production environment.{{% /admonition %}}
+ {{% admonition type="caution" %}}
+ If you bypass the errors, the behavior of the converted configuration may not match the original [Static] configuration. Make sure you fully test the converted configuration before using it in a production environment.
+ {{% /admonition %}}
{{< code >}}
@@ -104,7 +104,7 @@ features available in Grafana Agent Flow mode.
* Replace `OUTPUT_REPORT_PATH` with the output path for the report.
- Using the [example](#example) Grafana Agent Static Mode configuration below, the diagnostic
+ Using the [example](#example) Grafana Agent Static configuration below, the diagnostic
report provides the following information:
```plaintext
@@ -113,40 +113,41 @@ features available in Grafana Agent Flow mode.
## Run a Static mode configuration
-If you’re not ready to completely switch to a Flow mode configuration, you can run
-Grafana Agent using your existing Grafana Agent Static mode configuration.
-The `--config.format=static` flag tells Grafana Agent to convert your [Static] mode
-configuration to Flow mode and load it directly without saving the new
-configuration. This allows you to try Flow mode without modifying your existing
-Grafana Agent Static mode configuration infrastructure.
+If you’re not ready to completely switch to a {{< param "PRODUCT_NAME" >}} configuration, you can run
+{{< param "PRODUCT_ROOT_NAME" >}} using your existing Grafana Agent Static configuration.
+The `--config.format=static` flag tells {{< param "PRODUCT_ROOT_NAME" >}} to convert your [Static]
+configuration to {{< param "PRODUCT_NAME" >}} and load it directly without saving the new
+configuration. This allows you to try {{< param "PRODUCT_NAME" >}} without modifying your existing
+Grafana Agent Static configuration infrastructure.
-> In this task, we will use the [run][] CLI command to run Grafana Agent in Flow
-> mode using a Static mode configuration.
+> In this task, we will use the [run][] CLI command to run {{< param "PRODUCT_NAME" >}} using a Static configuration.
-[Start the Agent][] in Flow mode and include the command line flag
+[Start][] {{< param "PRODUCT_NAME" >}} and include the command line flag
`--config.format=static`. Your configuration file must be a valid [Static]
-mode configuration file.
+configuration file.
### Debugging
1. You can follow the convert CLI command [debugging][] instructions to generate
a diagnostic report.
-1. Refer to the Grafana Agent [Flow Debugging][] for more information about
- running Grafana Agent in Flow mode.
+1. Refer to the {{< param "PRODUCT_NAME" >}} [debugging documentation][DebuggingUI] for more information about
+ running {{< param "PRODUCT_NAME" >}}.
-1. If your [Static] mode configuration can't be converted and loaded directly into
- Grafana Agent, diagnostic information is sent to `stderr`. You can use the `
- --config.bypass-conversion-errors` flag with `--config.format=static` to bypass any
- non-critical issues and start the Agent.
+1. If your [Static] configuration can't be converted and loaded directly into
+ {{< param "PRODUCT_NAME" >}}, diagnostic information is sent to `stderr`. You can use the `
+ --config.bypass-conversion-errors` flag with `--config.format=static` to bypass any
+ non-critical issues and start {{< param "PRODUCT_NAME" >}}.
- {{% admonition type="caution" %}}If you bypass the errors, the behavior of the converted configuration may not match the original Grafana Agent Static mode configuration. Do not use this flag in a production environment.{{%/admonition %}}
+ {{% admonition type="caution" %}}
+ If you bypass the errors, the behavior of the converted configuration may not match the original Grafana Agent Static configuration. Do not use this flag in a production environment.
+   {{% /admonition %}}
## Example
-This example demonstrates converting a [Static] mode configuration file to a Flow mode configuration file.
+This example demonstrates converting a [Static] configuration file to a {{< param "PRODUCT_NAME" >}} configuration file.
-The following [Static] mode configuration file provides the input for the conversion:
+The following [Static] configuration file provides the input for the conversion:
```yaml
server:
@@ -216,7 +217,7 @@ grafana-agent-flow convert --source-format=static --output=OUTPUT_CONFIG_PATH IN
{{< /code >}}
-The new Flow mode configuration file looks like this:
+The new {{< param "PRODUCT_NAME" >}} configuration file looks like this:
```river
prometheus.scrape "metrics_test_local_agent" {
@@ -295,32 +296,32 @@ loki.write "logs_varlogs" {
## Limitations
-Configuration conversion is done on a best-effort basis. The Agent will issue
+Configuration conversion is done on a best-effort basis. {{< param "PRODUCT_ROOT_NAME" >}} will issue
warnings or errors where the conversion cannot be performed.
Once the configuration is converted, we recommend that you review
-the Flow mode configuration file, and verify that it is correct
+the {{< param "PRODUCT_NAME" >}} configuration file, and verify that it is correct
before starting to use it in a production environment.
Furthermore, we recommend that you review the following checklist:
-* The following configuration options are not available for conversion to Flow
- mode: [Integrations next][], [Traces][], and [Agent Management][]. Any
+* The following configuration options are not available for conversion to {{< param "PRODUCT_NAME" >}}:
+ [Integrations next][], [Traces][], and [Agent Management][]. Any
additional unsupported features are returned as errors during conversion.
-* There is no gRPC server to configure for Flow mode, so any non-default config
+* There is no gRPC server to configure for {{< param "PRODUCT_NAME" >}}, so any non-default configuration
will show as unsupported during the conversion.
-* Check if you are using any extra command line arguments with Static mode that
+* Check if you are using any extra command-line arguments with Static that
are not present in your configuration file. For example, `-server.http.address`.
-* Check if you are using any environment variables in your [Static] mode configuration.
+* Check if you are using any environment variables in your [Static] configuration.
These will be evaluated during conversion and you may want to replace them
- with the Flow Standard library [env] function after conversion.
+ with the {{< param "PRODUCT_NAME" >}} Standard library [env] function after conversion.
* Review additional [Prometheus Limitations] for limitations specific to your
[Metrics] config.
* Review additional [Promtail Limitations] for limitations specific to your
[Logs] config.
-* The logs produced by Grafana Agent Flow mode will differ from those
- produced by Static mode.
-* Grafana Agent exposes the [Grafana Agent Flow UI][].
+* The logs produced by {{< param "PRODUCT_NAME" >}} will differ from those
+ produced by Static.
+* {{< param "PRODUCT_ROOT_NAME" >}} exposes the {{< param "PRODUCT_NAME" >}} [UI][].
[debugging]: #debugging
@@ -345,10 +346,10 @@ Furthermore, we recommend that you review the following checklist:
[convert]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/reference/cli/convert.md"
[run]: "/docs/agent/ -> /docs/agent//flow/reference/cli/run.md"
[run]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/reference/cli/run.md"
-[Start the agent]: "/docs/agent/ -> /docs/agent//flow/setup/start-agent.md"
-[Start the agent]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/setup/start-agent.md"
-[Flow Debugging]: "/docs/agent/ -> /docs/agent//flow/monitoring/debugging.md"
-[Flow Debugging]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/monitoring/debugging.md"
+[Start]: "/docs/agent/ -> /docs/agent//flow/setup/start-agent.md"
+[Start]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/setup/start-agent.md"
+[DebuggingUI]: "/docs/agent/ -> /docs/agent//flow/monitoring/debugging.md"
+[DebuggingUI]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/monitoring/debugging.md"
[River]: "/docs/agent/ -> /docs/agent//flow/config-language/"
[River]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/config-language/"
[Integrations next]: "/docs/agent/ -> /docs/agent//static/configuration/integrations/integrations-next/_index.md"
@@ -367,6 +368,6 @@ Furthermore, we recommend that you review the following checklist:
[Metrics]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/static/configuration/metrics-config.md"
[Logs]: "/docs/agent/ -> /docs/agent//static/configuration/logs-config.md"
[Logs]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/static/logs-config.md"
-[Grafana Agent Flow UI]: "/docs/agent/ -> /docs/agent//flow/monitoring/debugging#grafana-agent-flow-ui"
-[Grafana Agent Flow UI]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/monitoring/debugging#grafana-agent-flow-ui"
+[UI]: "/docs/agent/ -> /docs/agent//flow/monitoring/debugging#grafana-agent-flow-ui"
+[UI]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/monitoring/debugging#grafana-agent-flow-ui"
{{% /docs/reference %}}
diff --git a/docs/sources/flow/getting-started/opentelemetry-to-lgtm-stack.md b/docs/sources/flow/getting-started/opentelemetry-to-lgtm-stack.md
index ebab4c6e4d2d..fa41091a2f78 100644
--- a/docs/sources/flow/getting-started/opentelemetry-to-lgtm-stack.md
+++ b/docs/sources/flow/getting-started/opentelemetry-to-lgtm-stack.md
@@ -13,13 +13,13 @@ weight: 350
# OpenTelemetry to Grafana stack
-You can configure Grafana Agent Flow to collect [OpenTelemetry][]-compatible data and forward it to the Grafana stack
+You can configure {{< param "PRODUCT_NAME" >}} to collect [OpenTelemetry][]-compatible data and forward it to the Grafana stack.
This topic describes how to:
-* Configure Grafana Agent to send your data to Loki
-* Configure Grafana Agent to send your data to Tempo
-* Configure Grafana Agent to send your data to Mimir or Prometheus Remote Write
+* Configure {{< param "PRODUCT_NAME" >}} to send your data to Loki
+* Configure {{< param "PRODUCT_NAME" >}} to send your data to Tempo
+* Configure {{< param "PRODUCT_NAME" >}} to send your data to Mimir or Prometheus Remote Write
## Components used in this topic
@@ -37,14 +37,14 @@ This topic describes how to:
* Ensure that you have basic familiarity with instrumenting applications with
OpenTelemetry.
* Have a set of OpenTelemetry applications ready to push telemetry data to
- Grafana Agent Flow.
-* Identify where Grafana Agent Flow will write received telemetry data.
-* Be familiar with the concept of [Components][] in Grafana Agent Flow.
+ {{< param "PRODUCT_NAME" >}}.
+* Identify where {{< param "PRODUCT_NAME" >}} will write received telemetry data.
+* Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.
* Complete the [Collect open telemetry data][] getting started guide. You will pick up from where that guide ended.
## The pipeline
-You can start with the Grafana Agent Flow configuration you created in the previous getting started guide:
+You can start with the {{< param "PRODUCT_NAME" >}} configuration you created in the previous getting started guide:
```river
otelcol.receiver.otlp "example" {
@@ -159,7 +159,7 @@ otelcol.auth.basic "grafana_cloud_tempo" {
## Grafana Mimir or Prometheus Remote Write
-[Prometheus Remote Write][] is a popular metrics transmission protocol supported by most metrics systems, including [Grafana Mimir][] and Grafana Cloud. To send from OTLP to Prometheus, we do a passthrough from the [otelcol.exporter.prometheus][] to the [prometheus.remote_write][] component. The Prometheus remote write component in Agent is a robust protocol implementation, including a Write Ahead Log for resiliency.
+[Prometheus Remote Write][] is a popular metrics transmission protocol supported by most metrics systems, including [Grafana Mimir][] and Grafana Cloud. To send from OTLP to Prometheus, we do a passthrough from the [otelcol.exporter.prometheus][] to the [prometheus.remote_write][] component. The Prometheus remote write component in {{< param "PRODUCT_NAME" >}} is a robust protocol implementation, including a Write Ahead Log (WAL) for resiliency.
```river
otelcol.exporter.prometheus "default" {
@@ -266,7 +266,7 @@ loki.write "grafana_cloud_loki" {
}
```
-Running the Agent now will give you the following:
+Running {{< param "PRODUCT_NAME" >}} now will give you the following:
```
AGENT_MODE=flow ./grafana-agent run agent-config.river
diff --git a/docs/sources/flow/monitoring/_index.md b/docs/sources/flow/monitoring/_index.md
index 184bd46b7a35..975db20e2031 100644
--- a/docs/sources/flow/monitoring/_index.md
+++ b/docs/sources/flow/monitoring/_index.md
@@ -7,11 +7,12 @@ aliases:
canonical: https://grafana.com/docs/agent/latest/flow/monitoring/
description: Learn about monitoring Grafana Agent Flow
title: Monitoring Grafana Agent Flow
+menuTitle: Monitoring
weight: 500
---
-# Monitoring Grafana Agent Flow
+# Monitoring {{< param "PRODUCT_NAME" >}}
-This section details various ways to monitor and debug Grafana Agent Flow.
+This section details various ways to monitor and debug {{< param "PRODUCT_NAME" >}}.
{{< section >}}
diff --git a/docs/sources/flow/monitoring/component_metrics.md b/docs/sources/flow/monitoring/component_metrics.md
index f85a0440c21c..33c9d5c02e98 100644
--- a/docs/sources/flow/monitoring/component_metrics.md
+++ b/docs/sources/flow/monitoring/component_metrics.md
@@ -13,7 +13,7 @@ weight: 200
# Component metrics
-Grafana Agent Flow [components][] may optionally expose Prometheus metrics
+{{< param "PRODUCT_NAME" >}} [components][] may optionally expose Prometheus metrics
which can be used to investigate the behavior of that component. These
component-specific metrics are only generated when an instance of that
component is running.
@@ -23,11 +23,11 @@ component is running.
> component for observability, alerting, and debugging.
Component-specific metrics are exposed at the `/metrics` HTTP endpoint of the
-Grafana Agent HTTP server, which defaults to listening on
+{{< param "PRODUCT_NAME" >}} HTTP server, which defaults to listening on
`http://localhost:12345`.
> The documentation for the [`grafana-agent run`][grafana-agent run] command describes how to
-> modify the address Grafana Agent listens on for HTTP traffic.
+> modify the address {{< param "PRODUCT_NAME" >}} listens on for HTTP traffic.
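+For example, you can collect these metrics by pointing a `prometheus.scrape` component at the HTTP server. This is only a sketch; the listen address and the `prometheus.remote_write.default` destination are assumptions to replace with your own values:
+```river
+// Scrape this instance's own /metrics endpoint for component-specific metrics.
+prometheus.scrape "self" {
+  targets    = [{"__address__" = "localhost:12345"}]
+  forward_to = [prometheus.remote_write.default.receiver]
+}
+```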
Component-specific metrics will have a `component_id` label matching the
component ID generating those metrics. For example, component-specific metrics
diff --git a/docs/sources/flow/monitoring/controller_metrics.md b/docs/sources/flow/monitoring/controller_metrics.md
index 7aa08f097ef0..a785909d91fe 100644
--- a/docs/sources/flow/monitoring/controller_metrics.md
+++ b/docs/sources/flow/monitoring/controller_metrics.md
@@ -13,15 +13,15 @@ weight: 100
# Controller metrics
-The Grafana Agent Flow [component controller][] exposes Prometheus metrics
+The {{< param "PRODUCT_NAME" >}} [component controller][] exposes Prometheus metrics
which can be used to investigate the controller state.
Metrics for the controller are exposed at the `/metrics` HTTP endpoint of the
-Grafana Agent HTTP server, which defaults to listening on
+{{< param "PRODUCT_NAME" >}} HTTP server, which defaults to listening on
`http://localhost:12345`.
> The documentation for the [`grafana-agent run`][grafana-agent run] command
-> describes how to modify the address Grafana Agent listens on for HTTP
+> describes how to modify the address {{< param "PRODUCT_NAME" >}} listens on for HTTP
> traffic.
The controller exposes the following metrics:
diff --git a/docs/sources/flow/monitoring/debugging.md b/docs/sources/flow/monitoring/debugging.md
index f5b634527e3a..eb1a87ef65d3 100644
--- a/docs/sources/flow/monitoring/debugging.md
+++ b/docs/sources/flow/monitoring/debugging.md
@@ -12,29 +12,29 @@ weight: 300
# Debugging
-Follow these steps to debug issues with Grafana Agent Flow:
+Follow these steps to debug issues with {{< param "PRODUCT_NAME" >}}:
-1. Use the Grafana Agent Flow UI to debug issues.
-2. If the UI doesn't help with debugging an issue, logs can be examined
+1. Use the {{< param "PRODUCT_NAME" >}} UI to debug issues.
+2. If the {{< param "PRODUCT_NAME" >}} UI doesn't help with debugging an issue, logs can be examined
instead.
-## Grafana Agent Flow UI
+## {{< param "PRODUCT_NAME" >}} UI
-Grafana Agent Flow includes an embedded UI viewable from Grafana Agent's HTTP
+{{< param "PRODUCT_NAME" >}} includes an embedded UI viewable from the {{< param "PRODUCT_ROOT_NAME" >}} HTTP
server, which defaults to listening at `http://localhost:12345`.
-> **NOTE**: For security reasons, installations of Grafana Agent Flow on
+> **NOTE**: For security reasons, installations of {{< param "PRODUCT_NAME" >}} on
> non-containerized platforms default to listening on `localhost`. This default
> prevents other machines on the network from being able to view the UI.
>
> To expose the UI to other machines on the network on non-containerized
> platforms, refer to the documentation for how you [installed][install]
-> Grafana Agent Flow.
+> {{< param "PRODUCT_NAME" >}}.
>
-> If you are running a custom installation of Grafana Agent Flow, refer to the
+> If you are running a custom installation of {{< param "PRODUCT_NAME" >}}, refer to the
> documentation for [the `grafana-agent run` command][grafana-agent run] to
> learn how to change the HTTP listen address, and pass the appropriate flag
-> when running Grafana Agent Flow.
+> when running {{< param "PRODUCT_NAME" >}}.
### Home page
@@ -46,7 +46,7 @@ their health.
Click **View** on a row in the table to navigate to the [Component detail page](#component-detail-page)
for that component.
-Click the Grafana Agent logo to navigate back to the home page.
+Click the {{< param "PRODUCT_ROOT_NAME" >}} logo to navigate back to the home page.
### Graph page
@@ -91,13 +91,13 @@ To debug using the UI:
## Examining logs
-Logs may also help debug issues with Grafana Agent Flow.
+Logs may also help debug issues with {{< param "PRODUCT_NAME" >}}.
To reduce logging noise, many components hide debugging info behind debug-level
log lines. It is recommended that you configure the [`logging` block][logging]
-to show debug-level log lines when debugging issues with Grafana Agent Flow.
+to show debug-level log lines when debugging issues with {{< param "PRODUCT_NAME" >}}.
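+For example, a minimal `logging` block that raises verbosity to debug level might look like the following sketch; the `format` value is an assumption you can adjust:
+```river
+// Show debug-level log lines while troubleshooting; switch back to "info" afterwards.
+logging {
+  level  = "debug"
+  format = "logfmt"
+}
+```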
-The location of Grafana Agent's logs is different based on how it is deployed.
+The location of {{< param "PRODUCT_NAME" >}} logs depends on how it's deployed.
Refer to the [`logging` block][logging] page to see how to find logs for your
system.
@@ -124,7 +124,7 @@ check the logs for any reported name conflict events.
- **Node stuck in terminating state**: The node attempted to gracefully shut
down and set its state to Terminating, but it has not completely gone away. Check
the clustering page to view the state of the peers and verify that the
-terminating Agent has been shut down.
+terminating {{< param "PRODUCT_ROOT_NAME" >}} has been shut down.
{{% docs/reference %}}
[logging]: "/docs/agent/ -> /docs/agent//flow/reference/config-blocks/logging.md"
diff --git a/docs/sources/flow/reference/_index.md b/docs/sources/flow/reference/_index.md
index 65ca0dac4a64..8e8f31fc645b 100644
--- a/docs/sources/flow/reference/_index.md
+++ b/docs/sources/flow/reference/_index.md
@@ -11,9 +11,8 @@ title: Grafana Agent Flow Reference
weight: 600
---
-# Grafana Agent Flow Reference
+# {{< param "PRODUCT_NAME" >}} Reference
-This section provides reference-level documentation for the various parts of
-Grafana Agent Flow:
+This section provides reference-level documentation for the various parts of {{< param "PRODUCT_NAME" >}}:
{{< section >}}
diff --git a/docs/sources/flow/reference/cli/_index.md b/docs/sources/flow/reference/cli/_index.md
index 556ee99f96d2..55fa9d9197e1 100644
--- a/docs/sources/flow/reference/cli/_index.md
+++ b/docs/sources/flow/reference/cli/_index.md
@@ -11,19 +11,19 @@ title: The Grafana Agent command-line interface
weight: 100
---
-# The Grafana Agent command-line interface
+# The {{< param "PRODUCT_ROOT_NAME" >}} command-line interface
When in Flow mode, the `grafana-agent` binary exposes a command-line interface with
subcommands to perform various operations.
-The most common subcommand is [`run`][run] which accepts a config file and
-starts Grafana Agent Flow.
+The most common subcommand is [`run`][run] which accepts a configuration file and
+starts {{< param "PRODUCT_NAME" >}}.
Available commands:
-* [`convert`][convert]: Convert a Grafana Agent configuration file.
-* [`fmt`][fmt]: Format a Grafana Agent Flow configuration file.
-* [`run`][run]: Start Grafana Agent Flow, given a configuration file.
+* [`convert`][convert]: Convert a {{< param "PRODUCT_ROOT_NAME" >}} configuration file.
+* [`fmt`][fmt]: Format a {{< param "PRODUCT_NAME" >}} configuration file.
+* [`run`][run]: Start {{< param "PRODUCT_NAME" >}}, given a configuration file.
* [`tools`][tools]: Read the WAL and provide statistical information.
* `completion`: Generate shell completion for the `grafana-agent-flow` CLI.
* `help`: Print help for supported commands.
diff --git a/docs/sources/flow/reference/cli/convert.md b/docs/sources/flow/reference/cli/convert.md
index 483f21da6d29..a38b63f7fdb1 100644
--- a/docs/sources/flow/reference/cli/convert.md
+++ b/docs/sources/flow/reference/cli/convert.md
@@ -15,7 +15,7 @@ weight: 100
# The convert command
-The `convert` command converts a supported configuration format to Grafana Agent Flow River format.
+The `convert` command converts a supported configuration format to {{< param "PRODUCT_NAME" >}} River format.
## Usage
@@ -27,16 +27,16 @@ Usage:
Replace the following:
* `FLAG`: One or more flags that define the input and output of the command.
- * `FILE_NAME`: The Grafana Agent configuration file.
+ * `FILE_NAME`: The {{< param "PRODUCT_ROOT_NAME" >}} configuration file.
-If the `FILE_NAME` argument is not provided or if the `FILE_NAME` argument is
+If the `FILE_NAME` argument isn't provided or if the `FILE_NAME` argument is
equal to `-`, `convert` converts the contents of standard input. Otherwise,
`convert` reads and converts the file from disk specified by the argument.
-There are several different flags available for the `convert` command. You can use the `--output` flag to write the contents of the converted config to a specified path. You can use the `--report` flag to generate a diagnostic report. The `--bypass-errors` flag allows you to bypass any [errors] generated during the file conversion.
+There are several different flags available for the `convert` command. You can use the `--output` flag to write the contents of the converted configuration to a specified path. You can use the `--report` flag to generate a diagnostic report. The `--bypass-errors` flag allows you to bypass any [errors] generated during the file conversion.
-The command fails if the source config has syntactically incorrect
-configuration or cannot be converted to Grafana Agent Flow River format.
+The command fails if the source configuration is syntactically incorrect or
+can't be converted to {{< param "PRODUCT_NAME" >}} River format.
The following flags are supported:
@@ -55,10 +55,10 @@ The following flags are supported:
### Defaults
-Flow Defaults are managed as follows:
-* If a provided source config value matches a Flow default value, the property is left off the Flow output.
-* If a non-provided source config value default matches a Flow default value, the property is left off the Flow output.
-* If a non-provided source config value default doesn't match a Flow default value, the Flow default value is included in the Flow output.
+{{< param "PRODUCT_NAME" >}} defaults are managed as follows:
+* If a value provided in the source configuration matches a {{< param "PRODUCT_NAME" >}} default value, the property is left off the output.
+* If a value isn't provided in the source configuration and its default matches a {{< param "PRODUCT_NAME" >}} default value, the property is left off the output.
+* If a value isn't provided in the source configuration and its default doesn't match a {{< param "PRODUCT_NAME" >}} default value, the default value is included in the output.
### Errors
@@ -70,38 +70,38 @@ where an output can still be generated. These can be bypassed using the
Using the `--source-format=prometheus` will convert the source config from
[Prometheus v2.45](https://prometheus.io/docs/prometheus/2.45/configuration/configuration/)
-to Grafana Agent Flow config.
+to {{< param "PRODUCT_NAME" >}} configuration.
This includes Prometheus features such as
-[scrape_config](https://prometheus.io/docs/prometheus/2.45/configuration/configuration/#scrape_config),
+[scrape_config](https://prometheus.io/docs/prometheus/2.45/configuration/configuration/#scrape_config),
[relabel_config](https://prometheus.io/docs/prometheus/2.45/configuration/configuration/#relabel_config),
[metric_relabel_configs](https://prometheus.io/docs/prometheus/2.45/configuration/configuration/#metric_relabel_configs),
[remote_write](https://prometheus.io/docs/prometheus/2.45/configuration/configuration/#remote_write),
-and many supported *_sd_configs. Unsupported features in a source config result
+and many supported *_sd_configs. Unsupported features in a source configuration result
in [errors].
-Refer to [Migrate from Prometheus to Grafana Agent Flow]({{< relref "../../getting-started/migrating-from-prometheus/" >}}) for a detailed migration guide.
+Refer to [Migrate from Prometheus to {{< param "PRODUCT_NAME" >}}]({{< relref "../../getting-started/migrating-from-prometheus/" >}}) for a detailed migration guide.
### Promtail
Using the `--source-format=promtail` will convert the source configuration from
[Promtail v2.8.x](/docs/loki/v2.8.x/clients/promtail/)
-to Grafana Agent Flow configuration.
+to {{< param "PRODUCT_NAME" >}} configuration.
Nearly all [Promtail features](/docs/loki/v2.8.x/clients/promtail/configuration/)
-are supported and can be converted to Grafana Agent Flow config.
+are supported and can be converted to {{< param "PRODUCT_NAME" >}} configuration.
If you have unsupported features in a source configuration, you will receive [errors] when you convert to a flow configuration. The converter will
also raise warnings for configuration options that may require your attention.
-Refer to [Migrate from Promtail to Grafana Agent Flow]({{< relref "../../getting-started/migrating-from-promtail/" >}}) for a detailed migration guide.
+Refer to [Migrate from Promtail to {{< param "PRODUCT_NAME" >}}]({{< relref "../../getting-started/migrating-from-promtail/" >}}) for a detailed migration guide.
### Static
-Using the `--source-format=static` will convert the source configuration from
-Grafana Agent [Static]({{< relref "../../../static" >}}) mode to Flow mode configuration.
+Using the `--source-format=static` will convert the source configuration from
+[Grafana Agent Static]({{< relref "../../../static" >}}) to a {{< param "PRODUCT_NAME" >}} configuration.
If you have unsupported features in a Static mode source configuration, you will receive [errors][] when you convert to a Flow mode configuration. The converter will
also raise warnings for configuration options that may require your attention.
-Refer to [Migrate Grafana Agent from Static mode to Flow mode]({{< relref "../../getting-started/migrating-from-static/" >}}) for a detailed migration guide.
\ No newline at end of file
+Refer to [Migrate from Grafana Agent Static to {{< param "PRODUCT_NAME" >}}]({{< relref "../../getting-started/migrating-from-static/" >}}) for a detailed migration guide.
\ No newline at end of file
diff --git a/docs/sources/flow/reference/cli/fmt.md b/docs/sources/flow/reference/cli/fmt.md
index 0eb7d8635636..7a266921d365 100644
--- a/docs/sources/flow/reference/cli/fmt.md
+++ b/docs/sources/flow/reference/cli/fmt.md
@@ -13,7 +13,7 @@ weight: 200
# The fmt command
-The `fmt` command formats a given Grafana Agent Flow configuration file.
+The `fmt` command formats a given {{< param "PRODUCT_NAME" >}} configuration file.
## Usage
@@ -25,7 +25,7 @@ Usage:
Replace the following:
* `FLAG`: One or more flags that define the input and output of the command.
- * `FILE_NAME`: The Grafana Agent configuration file.
+ * `FILE_NAME`: The {{< param "PRODUCT_NAME" >}} configuration file.
If the `FILE_NAME` argument is not provided or if the `FILE_NAME` argument is
equal to `-`, `fmt` formats the contents of standard input. Otherwise,
@@ -42,4 +42,4 @@ properly.
The following flags are supported:
* `--write`, `-w`: Write the formatted file back to disk when not reading from
- standard input.
\ No newline at end of file
+ standard input.
diff --git a/docs/sources/flow/reference/cli/run.md b/docs/sources/flow/reference/cli/run.md
index 7eaf1285f916..e65ae0e3d5b0 100644
--- a/docs/sources/flow/reference/cli/run.md
+++ b/docs/sources/flow/reference/cli/run.md
@@ -13,8 +13,7 @@ weight: 300
# The run command
-The `run` command runs Grafana Agent Flow in the foreground until an
-interrupt is received.
+The `run` command runs {{< param "PRODUCT_NAME" >}} in the foreground until an interrupt is received.
## Usage
@@ -26,18 +25,18 @@ Usage:
Replace the following:
* `FLAG`: One or more flags that define the input and output of the command.
- * `PATH_NAME`: Required. The Grafana Agent configuration file/directory path.
+ * `PATH_NAME`: Required. The {{< param "PRODUCT_NAME" >}} configuration file/directory path.
-If the `PATH_NAME` argument is not provided, or if the configuration path can't be loaded or
+If the `PATH_NAME` argument is not provided, or if the configuration path can't be loaded or
contains errors during the initial load, the `run` command will immediately exit and show an error message.
-If you give the `PATH_NAME` argument a directory path, the agent will find `*.river` files
+If you give the `PATH_NAME` argument a directory path, {{< param "PRODUCT_NAME" >}} will find `*.river` files
(ignoring nested directories) and load them as a single configuration source. However, component names must
be **unique** across all River files, and configuration blocks must not be repeated.
-Grafana Agent Flow will continue to run if subsequent reloads of the configuration
+{{< param "PRODUCT_NAME" >}} will continue to run if subsequent reloads of the configuration
file fail, potentially marking components as unhealthy depending on the nature
-of the failure. When this happens, Grafana Agent Flow will continue functioning
+of the failure. When this happens, {{< param "PRODUCT_NAME" >}} will continue functioning
in the last valid state.
`run` launches an HTTP server that exposes metrics about itself and its
@@ -53,7 +52,7 @@ The following flags are supported:
* `--server.http.ui-path-prefix`: Base path where the UI is exposed (default `/`).
* `--storage.path`: Base directory where components can store data (default `data-agent/`).
* `--disable-reporting`: Disable [data collection][] (default `false`).
-* `--cluster.enabled`: Start the Agent in clustered mode (default `false`).
+* `--cluster.enabled`: Start {{< param "PRODUCT_NAME" >}} in clustered mode (default `false`).
* `--cluster.node-name`: The name to use for this node (defaults to the environment's hostname).
* `--cluster.join-addresses`: Comma-separated list of addresses to join the cluster at (default `""`). Mutually exclusive with `--cluster.discover-peers`.
* `--cluster.discover-peers`: List of key-value tuples for discovering peers (default `""`). Mutually exclusive with `--cluster.join-addresses`.
@@ -74,7 +73,7 @@ The following flags are supported:
The configuration file can be reloaded from disk by either:
* Sending an HTTP POST request to the `/-/reload` endpoint.
-* Sending a `SIGHUP` signal to the Grafana Agent process.
+* Sending a `SIGHUP` signal to the {{< param "PRODUCT_NAME" >}} process.
When this happens, the [component controller][] synchronizes the set of running
components with the latest set of components specified in the configuration file.
@@ -89,7 +88,7 @@ reloading.
## Clustering (beta)
-The `--cluster.enabled` command-line argument starts Grafana Agent in
+The `--cluster.enabled` command-line argument starts {{< param "PRODUCT_ROOT_NAME" >}} in
[clustering][] mode. The rest of the `--cluster.*` command-line flags can be
used to configure how nodes discover and connect to one another.
@@ -97,16 +96,16 @@ Each cluster member’s name must be unique within the cluster. Nodes which try
to join with a conflicting name are rejected and will fall back to
bootstrapping a new cluster of their own.
-Peers communicate over HTTP/2 on the agent's built-in HTTP server. Each node
+Peers communicate over HTTP/2 on the built-in HTTP server. Each node
must be configured to accept connections on `--server.http.listen-addr` and the
address defined or inferred in `--cluster.advertise-address`.
-If the `--cluster.advertise-address` flag is not explicitly set, the agent
+If the `--cluster.advertise-address` flag isn't explicitly set, {{< param "PRODUCT_NAME" >}}
tries to infer a suitable one from `--cluster.advertise-interfaces`.
-If `--cluster.advertise-interfaces` is not explicitly set, the agent will
+If `--cluster.advertise-interfaces` isn't explicitly set, {{< param "PRODUCT_NAME" >}} will
infer one from the `eth0` and `en0` local network interfaces.
-The agent will fail to start if it can't determine the advertised address.
-Since Windows does not use the interface names `eth0` or `en0`, Windows users must explicitly pass
+{{< param "PRODUCT_NAME" >}} will fail to start if it can't determine the advertised address.
+Since Windows doesn't use the interface names `eth0` or `en0`, Windows users must explicitly pass
at least one valid network interface for `--cluster.advertise-interfaces` or a value for `--cluster.advertise-address`.
The comma-separated list of addresses provided in `--cluster.join-addresses`
@@ -145,10 +144,10 @@ The first node that is used to bootstrap a new cluster (also known as
the "seed node") can either omit the flags that specify peers to join or can
try to connect to itself.
-To join or rejoin a cluster, the agent will try to connect to a certain number of peers limited by the `--cluster.max-join-peers` flag.
+To join or rejoin a cluster, {{< param "PRODUCT_NAME" >}} will try to connect to a certain number of peers limited by the `--cluster.max-join-peers` flag.
This flag can be useful for clusters of significant sizes because connecting to a high number of peers can be an expensive operation.
To disable this behavior, set the `--cluster.max-join-peers` flag to 0.
-If the value of `--cluster.max-join-peers` is higher than the number of peers discovered, the agent will connect to all of them.
+If the value of `--cluster.max-join-peers` is higher than the number of peers discovered, {{< param "PRODUCT_NAME" >}} will connect to all of them.
The `--cluster.name` flag can be used to prevent clusters from accidentally merging.
When `--cluster.name` is provided, nodes will only join peers who share the same cluster name value.
@@ -157,36 +156,32 @@ Attempting to join a cluster with a wrong `--cluster.name` will result in a "fai
### Clustering states
-Clustered agents are in one of three states:
+Clustered {{< param "PRODUCT_ROOT_NAME" >}} instances are in one of three states:
-* **Viewer**: The agent has a read-only view of the cluster and is not
- participating in workload distribution.
+* **Viewer**: {{< param "PRODUCT_NAME" >}} has a read-only view of the cluster and isn't participating in workload distribution.
-* **Participant**: The agent is participating in workload distribution for
- components that have clustering enabled.
+* **Participant**: {{< param "PRODUCT_NAME" >}} is participating in workload distribution for components that have clustering enabled.
-* **Terminating**: The agent is shutting down and will no longer assign new
- work to itself.
+* **Terminating**: {{< param "PRODUCT_NAME" >}} is shutting down and will no longer assign new work to itself.
-Agents initially join the cluster in the viewer state and then transition to
-the participant state after the process startup completes. Agents then
-transition to the terminating state when shutting down.
+Each {{< param "PRODUCT_ROOT_NAME" >}} initially joins the cluster in the viewer state and then transitions to
+the participant state after the process startup completes. Each {{< param "PRODUCT_ROOT_NAME" >}} then
+transitions to the terminating state when shutting down.
-The current state of a clustered agent is shown on the clustering page in the
-[UI][].
+The current state of a clustered {{< param "PRODUCT_ROOT_NAME" >}} is shown on the clustering page in the [UI][].
[UI]: {{< relref "../../monitoring/debugging.md#clustering-page" >}}
## Configuration conversion (beta)
When you use the `--config.format` command-line argument with a value
-other than `flow`, Grafana Agent converts the configuration file from
+other than `flow`, {{< param "PRODUCT_ROOT_NAME" >}} converts the configuration file from
the source format to River and immediately starts running with the new
configuration. This conversion uses the converter API described in the
[grafana-agent-flow convert][] docs.
If you also use the `--config.bypass-conversion-errors` command-line argument,
-Grafana Agent will ignore any errors from the converter. Use this argument
+{{< param "PRODUCT_NAME" >}} will ignore any errors from the converter. Use this argument
with caution because the resulting conversion may not be equivalent to the
original configuration.
diff --git a/docs/sources/flow/reference/cli/tools.md b/docs/sources/flow/reference/cli/tools.md
index 5ee0409f084a..b45e7f215a23 100644
--- a/docs/sources/flow/reference/cli/tools.md
+++ b/docs/sources/flow/reference/cli/tools.md
@@ -24,7 +24,7 @@ guarantees and may change or be removed between releases.
### prometheus.remote_write sample-stats
-Usage:
+Usage:
* `AGENT_MODE=flow grafana-agent tools prometheus.remote_write sample-stats [FLAG ...] WAL_DIRECTORY`
* `grafana-agent-flow tools prometheus.remote_write sample-stats [FLAG ...] WAL_DIRECTORY`
@@ -47,7 +47,7 @@ The following flag is supported:
### prometheus.remote_write target-stats
-Usage:
+Usage:
* `AGENT_MODE=flow grafana-agent tools prometheus.remote_write target-stats --job JOB --instance INSTANCE WAL_DIRECTORY`
* `grafana-agent-flow tools prometheus.remote_write target-stats --job JOB --instance INSTANCE WAL_DIRECTORY`
diff --git a/docs/sources/flow/reference/components/_index.md b/docs/sources/flow/reference/components/_index.md
index 74d21678c179..3eafecb3c1af 100644
--- a/docs/sources/flow/reference/components/_index.md
+++ b/docs/sources/flow/reference/components/_index.md
@@ -5,15 +5,14 @@ aliases:
- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/
- /docs/grafana-cloud/send-data/agent/flow/reference/components/
canonical: https://grafana.com/docs/agent/latest/flow/reference/components/
-description: Learn about the compenets in Grafana Agent
+description: Learn about the components in Grafana Agent Flow
title: Components reference
weight: 300
---
# Components reference
-This section contains reference documentation for all recognized
-[components][].
+This section contains reference documentation for all recognized [components][].
{{< section >}}
diff --git a/docs/sources/flow/reference/components/discovery.consul.md b/docs/sources/flow/reference/components/discovery.consul.md
index 884fa1fe602f..d963db1495af 100644
--- a/docs/sources/flow/reference/components/discovery.consul.md
+++ b/docs/sources/flow/reference/components/discovery.consul.md
@@ -51,7 +51,7 @@ Name | Type | Description | Default | Required
At most one of the following can be provided:
- [`bearer_token` argument](#arguments).
- - [`bearer_token_file` argument](#arguments).
+ - [`bearer_token_file` argument](#arguments).
- [`basic_auth` block][basic_auth].
- [`authorization` block][authorization].
- [`oauth2` block][oauth2].
diff --git a/docs/sources/flow/reference/components/discovery.digitalocean.md b/docs/sources/flow/reference/components/discovery.digitalocean.md
index 18b42714b421..402424dde5d5 100644
--- a/docs/sources/flow/reference/components/discovery.digitalocean.md
+++ b/docs/sources/flow/reference/components/discovery.digitalocean.md
@@ -46,8 +46,7 @@ Exactly one of the [`bearer_token`](#arguments) and [`bearer_token_file`](#argum
[arguments]: #arguments
## Blocks
-The `discovery.digitalocean` component does not support any blocks, and is configured
-fully through arguments.
+The `discovery.digitalocean` component does not support any blocks, and is configured fully through arguments.
## Exported fields
diff --git a/docs/sources/flow/reference/components/discovery.docker.md b/docs/sources/flow/reference/components/discovery.docker.md
index 076f00f75b21..a9c0fb4f5855 100644
--- a/docs/sources/flow/reference/components/discovery.docker.md
+++ b/docs/sources/flow/reference/components/discovery.docker.md
@@ -41,7 +41,7 @@ Name | Type | Description | Default | Required
At most one of the following can be provided:
- [`bearer_token` argument](#arguments).
- - [`bearer_token_file` argument](#arguments).
+ - [`bearer_token_file` argument](#arguments).
- [`basic_auth` block][basic_auth].
- [`authorization` block][authorization].
- [`oauth2` block][oauth2].
diff --git a/docs/sources/flow/reference/components/discovery.file.md b/docs/sources/flow/reference/components/discovery.file.md
index 402406ee32fd..2abcf29b64b1 100644
--- a/docs/sources/flow/reference/components/discovery.file.md
+++ b/docs/sources/flow/reference/components/discovery.file.md
@@ -11,7 +11,7 @@ title: discovery.file
# discovery.file
-> **NOTE:** In `v0.35.0` of the Grafana Agent, the `discovery.file` component was renamed to [local.file_match][],
+> **NOTE:** In {{< param "PRODUCT_ROOT_NAME" >}} `v0.35.0`, the `discovery.file` component was renamed to [local.file_match][],
> and `discovery.file` was repurposed to discover scrape targets from one or more files.
>
>
diff --git a/docs/sources/flow/reference/components/discovery.kubernetes.md b/docs/sources/flow/reference/components/discovery.kubernetes.md
index 5b8cd870af6e..f21d1936fc68 100644
--- a/docs/sources/flow/reference/components/discovery.kubernetes.md
+++ b/docs/sources/flow/reference/components/discovery.kubernetes.md
@@ -16,7 +16,7 @@ resources. It watches cluster state, and ensures targets are continually synced
with what is currently running in your cluster.
If you supply no connection information, this component defaults to an
-in-cluster config. A kubeconfig file or manual connection settings can be used
+in-cluster configuration. A kubeconfig file or manual connection settings can be used
to override the defaults.
## Usage
@@ -44,7 +44,7 @@ Name | Type | Description | Default | Required
At most one of the following can be provided:
- [`bearer_token` argument](#arguments).
- - [`bearer_token_file` argument](#arguments).
+ - [`bearer_token_file` argument](#arguments).
- [`basic_auth` block][basic_auth].
- [`authorization` block][authorization].
- [`oauth2` block][oauth2].
@@ -279,7 +279,7 @@ omitted, all namespaces are searched.
Name | Type | Description | Default | Required
---- | ---- | ----------- | ------- | --------
-`own_namespace` | `bool` | Include the namespace the agent is running in. | | no
+`own_namespace` | `bool` | Include the namespace {{< param "PRODUCT_NAME" >}} is running in. | | no
`names` | `list(string)` | List of namespaces to search. | | no
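+For example, the following sketch limits discovery to the namespace the component runs in plus an assumed `monitoring` namespace:
+```river
+discovery.kubernetes "pods" {
+  role = "pod"
+  namespaces {
+    own_namespace = true
+    names         = ["monitoring"]
+  }
+}
+```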
### selectors block
@@ -462,7 +462,7 @@ Replace the following:
### Limit to only pods on the same node
-This example limits the search to pods on the same node as this Grafana Agent. This configuration could be useful if you are running the Agent as a DaemonSet:
+This example limits the search to pods on the same node as this {{< param "PRODUCT_ROOT_NAME" >}}. This configuration could be useful if you are running {{< param "PRODUCT_ROOT_NAME" >}} as a DaemonSet:
```river
discovery.kubernetes "k8s_pods" {
diff --git a/docs/sources/flow/reference/components/loki.source.api.md b/docs/sources/flow/reference/components/loki.source.api.md
index 966589bd64a1..f524382f8a2c 100644
--- a/docs/sources/flow/reference/components/loki.source.api.md
+++ b/docs/sources/flow/reference/components/loki.source.api.md
@@ -24,7 +24,7 @@ The HTTP API exposed is compatible with [Loki push API][loki-push-api] and the `
loki.source.api "LABEL" {
http {
listen_address = "LISTEN_ADDRESS"
- listen_port = PORT
+ listen_port = PORT
}
forward_to = RECEIVER_LIST
}
@@ -32,10 +32,10 @@ loki.source.api "LABEL" {
The component will start HTTP server on the configured port and address with the following endpoints:
-- `/loki/api/v1/push` - accepting `POST` requests compatible with [Loki push API][loki-push-api], for example, from another Grafana Agent's [`loki.write`][loki.write] component.
-- `/loki/api/v1/raw` - accepting `POST` requests with newline-delimited log lines in body. This can be used to send NDJSON or plaintext logs. This is compatible with promtail's push API endpoint - see [promtail's documentation][promtail-push-api] for more information. NOTE: when this endpoint is used, the incoming timestamps cannot be used and the `use_incoming_timestamp = true` setting will be ignored.
+- `/loki/api/v1/push` - accepting `POST` requests compatible with [Loki push API][loki-push-api], for example, from another {{< param "PRODUCT_ROOT_NAME" >}}'s [`loki.write`][loki.write] component.
+- `/loki/api/v1/raw` - accepting `POST` requests with newline-delimited log lines in body. This can be used to send NDJSON or plaintext logs. This is compatible with promtail's push API endpoint - see [promtail's documentation][promtail-push-api] for more information. NOTE: when this endpoint is used, the incoming timestamps cannot be used and the `use_incoming_timestamp = true` setting will be ignored.
- `/loki/ready` - accepting `GET` requests - can be used to confirm the server is reachable and healthy.
-- `/api/v1/push` - internally reroutes to `/loki/api/v1/push`
+- `/api/v1/push` - internally reroutes to `/loki/api/v1/push`
- `/api/v1/raw` - internally reroutes to `/loki/api/v1/raw`
@@ -45,12 +45,12 @@ The component will start HTTP server on the configured port and address with the
`loki.source.api` supports the following arguments:
- Name | Type | Description | Default | Required
---------------------------|----------------------|------------------------------------------------------------|---------|----------
- `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes
- `use_incoming_timestamp` | `bool` | Whether or not to use the timestamp received from request. | `false` | no
- `labels` | `map(string)` | The labels to associate with each received logs record. | `{}` | no
- `relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no
+Name | Type | Description | Default | Required
+-------------------------|----------------------|------------------------------------------------------------|---------|---------
+`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes
+`use_incoming_timestamp` | `bool` | Whether or not to use the timestamp received from request. | `false` | no
+`labels` | `map(string)` | The labels to associate with each received logs record. | `{}` | no
+`relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no
The `relabel_rules` field can make use of the `rules` export value from a
[`loki.relabel`][loki.relabel] component to apply one or more relabeling rules to log entries before they're forwarded to the list of receivers in `forward_to`.
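+For example, the following sketch reuses rules exported by an assumed `loki.relabel` component; the component labels, rule, and `loki.write.default` destination are placeholders:
+```river
+// Export relabel rules only; no logs are forwarded through this component.
+loki.relabel "drop_debug" {
+  forward_to = []
+  rule {
+    source_labels = ["level"]
+    regex         = "debug"
+    action        = "drop"
+  }
+}
+loki.source.api "listener" {
+  forward_to    = [loki.write.default.receiver]
+  relabel_rules = loki.relabel.drop_debug.rules
+}
+```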
@@ -61,9 +61,9 @@ The `relabel_rules` field can make use of the `rules` export value from a
The following blocks are supported inside the definition of `loki.source.api`:
- Hierarchy | Name | Description | Required
------------|----------|----------------------------------------------------|----------
- `http` | [http][] | Configures the HTTP server that receives requests. | no
+Hierarchy | Name | Description | Required
+----------|----------|----------------------------------------------------|---------
+`http` | [http][] | Configures the HTTP server that receives requests. | no
[http]: #http
diff --git a/docs/sources/flow/reference/components/loki.source.kubernetes.md b/docs/sources/flow/reference/components/loki.source.kubernetes.md
index cde01d3172bc..71baaa939633 100644
--- a/docs/sources/flow/reference/components/loki.source.kubernetes.md
+++ b/docs/sources/flow/reference/components/loki.source.kubernetes.md
@@ -21,7 +21,7 @@ Kubernetes API. It has the following benefits over `loki.source.file`:
* It works without a privileged container.
* It works without a root user.
* It works without needing access to the filesystem of the Kubernetes node.
-* It doesn't require a DaemonSet to collect logs, so one agent could collect
+* It doesn't require a DaemonSet to collect logs, so one {{< param "PRODUCT_ROOT_NAME" >}} could collect
logs for the whole cluster.
> **NOTE**: Because `loki.source.kubernetes` uses the Kubernetes API to tail
@@ -83,7 +83,7 @@ client > authorization | [authorization][] | Configure generic authorization to
client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no
client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no
client > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no
-clustering | [clustering][] | Configure the component for when the Agent is running in clustered mode. | no
+clustering | [clustering][] | Configure the component for when {{< param "PRODUCT_NAME" >}} is running in clustered mode. | no
The `>` symbol indicates deeper levels of nesting. For example, `client >
basic_auth` refers to a `basic_auth` block defined
@@ -100,7 +100,7 @@ inside a `client` block.
The `client` block configures the Kubernetes client used to tail logs from
containers. If the `client` block isn't provided, the default in-cluster
-configuration with the service account of the running Grafana Agent pod is
+configuration with the service account of the running {{< param "PRODUCT_ROOT_NAME" >}} pod is
used.
The following arguments are supported:
@@ -144,11 +144,11 @@ Name | Type | Description | Default | Required
---- | ---- | ----------- | ------- | --------
`enabled` | `bool` | Distribute log collection with other cluster nodes. | | yes
-When the agent is [using clustering][], and `enabled` is set to true, then this
+When {{< param "PRODUCT_ROOT_NAME" >}} is [using clustering][], and `enabled` is set to true, then this
`loki.source.kubernetes` component instance opts-in to participating in the
cluster to distribute the load of log collection between all cluster nodes.
-If the agent is _not_ running in clustered mode, then the block is a no-op and
+If {{< param "PRODUCT_ROOT_NAME" >}} is _not_ running in clustered mode, then the block is a no-op and
`loki.source.kubernetes` collects logs from every target it receives in its
arguments.
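+For example, the following sketch opts a component in to clustered collection; the `discovery.kubernetes.pods` targets and `loki.write.default` destination are assumptions:
+```river
+loki.source.kubernetes "pods" {
+  targets    = discovery.kubernetes.pods.targets
+  forward_to = [loki.write.default.receiver]
+  clustering {
+    enabled = true
+  }
+}
+```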
diff --git a/docs/sources/flow/reference/components/loki.source.kubernetes_events.md b/docs/sources/flow/reference/components/loki.source.kubernetes_events.md
index 9e7df1f037d9..502b6de37361 100644
--- a/docs/sources/flow/reference/components/loki.source.kubernetes_events.md
+++ b/docs/sources/flow/reference/components/loki.source.kubernetes_events.md
@@ -47,9 +47,9 @@ By default, the generated log lines will be in the `logfmt` format. Use the
`log_format` argument to change it to `json`. These formats are also names of
LogQL parsers, which can be used for processing the logs.
-> **NOTE**: When watching all namespaces, Grafana Agent must have permissions
+> **NOTE**: When watching all namespaces, {{< param "PRODUCT_NAME" >}} must have permissions
> to watch events at the cluster scope (such as using a ClusterRoleBinding). If
-> an explicit list of namespaces is provided, Grafana Agent only needs
+> an explicit list of namespaces is provided, {{< param "PRODUCT_NAME" >}} only needs
> permissions to watch events for those namespaces.
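+For example, a sketch that restricts the component to an explicit list of namespaces, so only namespace-scoped permissions are needed; the namespace names and the `loki.write.local` receiver are illustrative assumptions:
+```river
+loki.source.kubernetes_events "events" {
+  // Watch only these namespaces instead of the whole cluster.
+  namespaces = ["default", "kube-system"]
+  forward_to = [loki.write.local.receiver]
+}
+```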
Log lines generated by `loki.source.kubernetes_events` have the following
@@ -96,7 +96,7 @@ inside a `client` block.
The `client` block configures the Kubernetes client used to tail logs from
containers. If the `client` block isn't provided, the default in-cluster
-configuration with the service account of the running Grafana Agent pod is
+configuration with the service account of the running {{< param "PRODUCT_ROOT_NAME" >}} pod is
used.
The following arguments are supported:
diff --git a/docs/sources/flow/reference/components/loki.source.podlogs.md b/docs/sources/flow/reference/components/loki.source.podlogs.md
index 9fd5ad109dcd..e3029a7ea894 100644
--- a/docs/sources/flow/reference/components/loki.source.podlogs.md
+++ b/docs/sources/flow/reference/components/loki.source.podlogs.md
@@ -23,8 +23,8 @@ the discovered them.
resources rather than being fed targets from another Flow component.
> **NOTE**: Unlike `loki.source.kubernetes`, it is not possible to distribute
-> responsibility of collecting logs across multiple agents. To avoid collecting
-> duplicate logs, only one agent should be running a `loki.source.podlogs`
+> responsibility of collecting logs across multiple {{< param "PRODUCT_ROOT_NAME" >}}s. To avoid collecting
+> duplicate logs, only one {{< param "PRODUCT_ROOT_NAME" >}} should be running a `loki.source.podlogs`
> component.
> **NOTE**: Because `loki.source.podlogs` uses the Kubernetes API to tail logs,
@@ -62,7 +62,7 @@ The `PodLogs` resource describes a set of Pods to collect logs from.
> **NOTE**: `loki.source.podlogs` looks for `PodLogs` of
> `monitoring.grafana.com/v1alpha2`, and is not compatible with `PodLogs` from
-> the Grafana Agent Operator, which are version `v1alpha1`.
+> the {{< param "PRODUCT_ROOT_NAME" >}} Operator, which are version `v1alpha1`.
Field | Type | Description
----- | ---- | -----------
@@ -144,7 +144,7 @@ selector | [selector][] | Label selector for which `PodLogs` to discover. | no
selector > match_expression | [match_expression][] | Label selector expression for which `PodLogs` to discover. | no
namespace_selector | [selector][] | Label selector for which namespaces to discover `PodLogs` in. | no
namespace_selector > match_expression | [match_expression][] | Label selector expression for which namespaces to discover `PodLogs` in. | no
-clustering | [clustering][] | Configure the component for when the Agent is running in clustered mode. | no
+clustering | [clustering][] | Configure the component for when {{< param "PRODUCT_ROOT_NAME" >}} is running in clustered mode. | no
The `>` symbol indicates deeper levels of nesting. For example, `client >
basic_auth` refers to a `basic_auth` block defined
@@ -163,7 +163,7 @@ inside a `client` block.
The `client` block configures the Kubernetes client used to tail logs from
containers. If the `client` block isn't provided, the default in-cluster
-configuration with the service account of the running Grafana Agent pod is
+configuration with the service account of the running {{< param "PRODUCT_ROOT_NAME" >}} pod is
used.
The following arguments are supported:
@@ -242,11 +242,11 @@ Name | Type | Description | Default | Required
---- | ---- | ----------- | ------- | --------
`enabled` | `bool` | Distribute log collection with other cluster nodes. | | yes
-When the agent is [using clustering][], and `enabled` is set to true, then this
+When {{< param "PRODUCT_NAME" >}} is [using clustering][], and `enabled` is set to true, then this
`loki.source.podlogs` component instance opts-in to participating in the
cluster to distribute the load of log collection between all cluster nodes.
-If the agent is _not_ running in clustered mode, then the block is a no-op and
+If {{< param "PRODUCT_NAME" >}} is _not_ running in clustered mode, then the block is a no-op and
`loki.source.podlogs` collects logs based on every PodLogs resource discovered.
[using clustering]: {{< relref "../../concepts/clustering.md" >}}
diff --git a/docs/sources/flow/reference/components/loki.write.md b/docs/sources/flow/reference/components/loki.write.md
index 4dd21097b720..3c561e7a29a4 100644
--- a/docs/sources/flow/reference/components/loki.write.md
+++ b/docs/sources/flow/reference/components/loki.write.md
@@ -155,7 +155,7 @@ following two mechanisms:
`min_read_frequency` and `max_read_frequency`.
The WAL is located inside a component-specific directory relative to the
-storage path Grafana Agent is configured to use. See the
+storage path {{< param "PRODUCT_NAME" >}} is configured to use. See the
[`agent run` documentation][run] for how to change the storage path.
The following arguments are supported:
diff --git a/docs/sources/flow/reference/components/mimir.rules.kubernetes.md b/docs/sources/flow/reference/components/mimir.rules.kubernetes.md
index 88bc56acc751..59ee56cc2f6a 100644
--- a/docs/sources/flow/reference/components/mimir.rules.kubernetes.md
+++ b/docs/sources/flow/reference/components/mimir.rules.kubernetes.md
@@ -47,18 +47,18 @@ mimir.rules.kubernetes "LABEL" {
`mimir.rules.kubernetes` supports the following arguments:
-Name | Type | Description | Default | Required
--------------------------|------------|----------------------------------------------------------|---------|---------
-`address` | `string` | URL of the Mimir ruler. | | yes
-`tenant_id` | `string` | Mimir tenant ID. | | no
-`use_legacy_routes` | `bool` | Whether to use deprecated ruler API endpoints. | false | no
-`sync_interval` | `duration` | Amount of time between reconciliations with Mimir. | "30s" | no
-`mimir_namespace_prefix` | `string` | Prefix used to differentiate multiple agent deployments. | "agent" | no
-`bearer_token` | `secret` | Bearer token to authenticate with. | | no
-`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no
-`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no
-`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no
-`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no
+Name | Type | Description | Default | Required
+-------------------------|------------|---------------------------------------------------------------------------------|---------|---------
+`address` | `string` | URL of the Mimir ruler. | | yes
+`tenant_id` | `string` | Mimir tenant ID. | | no
+`use_legacy_routes` | `bool` | Whether to use deprecated ruler API endpoints. | false | no
+`sync_interval` | `duration` | Amount of time between reconciliations with Mimir. | "30s" | no
+`mimir_namespace_prefix` | `string` | Prefix used to differentiate multiple {{< param "PRODUCT_NAME" >}} deployments. | "agent" | no
+`bearer_token` | `secret` | Bearer token to authenticate with. | | no
+`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no
+`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no
+`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no
+`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no
At most one of the following can be provided:
- [`bearer_token` argument](#arguments).
@@ -78,7 +78,7 @@ differently. Updates are processed as events from the Kubernetes API server
according to the informer pattern.
The `mimir_namespace_prefix` argument can be used to separate the rules managed
-by multiple agent deployments across your infrastructure. It should be set to a
+by multiple {{< param "PRODUCT_NAME" >}} deployments across your infrastructure. It should be set to a
unique value for each deployment.
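+As an illustrative sketch, two deployments could be kept separate like this; the ruler address, tenant ID, and prefix values are assumptions, not required values:
+```river
+mimir.rules.kubernetes "staging" {
+  address                = "http://mimir-ruler.monitoring.svc:8080"
+  tenant_id              = "staging"
+  // Use a prefix unique to this deployment so its rule namespaces
+  // don't collide with rules managed by other deployments.
+  mimir_namespace_prefix = "agent-staging"
+}
+```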
## Blocks
diff --git a/docs/sources/flow/reference/components/module.file.md b/docs/sources/flow/reference/components/module.file.md
index 7e976cb5d861..bc7839074396 100644
--- a/docs/sources/flow/reference/components/module.file.md
+++ b/docs/sources/flow/reference/components/module.file.md
@@ -15,7 +15,7 @@ title: module.file
{{< docs/shared lookup="flow/stability/beta.md" source="agent" version="" >}}
-`module.file` is a *module loader* component. A module loader is a Grafana Agent Flow
+`module.file` is a *module loader* component. A module loader is a {{< param "PRODUCT_NAME" >}}
component which retrieves a [module][] and runs the components defined inside of it.
`module.file` simplifies the configurations for modules loaded from a file by embedding
diff --git a/docs/sources/flow/reference/components/module.git.md b/docs/sources/flow/reference/components/module.git.md
index 21e9ad885486..5085eead799c 100644
--- a/docs/sources/flow/reference/components/module.git.md
+++ b/docs/sources/flow/reference/components/module.git.md
@@ -15,7 +15,7 @@ title: module.git
{{< docs/shared lookup="flow/stability/beta.md" source="agent" version="" >}}
-`module.git` is a *module loader* component. A module loader is a Grafana Agent Flow
+`module.git` is a *module loader* component. A module loader is a {{< param "PRODUCT_NAME" >}}
component which retrieves a [module][] and runs the components defined inside of it.
`module.git` retrieves a module source from a file in a Git repository.
diff --git a/docs/sources/flow/reference/components/module.string.md b/docs/sources/flow/reference/components/module.string.md
index 497c320ceae8..aaaf688356f1 100644
--- a/docs/sources/flow/reference/components/module.string.md
+++ b/docs/sources/flow/reference/components/module.string.md
@@ -15,7 +15,7 @@ title: module.string
{{< docs/shared lookup="flow/stability/beta.md" source="agent" version="" >}}
-`module.string` is a *module loader* component. A module loader is a Grafana Agent Flow
+`module.string` is a *module loader* component. A module loader is a {{< param "PRODUCT_NAME" >}}
component which retrieves a [module][] and runs the components defined inside of it.
[module]: {{< relref "../../concepts/modules.md" >}}
diff --git a/docs/sources/flow/reference/components/otelcol.processor.discovery.md b/docs/sources/flow/reference/components/otelcol.processor.discovery.md
index cbeb805c862b..1c288eaf654f 100644
--- a/docs/sources/flow/reference/components/otelcol.processor.discovery.md
+++ b/docs/sources/flow/reference/components/otelcol.processor.discovery.md
@@ -12,9 +12,9 @@ title: otelcol.processor.discovery
# otelcol.processor.discovery
`otelcol.processor.discovery` accepts traces telemetry data from other `otelcol`
-components. It can be paired with `discovery.*` components, which supply a list
+components. It can be paired with `discovery.*` components, which supply a list
of labels for each discovered target.
-`otelcol.processor.discovery` adds resource attributes to spans which have a hostname
+`otelcol.processor.discovery` adds resource attributes to spans which have a hostname
matching the one in the `__address__` label provided by the `discovery.*` component.
{{% admonition type="note" %}}
@@ -26,22 +26,22 @@ Multiple `otelcol.processor.discovery` components can be specified by giving the
different labels.
{{% admonition type="note" %}}
-It can be difficult to follow [OpenTelemetry semantic conventions][OTEL sem conv] when
+It can be difficult to follow [OpenTelemetry semantic conventions][OTEL sem conv] when
adding resource attributes via `otelcol.processor.discovery`:
-* `discovery.relabel` and most `discovery.*` processes such as `discovery.kubernetes`
+* `discovery.relabel` and most `discovery.*` processes such as `discovery.kubernetes`
can only emit [Prometheus-compatible labels][Prometheus data model].
-* Prometheus labels use underscores (`_`) in labels names, whereas
+* Prometheus labels use underscores (`_`) in labels names, whereas
[OpenTelemetry semantic conventions][OTEL sem conv] use dots (`.`).
* Although `otelcol.processor.discovery` is able to work with non-Prometheus labels
- such as ones containing dots, the fact that `discovery.*` components are generally
- only compatible with Prometheus naming conventions makes it hard to follow OpenTelemetry
+ such as ones containing dots, the fact that `discovery.*` components are generally
+ only compatible with Prometheus naming conventions makes it hard to follow OpenTelemetry
semantic conventions in `otelcol.processor.discovery`.
-If your use case is to add resource attributes which contain Kubernetes metadata,
+If your use case is to add resource attributes which contain Kubernetes metadata,
consider using `otelcol.processor.k8sattributes` instead.
------
-The main use case for `otelcol.processor.discovery` is for users who migrate to Grafana Agent Flow mode
+The main use case for `otelcol.processor.discovery` is for users who migrate to {{< param "PRODUCT_NAME" >}}
from Static mode's `prom_sd_operation_type`/`prom_sd_pod_associations` [configuration options][Traces].
[Prometheus data model]: https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels
diff --git a/docs/sources/flow/reference/components/otelcol.processor.k8sattributes.md b/docs/sources/flow/reference/components/otelcol.processor.k8sattributes.md
index 7b323ddaeeea..490b5a99e3cf 100644
--- a/docs/sources/flow/reference/components/otelcol.processor.k8sattributes.md
+++ b/docs/sources/flow/reference/components/otelcol.processor.k8sattributes.md
@@ -16,7 +16,7 @@ components and adds Kubernetes metadata to the resource attributes of spans, log
{{% admonition type="note" %}}
`otelcol.processor.k8sattributes` is a wrapper over the upstream OpenTelemetry
-Collector `k8sattributes` processor. If necessary, bug reports or feature requests
+Collector `k8sattributes` processor. If necessary, bug reports or feature requests
will be redirected to the upstream repository.
{{% /admonition %}}
@@ -54,12 +54,12 @@ Setting `passthrough` to `true` enables the "passthrough mode" of `otelcol.proce
* Only a `k8s.pod.ip` resource attribute will be added.
* No other metadata will be added.
* The Kubernetes API will not be accessed.
-* To correctly detect the pod IPs, the Agent must receive spans directly from services.
+* To correctly detect the pod IPs, {{< param "PRODUCT_ROOT_NAME" >}} must receive spans directly from services.
* The `passthrough` setting is useful when configuring the Agent as a Kubernetes Deployment.
-An Agent running as a Deployment cannot detect the IP addresses of pods generating telemetry
-data without any of the well-known IP attributes. If the Deployment Agent receives telemetry from
-Agents deployed as DaemonSet, then some of those attributes might be missing. As a workaround,
-you can configure the DaemonSet Agents with `passthrough` set to `true`.
+A {{< param "PRODUCT_ROOT_NAME" >}} running as a Deployment cannot detect the IP addresses of pods generating telemetry
+data without any of the well-known IP attributes. If the Deployment {{< param "PRODUCT_ROOT_NAME" >}} receives telemetry from
+{{< param "PRODUCT_ROOT_NAME" >}}s deployed as DaemonSet, then some of those attributes might be missing. As a workaround,
+you can configure the DaemonSet {{< param "PRODUCT_ROOT_NAME" >}}s with `passthrough` set to `true`.
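+A minimal sketch of enabling passthrough mode on a DaemonSet-deployed instance might look like the following; the exporter referenced in the `output` block is an assumption made for illustration:
+```river
+otelcol.processor.k8sattributes "passthrough" {
+  // Only attach the k8s.pod.ip resource attribute and skip Kubernetes API lookups.
+  passthrough = true
+  output {
+    traces = [otelcol.exporter.otlp.default.input]
+  }
+}
+```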
## Blocks
diff --git a/docs/sources/flow/reference/components/otelcol.processor.tail_sampling.md b/docs/sources/flow/reference/components/otelcol.processor.tail_sampling.md
index 990c49b322b1..bdecd66b107b 100644
--- a/docs/sources/flow/reference/components/otelcol.processor.tail_sampling.md
+++ b/docs/sources/flow/reference/components/otelcol.processor.tail_sampling.md
@@ -333,7 +333,7 @@ information.
## Example
-This example batches trace data from Grafana Agent before sending it to
+This example batches trace data from {{< param "PRODUCT_NAME" >}} before sending it to
[otelcol.exporter.otlp][] for further processing. This example shows an impractical number of policies for the purpose of demonstrating how to set up each type.
```river
diff --git a/docs/sources/flow/reference/components/otelcol.receiver.prometheus.md b/docs/sources/flow/reference/components/otelcol.receiver.prometheus.md
index 25bf3e9f4497..a29c018154f3 100644
--- a/docs/sources/flow/reference/components/otelcol.receiver.prometheus.md
+++ b/docs/sources/flow/reference/components/otelcol.receiver.prometheus.md
@@ -81,7 +81,7 @@ endpoint:
```river
prometheus.scrape "default" {
- // Collect metrics from Grafana Agent's default HTTP listen address.
+ // Collect metrics from the default HTTP listen address.
targets = [{"__address__" = "127.0.0.1:12345"}]
forward_to = [otelcol.receiver.prometheus.default.receiver]
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.azure.md b/docs/sources/flow/reference/components/prometheus.exporter.azure.md
index 7abc09666344..594b9b96f34a 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.azure.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.azure.md
@@ -19,9 +19,9 @@ the Egress metric for BlobService would be exported as `azure_microsoft_storage_
## Authentication
-Grafana agent must be running in an environment with access to Azure. The exporter uses the Azure SDK for go and supports [authentication](https://learn.microsoft.com/en-us/azure/developer/go/azure-sdk-authentication?tabs=bash#2-authenticate-with-azure).
+{{< param "PRODUCT_NAME" >}} must be running in an environment with access to Azure. The exporter uses the Azure SDK for go and supports [authentication](https://learn.microsoft.com/en-us/azure/developer/go/azure-sdk-authentication?tabs=bash#2-authenticate-with-azure).
-The account used by Grafana Agent needs:
+The account used by {{< param "PRODUCT_NAME" >}} needs:
- [Read access to the resources that will be queried by Resource Graph](https://learn.microsoft.com/en-us/azure/governance/resource-graph/overview#permissions-in-azure-resource-graph)
- Permissions to call the [Microsoft.Insights Metrics API](https://learn.microsoft.com/en-us/rest/api/monitor/metrics/list) which should be the `Microsoft.Insights/Metrics/Read` permission
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.cloudwatch.md b/docs/sources/flow/reference/components/prometheus.exporter.cloudwatch.md
index 6fc15b290106..9d6c7a46256a 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.cloudwatch.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.cloudwatch.md
@@ -24,7 +24,7 @@ two kinds of jobs: [discovery][] and [static][].
## Authentication
-The agent must be running in an environment with access to AWS. The exporter uses
+{{< param "PRODUCT_NAME" >}} must be running in an environment with access to AWS. The exporter uses
the [AWS SDK for Go](https://aws.github.io/aws-sdk-go-v2/docs/getting-started/) and
provides authentication
via [AWS's default credential chain](https://aws.github.io/aws-sdk-go-v2/docs/configuring-sdk/#specifying-credentials).
@@ -137,19 +137,18 @@ Omitted fields take their default values.
You can use the following blocks in`prometheus.exporter.cloudwatch` to configure collector-specific options:
-| Hierarchy | Name | Description | Required |
-| ------------------ | ---------------------- | --------------------------------------------------------------------------------------------------------------------------------------- | -------- |
-| discovery | [discovery][] | Configures a discovery job. Multiple jobs can be configured. | no\* |
-| discovery > role | [role][] | Configures the IAM roles the job should assume to scrape metrics. Defaults to the role configured in the environment the agent runs on. | no |
-| discovery > metric | [metric][] | Configures the list of metrics the job should scrape. Multiple metrics can be defined inside one job. | yes |
-| static | [static][] | Configures a static job. Multiple jobs can be configured. | no\* |
-| static > role | [role][] | Configures the IAM roles the job should assume to scrape metrics. Defaults to the role configured in the environment the agent runs on. | no |
-| static > metric | [metric][] | Configures the list of metrics the job should scrape. Multiple metrics can be defined inside one job. | yes |
-| decoupled_scraping | [decoupled_scraping][] | Configures the decoupled scraping feature to retrieve metrics on a schedule and return the cached metrics. | no |
+| Hierarchy | Name | Description | Required |
+|--------------------|------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|
+| discovery | [discovery][] | Configures a discovery job. Multiple jobs can be configured. | no\* |
+| discovery > role | [role][] | Configures the IAM roles the job should assume to scrape metrics. Defaults to the role configured in the environment {{< param "PRODUCT_NAME" >}} runs on. | no |
+| discovery > metric | [metric][] | Configures the list of metrics the job should scrape. Multiple metrics can be defined inside one job. | yes |
+| static | [static][] | Configures a static job. Multiple jobs can be configured. | no\* |
+| static > role | [role][] | Configures the IAM roles the job should assume to scrape metrics. Defaults to the role configured in the environment {{< param "PRODUCT_NAME" >}} runs on. | no |
+| static > metric | [metric][] | Configures the list of metrics the job should scrape. Multiple metrics can be defined inside one job. | yes |
+| decoupled_scraping | [decoupled_scraping][] | Configures the decoupled scraping feature to retrieve metrics on a schedule and return the cached metrics. | no |
{{% admonition type="note" %}}
-The `static` and `discovery` blocks are marked as not required, but you must configure at least one static or discovery
-job.
+The `static` and `discovery` blocks are marked as not required, but you must configure at least one static or discovery job.
{{% /admonition %}}
[discovery]: #discovery-block
@@ -162,10 +161,8 @@ job.
The `discovery` block allows the component to scrape CloudWatch metrics with only the AWS service and a list of metrics
under that service/namespace.
-The agent will find AWS resources in the specified service for which to scrape these metrics, label them appropriately,
-and
-export them to Prometheus. For example, if we wanted to scrape CPU utilization and network traffic metrics from all AWS
-EC2 instances:
+{{< param "PRODUCT_NAME" >}} will find AWS resources in the specified service for which to scrape these metrics, label them appropriately,
+and export them to Prometheus. For example, if we wanted to scrape CPU utilization and network traffic metrics from all AWS EC2 instances:
```river
prometheus.exporter.cloudwatch "discover_instances" {
@@ -281,11 +278,9 @@ on how to explore metrics, to easily pick the ones you need.
#### period and length
-`period` controls primarily the width of the time bucket used for aggregating metrics collected from
-CloudWatch. `length`
-controls how far back in time CloudWatch metrics are considered during each agent scrape. If both settings are
-configured,
-the time parameters when calling CloudWatch APIs works as follows:
+`period` primarily controls the width of the time bucket used for aggregating metrics collected from CloudWatch. `length`
+controls how far back in time CloudWatch metrics are considered during each {{< param "PRODUCT_ROOT_NAME" >}} scrape.
+If both settings are configured, the time parameters used when calling the CloudWatch APIs work as follows:
![](https://grafana.com/media/docs/agent/cloudwatch-period-and-length-time-model-2.png)
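+For instance, here is a hedged sketch of a `metric` block combining both settings inside a discovery job; the region, namespace, metric name, statistic, and durations are illustrative assumptions:
+```river
+prometheus.exporter.cloudwatch "example" {
+  sts_region = "us-east-1"
+  discovery {
+    type    = "AWS/EC2"
+    regions = ["us-east-1"]
+    metric {
+      name       = "CPUUtilization"
+      statistics = ["Average"]
+      // Aggregate samples into 5-minute buckets...
+      period     = "5m"
+      // ...while looking back 10 minutes on every scrape.
+      length     = "10m"
+    }
+  }
+}
+```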
@@ -318,7 +313,7 @@ that corresponds to the credentials configured in the environment will be used.
Multiple roles can be useful when scraping metrics from different AWS accounts with a single pair of credentials. In
this case, a different role
-is configured for the agent to assume before calling AWS APIs. Therefore, the credentials configured in the system need
+is configured for {{< param "PRODUCT_ROOT_NAME" >}} to assume before calling AWS APIs. Therefore, the credentials configured in the system need
permission to assume the target role.
See [Granting a user permissions to switch roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_permissions-to-switch.html)
in the AWS IAM documentation for more information about how to configure this.
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.consul.md b/docs/sources/flow/reference/components/prometheus.exporter.consul.md
index f8344b3a1b69..667d30241e39 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.consul.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.consul.md
@@ -28,7 +28,7 @@ All arguments are optional. Omitted fields take their default values.
| Name | Type | Description | Default | Required |
| -------------------------- | ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------- | -------- |
-| `server` | `string` | Address (host and port) of the Consul instance we should connect to. This could be a local agent (localhost:8500, for instance), or the address of a Consul server. | `http://localhost:8500` | no |
+| `server` | `string` | Address (host and port) of the Consul instance we should connect to. This could be a local {{< param "PRODUCT_ROOT_NAME" >}} (localhost:8500, for instance), or the address of a Consul server. | `http://localhost:8500` | no |
| `ca_file` | `string` | File path to a PEM-encoded certificate authority used to validate the authenticity of a server certificate. | | no |
| `cert_file` | `string` | File path to a PEM-encoded certificate used with the private key to verify the exporter's authenticity. | | no |
| `key_file` | `string` | File path to a PEM-encoded private key used with the certificate to verify the exporter's authenticity. | | no |
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.gcp.md b/docs/sources/flow/reference/components/prometheus.exporter.gcp.md
index 1d76b646f518..18219a02434a 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.gcp.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.gcp.md
@@ -12,7 +12,8 @@ title: prometheus.exporter.gcp
# prometheus.exporter.gcp
The `prometheus.exporter.gcp` component embeds [`stackdriver_exporter`](https://github.com/prometheus-community/stackdriver_exporter).
-It lets you collect [GCP Cloud Monitoring (formerly stackdriver)](https://cloud.google.com/monitoring/docs), translate them to prometheus-compatible format and remote write. The component supports all metrics available via [GCP's monitoring API](https://cloud.google.com/monitoring/api/metrics_gcp).
+It lets you collect [GCP Cloud Monitoring (formerly stackdriver)](https://cloud.google.com/monitoring/docs) metrics, translate them to a Prometheus-compatible format, and remote write them.
+The component supports all metrics available via [GCP's monitoring API](https://cloud.google.com/monitoring/api/metrics_gcp).
Metric names follow the template `stackdriver___`.
@@ -30,10 +31,10 @@ These attributes result in a final metric name of:
## Authentication
-Grafana Agent must be running in an environment with access to the GCP project it is scraping. The exporter
+{{< param "PRODUCT_ROOT_NAME" >}} must be running in an environment with access to the GCP project it is scraping. The exporter
uses the Google Golang Client Library, which offers a variety of ways to [provide credentials](https://developers.google.com/identity/protocols/application-default-credentials). Choose the option that works best for you.
-After deciding how Agent will obtain credentials, ensure the account is set up with the IAM role `roles/monitoring.viewer`.
+After deciding how {{< param "PRODUCT_ROOT_NAME" >}} will obtain credentials, ensure the account is set up with the IAM role `roles/monitoring.viewer`.
Since the exporter gathers all of its data from [GCP monitoring APIs](https://cloud.google.com/monitoring/api/v3), this is the only permission needed.
## Usage
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.mongodb.md b/docs/sources/flow/reference/components/prometheus.exporter.mongodb.md
index bd3b03ed04f4..a1c103d3907b 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.mongodb.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.mongodb.md
@@ -14,11 +14,10 @@ title: prometheus.exporter.mongodb
The `prometheus.exporter.mongodb` component embeds percona's [`mongodb_exporter`](https://github.com/percona/mongodb_exporter).
{{% admonition type="note" %}}
-For this integration to work properly, you must have connect each node of your MongoDB cluster to an agent instance.
-That's because this exporter does not collect metrics from multiple nodes.
+This exporter doesn't collect metrics from multiple nodes. For this integration to work properly, you must connect each node of your MongoDB cluster to a {{< param "PRODUCT_NAME" >}} instance.
{{% /admonition %}}
-We strongly recommend configuring a separate user for the Grafana Agent, giving it only the strictly mandatory security privileges necessary for monitoring your node.
+We strongly recommend configuring a separate user for {{< param "PRODUCT_NAME" >}}, giving it only the strictly mandatory security privileges necessary for monitoring your node.
Refer to the [Percona documentation](https://github.com/percona/mongodb_exporter#permissions) for more information.
## Usage
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.windows.md b/docs/sources/flow/reference/components/prometheus.exporter.windows.md
index f0fab521a83e..b874fa19b0fe 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.windows.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.windows.md
@@ -47,11 +47,11 @@ The following blocks are supported inside the definition of
`prometheus.exporter.windows` to configure collector-specific options:
Hierarchy | Name | Description | Required
----------------|--------------------|------------------------------------------|----------
-dfsr | [dfsr][] | Configures the dfsr collector. | no
+---------------|--------------------|------------------------------------------|---------
+dfsr | [dfsr][] | Configures the dfsr collector. | no
exchange | [exchange][] | Configures the exchange collector. | no
iis | [iis][] | Configures the iis collector. | no
-logical_disk | [logical_disk][] | Configures the logical_disk collector. | no
+logical_disk | [logical_disk][] | Configures the logical_disk collector. | no
msmq | [msmq][] | Configures the msmq collector. | no
mssql | [mssql][] | Configures the mssql collector. | no
network | [network][] | Configures the network collector. | no
@@ -272,7 +272,7 @@ Name | Description | Enabled by default
See the linked documentation on each collector for more information on reported metrics, configuration settings and usage examples.
{{% admonition type="caution" %}}
-Certain collectors will cause Grafana Agent to crash if those collectors are used and the required infrastructure is not installed.
+Certain collectors will cause {{< param "PRODUCT_ROOT_NAME" >}} to crash if those collectors are used and the required infrastructure is not installed.
These include but are not limited to mscluster_*, vmware, nps, dns, msmq, teradici_pcoip, ad, hyperv, and scheduled_task.
{{% /admonition %}}
diff --git a/docs/sources/flow/reference/components/prometheus.operator.podmonitors.md b/docs/sources/flow/reference/components/prometheus.operator.podmonitors.md
index 2bdf486982fd..7a31f3aab668 100644
--- a/docs/sources/flow/reference/components/prometheus.operator.podmonitors.md
+++ b/docs/sources/flow/reference/components/prometheus.operator.podmonitors.md
@@ -21,7 +21,7 @@ title: prometheus.operator.podmonitors
2. Discover Pods in your cluster that match those PodMonitors.
3. Scrape metrics from those Pods, and forward them to a receiver.
-The default configuration assumes the agent is running inside a Kubernetes cluster, and uses the in-cluster config to access the Kubernetes API. It can be run from outside the cluster by supplying connection info in the `client` block, but network level access to pods is required to scrape metrics from them.
+The default configuration assumes {{< param "PRODUCT_NAME" >}} is running inside a Kubernetes cluster, and uses the in-cluster configuration to access the Kubernetes API. It can be run from outside the cluster by supplying connection info in the `client` block, but network level access to pods is required to scrape metrics from them.
PodMonitors may reference secrets for authenticating to targets to scrape them. In these cases, the secrets are loaded and refreshed only when the PodMonitor is updated or when this component refreshes its' internal state, which happens on a 5-minute refresh cycle.
@@ -58,7 +58,7 @@ rule | [rule][] | Relabeling rules to apply to discovered targets. | no
scrape | [scrape][] | Default scrape configuration to apply to discovered targets. | no
selector | [selector][] | Label selector for which PodMonitors to discover. | no
selector > match_expression | [match_expression][] | Label selector expression for which PodMonitors to discover. | no
-clustering | [clustering][] | Configure the component for when the Agent is running in clustered mode. | no
+clustering | [clustering][] | Configure the component for when {{< param "PRODUCT_ROOT_NAME" >}} is running in clustered mode. | no
The `>` symbol indicates deeper levels of nesting. For example, `client >
basic_auth` refers to a `basic_auth` block defined
@@ -78,7 +78,7 @@ inside a `client` block.
### client block
The `client` block configures the Kubernetes client used to discover PodMonitors. If the `client` block isn't provided, the default in-cluster
-configuration with the service account of the running Grafana Agent pod is
+configuration with the service account of the running {{< param "PRODUCT_ROOT_NAME" >}} pod is
used.
The following arguments are supported:
@@ -163,7 +163,7 @@ Name | Type | Description | Default | Required
---- | ---- | ----------- | ------- | --------
`enabled` | `bool` | Enables sharing targets with other cluster nodes. | `false` | yes
-When the agent is [using clustering][], and `enabled` is set to true,
+When {{< param "PRODUCT_ROOT_NAME" >}} is [using clustering][], and `enabled` is set to true,
then this component instance opts-in to participating in
the cluster to distribute scrape load between all cluster nodes.
@@ -182,7 +182,7 @@ sharding where _all_ nodes have to be re-distributed, as only 1/N of the
target's ownership is transferred, but is eventually consistent (rather than
fully consistent like hashmod sharding is).
-If the agent is _not_ running in clustered mode, then the block is a no-op, and
+If {{< param "PRODUCT_ROOT_NAME" >}} is _not_ running in clustered mode, then the block is a no-op, and
`prometheus.operator.podmonitors` scrapes every target it receives in its arguments.
[using clustering]: {{< relref "../../concepts/clustering.md" >}}
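+A hedged sketch of opting in to clustered scraping follows; the `prometheus.remote_write.staging` receiver is an assumption made for illustration:
+```river
+prometheus.operator.podmonitors "pods" {
+  forward_to = [prometheus.remote_write.staging.receiver]
+  // Share the discovered targets with other cluster nodes.
+  clustering {
+    enabled = true
+  }
+}
+```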
@@ -244,7 +244,7 @@ prometheus.operator.podmonitors "pods" {
}
```
-This example will apply additional relabel rules to discovered targets to filter by hostname. This may be useful if running the agent as a DaemonSet.
+This example will apply additional relabel rules to discovered targets to filter by hostname. This may be useful if running {{< param "PRODUCT_ROOT_NAME" >}} as a DaemonSet.
```river
prometheus.operator.podmonitors "pods" {
diff --git a/docs/sources/flow/reference/components/prometheus.operator.probes.md b/docs/sources/flow/reference/components/prometheus.operator.probes.md
index 693ae045d0f7..b77258a10fb7 100644
--- a/docs/sources/flow/reference/components/prometheus.operator.probes.md
+++ b/docs/sources/flow/reference/components/prometheus.operator.probes.md
@@ -15,15 +15,18 @@ title: prometheus.operator.probes
{{< docs/shared lookup="flow/stability/beta.md" source="agent" version="" >}}
-`prometheus.operator.probes` discovers [Probe](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.Probe) resources in your Kubernetes cluster and scrapes the targets they reference. This component performs three main functions:
+`prometheus.operator.probes` discovers [Probe](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.Probe) resources in your Kubernetes cluster and scrapes the targets they reference.
+This component performs three main functions:
1. Discover Probe resources from your Kubernetes cluster.
-2. Discover targets or ingresses that match those Probes.
-3. Scrape metrics from those endpoints, and forward them to a receiver.
+1. Discover targets or ingresses that match those Probes.
+1. Scrape metrics from those endpoints, and forward them to a receiver.
-The default configuration assumes the agent is running inside a Kubernetes cluster, and uses the in-cluster config to access the Kubernetes API. It can be run from outside the cluster by supplying connection info in the `client` block, but network level access to pods is required to scrape metrics from them.
+The default configuration assumes {{< param "PRODUCT_NAME" >}} is running inside a Kubernetes cluster, and uses the in-cluster configuration to access the Kubernetes API.
+It can be run from outside the cluster by supplying connection info in the `client` block, but network level access to pods is required to scrape metrics from them.
-Probes may reference secrets for authenticating to targets to scrape them. In these cases, the secrets are loaded and refreshed only when the Probe is updated or when this component refreshes its' internal state, which happens on a 5-minute refresh cycle.
+Probes may reference secrets for authenticating to targets to scrape them.
+In these cases, the secrets are loaded and refreshed only when the Probe is updated or when this component refreshes its internal state, which happens on a 5-minute refresh cycle.
## Usage
@@ -78,8 +81,7 @@ inside a `client` block.
### client block
The `client` block configures the Kubernetes client used to discover Probes. If the `client` block isn't provided, the default in-cluster
-configuration with the service account of the running Grafana Agent pod is
-used.
+configuration with the service account of the running {{< param "PRODUCT_ROOT_NAME" >}} pod is used.
The following arguments are supported:
@@ -163,7 +165,7 @@ Name | Type | Description | Default | Required
---- | ---- | ----------- | ------- | --------
`enabled` | `bool` | Enables sharing targets with other cluster nodes. | `false` | yes
-When the agent is running in [clustered mode][], and `enabled` is set to true,
+When {{< param "PRODUCT_NAME" >}} is running in [clustered mode][], and `enabled` is set to true,
then this component instance opts-in to participating in
the cluster to distribute scrape load between all cluster nodes.
@@ -182,7 +184,7 @@ sharding where _all_ nodes have to be re-distributed, as only 1/N of the
target's ownership is transferred, but is eventually consistent (rather than
fully consistent like hashmod sharding is).
-If the agent is _not_ running in clustered mode, then the block is a no-op, and
+If {{< param "PRODUCT_NAME" >}} is _not_ running in clustered mode, then the block is a no-op, and
`prometheus.operator.probes` scrapes every target it receives in its arguments.
[clustered mode]: {{< relref "../cli/run.md#clustering-beta" >}}
@@ -244,7 +246,7 @@ prometheus.operator.probes "pods" {
}
```
-This example will apply additional relabel rules to discovered targets to filter by hostname. This may be useful if running the agent as a DaemonSet.
+This example will apply additional relabel rules to discovered targets to filter by hostname. This may be useful if running {{< param "PRODUCT_NAME" >}} as a DaemonSet.
```river
prometheus.operator.probes "probes" {
diff --git a/docs/sources/flow/reference/components/prometheus.operator.servicemonitors.md b/docs/sources/flow/reference/components/prometheus.operator.servicemonitors.md
index 362abb38d90f..6e2f9cf5ebaa 100644
--- a/docs/sources/flow/reference/components/prometheus.operator.servicemonitors.md
+++ b/docs/sources/flow/reference/components/prometheus.operator.servicemonitors.md
@@ -18,12 +18,14 @@ title: prometheus.operator.servicemonitors
`prometheus.operator.servicemonitors` discovers [ServiceMonitor](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.ServiceMonitor) resources in your kubernetes cluster and scrapes the targets they reference. This component performs three main functions:
1. Discover ServiceMonitor resources from your Kubernetes cluster.
-2. Discover Services and Endpoints in your cluster that match those ServiceMonitors.
-3. Scrape metrics from those Endpoints, and forward them to a receiver.
+1. Discover Services and Endpoints in your cluster that match those ServiceMonitors.
+1. Scrape metrics from those Endpoints, and forward them to a receiver.
-The default configuration assumes the agent is running inside a Kubernetes cluster, and uses the in-cluster config to access the Kubernetes API. It can be run from outside the cluster by supplying connection info in the `client` block, but network level access to discovered endpoints is required to scrape metrics from them.
+The default configuration assumes {{< param "PRODUCT_NAME" >}} is running inside a Kubernetes cluster, and uses the in-cluster configuration to access the Kubernetes API.
+It can be run from outside the cluster by supplying connection info in the `client` block, but network level access to discovered endpoints is required to scrape metrics from them.
-ServiceMonitors may reference secrets for authenticating to targets to scrape them. In these cases, the secrets are loaded and refreshed only when the ServiceMonitor is updated or when this component refreshes its' internal state, which happens on a 5-minute refresh cycle.
+ServiceMonitors may reference secrets for authenticating to targets to scrape them.
+In these cases, the secrets are loaded and refreshed only when the ServiceMonitor is updated or when this component refreshes its internal state, which happens on a 5-minute refresh cycle.
## Usage
@@ -58,7 +60,7 @@ rule | [rule][] | Relabeling rules to apply to discovered targets. | no
scrape | [scrape][] | Default scrape configuration to apply to discovered targets. | no
selector | [selector][] | Label selector for which ServiceMonitors to discover. | no
selector > match_expression | [match_expression][] | Label selector expression for which ServiceMonitors to discover. | no
-clustering | [clustering][] | Configure the component for when the Agent is running in clustered mode. | no
+clustering | [clustering][] | Configure the component for when {{< param "PRODUCT_NAME" >}} is running in clustered mode. | no
The `>` symbol indicates deeper levels of nesting. For example, `client >
basic_auth` refers to a `basic_auth` block defined
@@ -77,9 +79,8 @@ inside a `client` block.
### client block
-The `client` block configures the Kubernetes client used to discover ServiceMonitors. If the `client` block isn't provided, the default in-cluster
-configuration with the service account of the running Grafana Agent pod is
-used.
+The `client` block configures the Kubernetes client used to discover ServiceMonitors.
+If the `client` block isn't provided, the default in-cluster configuration with the service account of the running {{< param "PRODUCT_ROOT_NAME" >}} pod is used.
The following arguments are supported:
@@ -163,7 +164,7 @@ Name | Type | Description | Default | Required
---- | ---- | ----------- | ------- | --------
`enabled` | `bool` | Enables sharing targets with other cluster nodes. | `false` | yes
-When the agent is using [using clustering][], and `enabled` is set to true,
+When {{< param "PRODUCT_NAME" >}} is using [using clustering][], and `enabled` is set to true,
then this component instance opts-in to participating in
the cluster to distribute scrape load between all cluster nodes.
@@ -182,7 +183,7 @@ sharding where _all_ nodes have to be re-distributed, as only 1/N of the
target's ownership is transferred, but is eventually consistent (rather than
fully consistent like hashmod sharding is).
-If the agent is _not_ running in clustered mode, then the block is a no-op, and
+If {{< param "PRODUCT_NAME" >}} is _not_ running in clustered mode, then the block is a no-op, and
`prometheus.operator.servicemonitors` scrapes every target it receives in its arguments.
[using clustering]: {{< relref "../../concepts/clustering.md" >}}
@@ -245,7 +246,7 @@ prometheus.operator.servicemonitors "services" {
}
```
-This example will apply additional relabel rules to discovered targets to filter by hostname. This may be useful if running the agent as a DaemonSet.
+This example will apply additional relabel rules to discovered targets to filter by hostname. This may be useful if running {{< param "PRODUCT_NAME" >}} as a DaemonSet.
```river
prometheus.operator.servicemonitors "services" {
diff --git a/docs/sources/flow/reference/components/prometheus.receive_http.md b/docs/sources/flow/reference/components/prometheus.receive_http.md
index 54583a453ed5..321196c497fe 100644
--- a/docs/sources/flow/reference/components/prometheus.receive_http.md
+++ b/docs/sources/flow/reference/components/prometheus.receive_http.md
@@ -13,7 +13,7 @@ title: prometheus.receive_http
`prometheus.receive_http` listens for HTTP requests containing Prometheus metric samples and forwards them to other components capable of receiving metrics.
-The HTTP API exposed is compatible with [Prometheus `remote_write` API][prometheus-remote-write-docs]. This means that other [`prometheus.remote_write`][prometheus.remote_write] components can be used as a client and send requests to `prometheus.receive_http` which enables using the Agent as a proxy for prometheus metrics.
+The HTTP API exposed is compatible with the [Prometheus `remote_write` API][prometheus-remote-write-docs]. This means that other [`prometheus.remote_write`][prometheus.remote_write] components can be used as clients and send requests to `prometheus.receive_http`, which enables using {{< param "PRODUCT_ROOT_NAME" >}} as a proxy for Prometheus metrics.
[prometheus.remote_write]: {{< relref "./prometheus.remote_write.md" >}}
[prometheus-remote-write-docs]: https://prometheus.io/docs/prometheus/2.45/querying/api/#remote-write-receiver
@@ -24,7 +24,7 @@ The HTTP API exposed is compatible with [Prometheus `remote_write` API][promethe
prometheus.receive_http "LABEL" {
http {
listen_address = "LISTEN_ADDRESS"
- listen_port = PORT
+ listen_port = PORT
}
forward_to = RECEIVER_LIST
}
@@ -32,23 +32,23 @@ prometheus.receive_http "LABEL" {
The component will start an HTTP server supporting the following endpoint:
-- `POST /api/v1/metrics/write` - send metrics to the component, which in turn will be forwarded to the receivers as configured in `forward_to` argument. The request format must match that of [Prometheus `remote_write` API][prometheus-remote-write-docs]. One way to send valid requests to this component is to use another Grafana Agent with a [`prometheus.remote_write`][prometheus.remote_write] component.
+- `POST /api/v1/metrics/write` - sends metrics to the component, which in turn forwards them to the receivers configured in the `forward_to` argument. The request format must match that of the [Prometheus `remote_write` API][prometheus-remote-write-docs]. One way to send valid requests to this component is to use another {{< param "PRODUCT_ROOT_NAME" >}} with a [`prometheus.remote_write`][prometheus.remote_write] component.
## Arguments
`prometheus.receive_http` supports the following arguments:
- Name | Type | Description | Default | Required
---------------|------------------|---------------------------------------|---------|----------
- `forward_to` | `list(receiver)` | List of receivers to send metrics to. | | yes
+Name | Type | Description | Default | Required
+-------------|------------------|---------------------------------------|---------|---------
+`forward_to` | `list(receiver)` | List of receivers to send metrics to. | | yes
## Blocks
The following blocks are supported inside the definition of `prometheus.receive_http`:
- Hierarchy | Name | Description | Required
------------|----------|----------------------------------------------------|----------
- `http` | [http][] | Configures the HTTP server that receives requests. | no
+Hierarchy | Name | Description | Required
+----------|----------|----------------------------------------------------|---------
+`http` | [http][] | Configures the HTTP server that receives requests. | no
[http]: #http
@@ -106,7 +106,7 @@ prometheus.remote_write "local" {
### Proxying metrics
-In order to send metrics to the `prometheus.receive_http` component defined in the previous example, another Grafana Agent can run with the following configuration:
+In order to send metrics to the `prometheus.receive_http` component defined in the previous example, another {{< param "PRODUCT_ROOT_NAME" >}} can run with the following configuration:
```river
// Collects metrics of localhost:12345
@@ -117,12 +117,12 @@ prometheus.scrape "agent_self" {
forward_to = [prometheus.remote_write.local.receiver]
}
-// Writes metrics to localhost:9999/api/v1/metrics/write - e.g. served by
+// Writes metrics to localhost:9999/api/v1/metrics/write - e.g. served by
// the prometheus.receive_http component from the example above.
prometheus.remote_write "local" {
endpoint {
url = "http://localhost:9999/api/v1/metrics/write"
- }
+ }
}
```
diff --git a/docs/sources/flow/reference/components/prometheus.remote_write.md b/docs/sources/flow/reference/components/prometheus.remote_write.md
index 64b3efd3bc26..7dc914859afa 100644
--- a/docs/sources/flow/reference/components/prometheus.remote_write.md
+++ b/docs/sources/flow/reference/components/prometheus.remote_write.md
@@ -220,7 +220,7 @@ The WAL serves two primary purposes:
* Populate in-memory cache after a process restart.
The WAL is located inside a component-specific directory relative to the
-storage path Grafana Agent is configured to use. See the
+storage path {{< param "PRODUCT_NAME" >}} is configured to use. See the
[`agent run` documentation][run] for how to change the storage path.
The `truncate_frequency` argument configures how often to clean up the WAL.
@@ -355,7 +355,7 @@ prometheus.remote_write "staging" {
// prometheus.remote_write component.
prometheus.scrape "demo" {
targets = [
- // Collect metrics from Grafana Agent's default HTTP listen address.
+ // Collect metrics from the default HTTP listen address.
{"__address__" = "127.0.0.1:12345"},
]
forward_to = [prometheus.remote_write.staging.receiver]
diff --git a/docs/sources/flow/reference/components/prometheus.scrape.md b/docs/sources/flow/reference/components/prometheus.scrape.md
index d51bfa30f963..eee17b3afac1 100644
--- a/docs/sources/flow/reference/components/prometheus.scrape.md
+++ b/docs/sources/flow/reference/components/prometheus.scrape.md
@@ -53,8 +53,8 @@ Name | Type | Description | Default | Required
`honor_timestamps` | `bool` | Indicator whether the scraped timestamps should be respected. | `true` | no
`params` | `map(list(string))` | A set of query parameters with which the target is scraped. | | no
`scrape_classic_histograms` | `bool` | Whether to scrape a classic histogram that is also exposed as a native histogram. | `false` | no
-`scrape_interval` | `duration` | How frequently to scrape the targets of this scrape config. | `"60s"` | no
-`scrape_timeout` | `duration` | The timeout for scraping targets of this config. | `"10s"` | no
+`scrape_interval` | `duration` | How frequently to scrape the targets of this scrape configuration. | `"60s"` | no
+`scrape_timeout` | `duration` | The timeout for scraping targets of this configuration. | `"10s"` | no
`metrics_path` | `string` | The HTTP resource path on which to fetch metrics from targets. | `/metrics` | no
`scheme` | `string` | The URL scheme with which to fetch metrics from targets. | | no
`body_size_limit` | `int` | An uncompressed response body larger than this many bytes causes the scrape to fail. 0 means no limit. | | no
@@ -122,7 +122,7 @@ Name | Type | Description | Default | Required
---- | ---- | ----------- | ------- | --------
`enabled` | `bool` | Enables sharing targets with other cluster nodes. | `false` | yes
-When the agent is [using clustering][], and `enabled` is set to true,
+When {{< param "PRODUCT_NAME" >}} is [using clustering][], and `enabled` is set to true,
then this `prometheus.scrape` component instance opts-in to participating in
the cluster to distribute scrape load between all cluster nodes.
@@ -142,7 +142,7 @@ sharding where _all_ nodes have to be re-distributed, as only 1/N of the
targets ownership is transferred, but is eventually consistent (rather than
fully consistent like hashmod sharding is).
-If the agent is _not_ running in clustered mode, then the block is a no-op and
+If {{< param "PRODUCT_NAME" >}} is _not_ running in clustered mode, then the block is a no-op and
`prometheus.scrape` scrapes every target it receives in its arguments.
[using clustering]: {{< relref "../../concepts/clustering.md" >}}
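+For example, a minimal sketch of a clustered scrape; the target address and the `prometheus.remote_write.staging` receiver are assumptions made for illustration:
+```river
+prometheus.scrape "default" {
+  targets    = [{"__address__" = "127.0.0.1:12345"}]
+  forward_to = [prometheus.remote_write.staging.receiver]
+  // Distribute these targets across all cluster nodes.
+  clustering {
+    enabled = true
+  }
+}
+```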
diff --git a/docs/sources/flow/reference/components/pyroscope.ebpf.md b/docs/sources/flow/reference/components/pyroscope.ebpf.md
index cb3d436cecf7..b732399b54da 100644
--- a/docs/sources/flow/reference/components/pyroscope.ebpf.md
+++ b/docs/sources/flow/reference/components/pyroscope.ebpf.md
@@ -19,7 +19,7 @@ title: pyroscope.ebpf
to the list of receivers passed in `forward_to`.
{{% admonition type="note" %}}
-To use the `pyroscope.ebpf` component you must run Grafana Agent as root and inside host pid namespace.
+To use the `pyroscope.ebpf` component you must run {{< param "PRODUCT_NAME" >}} as root and inside host pid namespace.
{{% /admonition %}}
You can specify multiple `pyroscope.ebpf` components by giving them different labels, however it is not recommended as
@@ -45,15 +45,15 @@ values.
| Name | Type | Description | Default | Required |
|---------------------------|--------------------------|-------------------------------------------------------------------------------------|---------|----------|
| `targets` | `list(map(string))` | List of targets to group profiles by container id | | yes |
-| `forward_to` | `list(ProfilesReceiver)` | List of receivers to send collected profiles to. | | yes |
-| `collect_interval` | `duration` | How frequently to collect profiles | `15s` | no |
-| `sample_rate` | `int` | How many times per second to collect profile samples | 97 | no |
-| `pid_cache_size` | `int` | The size of the pid -> proc symbols table LRU cache | 32 | no |
-| `build_id_cache_size` | `int` | The size of the elf file build id -> symbols table LRU cache | 64 | no |
-| `same_file_cache_size` | `int` | The size of the elf file -> symbols table LRU cache | 8 | no |
-| `container_id_cache_size` | `int` | The size of the pid -> container ID table LRU cache | 1024 | no |
-| `collect_user_profile` | `bool` | A flag to enable/disable collection of userspace profiles | true | no |
-| `collect_kernel_profile` | `bool` | A flag to enable/disable collection of kernelspace profiles | true | no |
+| `forward_to` | `list(ProfilesReceiver)` | List of receivers to send collected profiles to. | | yes |
+| `collect_interval` | `duration` | How frequently to collect profiles | `15s` | no |
+| `sample_rate` | `int` | How many times per second to collect profile samples | 97 | no |
+| `pid_cache_size` | `int` | The size of the pid -> proc symbols table LRU cache | 32 | no |
+| `build_id_cache_size` | `int` | The size of the elf file build id -> symbols table LRU cache | 64 | no |
+| `same_file_cache_size` | `int` | The size of the elf file -> symbols table LRU cache | 8 | no |
+| `container_id_cache_size` | `int` | The size of the pid -> container ID table LRU cache | 1024 | no |
+| `collect_user_profile` | `bool` | A flag to enable/disable collection of userspace profiles | true | no |
+| `collect_kernel_profile` | `bool` | A flag to enable/disable collection of kernelspace profiles | true | no |
| `demangle` | `string` | C++ demangle mode. Available options are: `none`, `simplified`, `templates`, `full` | `none` | no |
| `python_enabled` | `bool` | A flag to enable/disable python profiling | true | no |
@@ -192,9 +192,9 @@ Interpreted methods will display the interpreter function’s name rather than t
### Kubernetes discovery
In the following example, performance profiles are collected from pods on the same node, discovered using
-`discovery.kubernetes`. Pod selection relies on the `HOSTNAME` environment variable, which is a pod name if the agent is
-used as a Grafana agent helm chart. The `service_name` label is set
-to `{__meta_kubernetes_namespace}/{__meta_kubernetes_pod_container_name}` from kubernetes meta labels.
+`discovery.kubernetes`. Pod selection relies on the `HOSTNAME` environment variable, which is set to the pod name when {{< param "PRODUCT_ROOT_NAME" >}} is
+deployed with the {{< param "PRODUCT_ROOT_NAME" >}} Helm chart. The `service_name` label is set
+to `{__meta_kubernetes_namespace}/{__meta_kubernetes_pod_container_name}` from Kubernetes meta labels.
```river
discovery.kubernetes "all_pods" {
diff --git a/docs/sources/flow/reference/components/pyroscope.scrape.md b/docs/sources/flow/reference/components/pyroscope.scrape.md
index 1d7e514b6732..fcfec95f3362 100644
--- a/docs/sources/flow/reference/components/pyroscope.scrape.md
+++ b/docs/sources/flow/reference/components/pyroscope.scrape.md
@@ -54,8 +54,8 @@ Name | Type | Description | Default | Required
`forward_to` | `list(ProfilesReceiver)` | List of receivers to send scraped profiles to. | | yes
`job_name` | `string` | The job name to override the job label with. | component name | no
`params` | `map(list(string))` | A set of query parameters with which the target is scraped. | | no
-`scrape_interval` | `duration` | How frequently to scrape the targets of this scrape config. | `"15s"` | no
-`scrape_timeout` | `duration` | The timeout for scraping targets of this config. | `"15s"` | no
+`scrape_interval` | `duration` | How frequently to scrape the targets of this scrape configuration. | `"15s"` | no
+`scrape_timeout` | `duration` | The timeout for scraping targets of this configuration. | `"15s"` | no
`scheme` | `string` | The URL scheme with which to fetch metrics from targets. | | no
`bearer_token` | `secret` | Bearer token to authenticate with. | | no
`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no
@@ -94,7 +94,7 @@ The following blocks are supported inside the definition of `pyroscope.scrape`:
| profiling_config > profile.godeltaprof_mutex | [profile.godeltaprof_mutex][] | Collect [godeltaprof][] mutex profiles. | no |
| profiling_config > profile.godeltaprof_block | [profile.godeltaprof_block][] | Collect [godeltaprof][] block profiles. | no |
| profiling_config > profile.custom | [profile.custom][] | Collect custom profiles. | no |
-| clustering | [clustering][] | Configure the component for when the Agent is running in clustered mode. | no |
+| clustering | [clustering][] | Configure the component for when {{< param "PRODUCT_NAME" >}} is running in clustered mode. | no |
The `>` symbol indicates deeper levels of nesting. For example,
`oauth2 > tls_config` refers to a `tls_config` block defined inside
@@ -305,7 +305,7 @@ Name | Type | Description | Default | Required
---- | ---- | ----------- | ------- | --------
`enabled` | `bool` | Enables sharing targets with other cluster nodes. | `false` | yes
-When the agent is [using clustering][], and `enabled` is set to true,
+When {{< param "PRODUCT_NAME" >}} is [using clustering][], and `enabled` is set to true,
then this `pyroscope.scrape` component instance opts-in to participating in the
cluster to distribute scrape load between all cluster nodes.
@@ -314,11 +314,11 @@ subset per node, where each node is roughly assigned the same number of
targets. If the state of the cluster changes, such as a new node joins, then
the subset of targets to scrape per node will be recalculated.
-When clustering mode is enabled, all agents participating in the cluster must
+When clustering mode is enabled, all {{< param "PRODUCT_ROOT_NAME" >}} instances participating in the cluster must
use the same configuration file and have access to the same service discovery
APIs.
-If the agent is _not_ running in clustered mode, this block is a no-op.
+If {{< param "PRODUCT_NAME" >}} is _not_ running in clustered mode, this block is a no-op.
[using clustering]: {{< relref "../../concepts/clustering.md" >}}
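For example, a minimal sketch of a `pyroscope.scrape` component that opts in to clustering, assuming `discovery.kubernetes.pods` and `pyroscope.write.default` components are defined elsewhere in the same configuration:

```river
pyroscope.scrape "pods" {
  // Profiles for the discovered pods are shared across cluster nodes.
  targets         = discovery.kubernetes.pods.targets
  forward_to      = [pyroscope.write.default.receiver]
  scrape_interval = "30s"

  clustering {
    enabled = true
  }
}
```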
@@ -354,7 +354,7 @@ label `__address__` _must always_ be present and corresponds to the
`:` that is used for the scrape request.
The special label `service_name` is required and must always be present. If it's not specified, it is
-attempted to be inferred from multiple sources:
+attempted to be inferred from multiple sources:
- `__meta_kubernetes_pod_annotation_pyroscope_io_service_name` which is a `pyroscope.io/service_name` pod annotation.
- `__meta_kubernetes_namespace` and `__meta_kubernetes_pod_container_name`
- `__meta_docker_container_name`
@@ -392,7 +392,7 @@ can help pin down a scrape target.
## Example
-The following example sets up the scrape job with certain attributes (profiling config, targets) and lets it scrape two local applications (the Agent itself and Pyroscope).
+The following example sets up the scrape job with certain attributes (profiling configuration, targets) and lets it scrape two local applications ({{< param "PRODUCT_ROOT_NAME" >}} itself and Pyroscope).
The exposed profiles are sent over to the provided list of receivers, as defined by other components.
```river
diff --git a/docs/sources/flow/reference/components/pyroscope.write.md b/docs/sources/flow/reference/components/pyroscope.write.md
index 45ce439e338e..90e6c6c71d6c 100644
--- a/docs/sources/flow/reference/components/pyroscope.write.md
+++ b/docs/sources/flow/reference/components/pyroscope.write.md
@@ -21,7 +21,7 @@ to a series of user-supplied endpoints using [Pyroscope' Push API](/oss/pyroscop
Multiple `pyroscope.write` components can be specified by giving them
different labels.
-## Usage for Grafana Agent flow mode
+## Usage
```river
pyroscope.write "LABEL" {
diff --git a/docs/sources/flow/reference/components/remote.kubernetes.configmap.md b/docs/sources/flow/reference/components/remote.kubernetes.configmap.md
index aba9af2b33f3..c71b312c6149 100644
--- a/docs/sources/flow/reference/components/remote.kubernetes.configmap.md
+++ b/docs/sources/flow/reference/components/remote.kubernetes.configmap.md
@@ -11,7 +11,7 @@ title: remote.kubernetes.configmap
`remote.kubernetes.configmap` reads a ConfigMap from the Kubernetes API server and exposes its data for other components to consume.
-This can be useful anytime the agent needs data from a ConfigMap that is not directly mounted to the Grafana Agent pod.
+This can be useful anytime {{< param "PRODUCT_NAME" >}} needs data from a ConfigMap that is not directly mounted to the {{< param "PRODUCT_ROOT_NAME" >}} pod.
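For example, a minimal sketch of consuming the exported `data` map from another component; the namespace, ConfigMap name, and key are placeholder values:

```river
remote.kubernetes.configmap "endpoints" {
  namespace = "monitoring"
  name      = "agent-endpoints"
}

prometheus.remote_write "default" {
  endpoint {
    // Read the remote write URL from the ConfigMap instead of hardcoding it.
    url = remote.kubernetes.configmap.endpoints.data["remote_write_url"]
  }
}
```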
## Usage
@@ -68,7 +68,7 @@ refers to a `basic_auth` block defined inside a `client` block.
### client block
The `client` block configures the Kubernetes client used to discover Probes. If the `client` block isn't provided, the default in-cluster
-configuration with the service account of the running Grafana Agent pod is
+configuration with the service account of the running {{< param "PRODUCT_ROOT_NAME" >}} pod is
used.
The following arguments are supported:
diff --git a/docs/sources/flow/reference/components/remote.kubernetes.secret.md b/docs/sources/flow/reference/components/remote.kubernetes.secret.md
index d3996715c772..5a64fc1b0f9f 100644
--- a/docs/sources/flow/reference/components/remote.kubernetes.secret.md
+++ b/docs/sources/flow/reference/components/remote.kubernetes.secret.md
@@ -11,7 +11,7 @@ title: remote.kubernetes.secret
`remote.kubernetes.secret` reads a Secret from the Kubernetes API server and exposes its data for other components to consume.
-A common use case for this is loading credentials or other information from secrets that are not already mounted into the agent pod at deployment time.
+A common use case for this is loading credentials or other information from secrets that are not already mounted into the {{< param "PRODUCT_ROOT_NAME" >}} pod at deployment time.
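For example, a minimal sketch of wiring Secret data into a `prometheus.remote_write` component; the namespace, Secret name, keys, and URL are placeholder values:

```river
remote.kubernetes.secret "credentials" {
  namespace = "monitoring"
  name      = "metrics-credentials"
}

prometheus.remote_write "default" {
  endpoint {
    url = "https://prometheus.example.com/api/v1/write"

    basic_auth {
      // Secret values are exposed as secrets; nonsensitive() is needed for
      // fields that expect a plain string, such as username.
      username = nonsensitive(remote.kubernetes.secret.credentials.data["username"])
      password = remote.kubernetes.secret.credentials.data["password"]
    }
  }
}
```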
## Usage
@@ -68,8 +68,7 @@ refers to a `basic_auth` block defined inside a `client` block.
### client block
The `client` block configures the Kubernetes client used to discover Probes. If the `client` block isn't provided, the default in-cluster
-configuration with the service account of the running Grafana Agent pod is
-used.
+configuration with the service account of the running {{< param "PRODUCT_ROOT_NAME" >}} pod is used.
The following arguments are supported:
diff --git a/docs/sources/flow/reference/config-blocks/_index.md b/docs/sources/flow/reference/config-blocks/_index.md
index e757c4ccebe6..bf528e3a16e5 100644
--- a/docs/sources/flow/reference/config-blocks/_index.md
+++ b/docs/sources/flow/reference/config-blocks/_index.md
@@ -13,7 +13,7 @@ weight: 200
# Configuration blocks
Configuration blocks are optional top-level blocks that can be used to
-configure various parts of the Grafana Agent process. Each config block can
+configure various parts of the {{< param "PRODUCT_NAME" >}} process. Each configuration block can
only be defined once.
Configuration blocks are _not_ components, so they have no exports.
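For example, a minimal sketch of two configuration blocks declared at the top level of a configuration file:

```river
// Configuration blocks sit alongside component definitions in the same file.
logging {
  level  = "warn"
  format = "json"
}

tracing {
  sampling_fraction = 0.1
}
```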
diff --git a/docs/sources/flow/reference/config-blocks/argument.md b/docs/sources/flow/reference/config-blocks/argument.md
index 33817d148e2f..3e2f4e1a0153 100644
--- a/docs/sources/flow/reference/config-blocks/argument.md
+++ b/docs/sources/flow/reference/config-blocks/argument.md
@@ -17,7 +17,7 @@ input to a [Module][Modules]. `argument` blocks must be given a label which
determines the name of the argument.
The `argument` block may not be specified in the main configuration file given
-to Grafana Agent Flow.
+to {{< param "PRODUCT_NAME" >}}.
[Modules]: {{< relref "../../concepts/modules.md" >}}
@@ -35,11 +35,11 @@ argument "ARGUMENT_NAME" {}
The following arguments are supported:
-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`optional` | `bool` | Whether the argument may be omitted. | `false` | no
-`comment` | `string` | Description for the argument. | `false` | no
-`default` | `any` | Default value for the argument. | `null` | no
+Name | Type | Description | Default | Required
+-----------|----------|--------------------------------------|---------|---------
+`comment` | `string` | Description for the argument. | `false` | no
+`default` | `any` | Default value for the argument. | `null` | no
+`optional` | `bool` | Whether the argument may be omitted. | `false` | no
By default, all module arguments are required. The `optional` argument can be
used to mark the module argument as optional. When `optional` is `true`, the
@@ -59,7 +59,7 @@ value provided by the module loader.
## Example
-This example creates a module where agent metrics are collected. Collected
+This example creates a module where {{< param "PRODUCT_NAME" >}} metrics are collected. Collected
metrics are then forwarded to the argument specified by the loader:
```river
diff --git a/docs/sources/flow/reference/config-blocks/export.md b/docs/sources/flow/reference/config-blocks/export.md
index 3c0a019d865b..950455ffbbf4 100644
--- a/docs/sources/flow/reference/config-blocks/export.md
+++ b/docs/sources/flow/reference/config-blocks/export.md
@@ -12,12 +12,10 @@ title: export block
# export block
-`export` is an optional configuration block used to specify an emitted value of
-a [Module][Modules]. `export` blocks must be given a label which determine the
-name of the export.
+`export` is an optional configuration block used to specify an emitted value of a [Module][Modules].
+`export` blocks must be given a label which determines the name of the export.
-The `export` block may not be specified in the main configuration file given
-to Grafana Agent Flow.
+The `export` block may not be specified in the main configuration file given to {{< param "PRODUCT_NAME" >}}.
[Modules]: {{< relref "../../concepts/modules.md" >}}
@@ -33,22 +31,20 @@ export "ARGUMENT_NAME" {
The following arguments are supported:
-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`value` | `any` | Value to export. | yes
+Name | Type | Description | Default | Required
+--------|-------|------------------|---------|---------
+`value` | `any` | Value to export. | | yes
-The `value` argument determines what the value of the export will be. To expose
-an exported field of another component to the module loader, set `value` to an
-expression which references that exported value.
+The `value` argument determines what the value of the export will be.
+To expose an exported field of another component to the module loader, set `value` to an expression which references that exported value.
## Exported fields
-The `export` block does not export any fields.
+The `export` block doesn't export any fields.
## Example
-This example creates a module where the output of discovering Kubernetes pods
-and nodes are exposed to the module loader:
+This example creates a module where the output of discovering Kubernetes pods and nodes are exposed to the module loader:
```river
discovery.kubernetes "pods" {
diff --git a/docs/sources/flow/reference/config-blocks/http.md b/docs/sources/flow/reference/config-blocks/http.md
index 6caa6fb30b1a..39ffa5b2502c 100644
--- a/docs/sources/flow/reference/config-blocks/http.md
+++ b/docs/sources/flow/reference/config-blocks/http.md
@@ -12,9 +12,8 @@ title: http block
# http block
-`http` is an optional configuration block used to customize how Grafana Agent's
-HTTP server functions. `http` is specified without a label and can only be
-provided once per configuration file.
+`http` is an optional configuration block used to customize how the {{< param "PRODUCT_NAME" >}} HTTP server functions.
+`http` is specified without a label and can only be provided once per configuration file.
## Example
@@ -29,19 +28,18 @@ http {
## Arguments
-The `http` block supports no arguments and is configured completely through
-inner blocks.
+The `http` block supports no arguments and is configured completely through inner blocks.
## Blocks
The following blocks are supported inside the definition of `http`:
-Hierarchy | Block | Description | Required
---------- |--------------------------------|---------------------------------------------------------------| --------
-tls | [tls][] | Define TLS settings for the HTTP server. | no
-tls > windows_certificate_filter | [windows_certificate_filter][] | Configure Windows certificate store for all certificates. | no
-tls > windows_certificate_filter > server | [server][] | Configure server certificates for Windows certificate filter. | no
+Hierarchy | Block | Description | Required
+------------------------------------------|--------------------------------|---------------------------------------------------------------|---------
+tls | [tls][] | Define TLS settings for the HTTP server. | no
+tls > windows_certificate_filter | [windows_certificate_filter][] | Configure Windows certificate store for all certificates. | no
tls > windows_certificate_filter > client | [client][] | Configure client certificates for Windows certificate filter. | no
+tls > windows_certificate_filter > server | [server][] | Configure server certificates for Windows certificate filter. | no
[tls]: #tls-block
[windows_certificate_filter]: #windows-certificate-filter-block
@@ -53,14 +51,10 @@ tls > windows_certificate_filter > client | [client][] | Con
The `tls` block configures TLS settings for the HTTP server.
{{% admonition type="warning" %}}
-If you add the `tls` block and reload the configuration when Grafana
-Agent is running, existing connections will continue communicating over
-plaintext. Similarly, if you remove the `tls` block and reload the configuration
-when Grafana Agent is running, existing connections will continue
-communicating over TLS.
-
-To ensure all connections use TLS, configure the `tls` block before you start
-Grafana Agent.
+If you add the `tls` block and reload the configuration when {{< param "PRODUCT_NAME" >}} is running, existing connections will continue communicating over plaintext.
+Similarly, if you remove the `tls` block and reload the configuration when {{< param "PRODUCT_NAME" >}} is running, existing connections will continue communicating over TLS.
+
+To ensure all connections use TLS, configure the `tls` block before you start {{< param "PRODUCT_NAME" >}}.
{{% /admonition %}}
Name | Type | Description | Default | Required
@@ -178,13 +172,13 @@ will serve the found certificate even if it is not compatible with the specified
The `server` block is used to find the certificate to check the signer. If multiple certificates are found the
`windows_certificate_filter` will choose the certificate with the expiration farthest in the future.
-Name | Type | Description | Default | Required
----- |----------------|-------------------------------------------------------------------------------------------|---------| --------
-`store` | `string` | Name of the system store to look for the server Certificate, for example, LocalMachine, CurrentUser. | `""` | yes
-`system_store` | `string` | Name of the store to look for the server Certificate, for example, My, CA. | `""` | yes
-`issuer_common_names` | `list(string)` | Issuer common names to check against. | | no
-`template_id` | `string` | Server Template ID to match in ASN1 format, for example, "1.2.3". | `""` | no
-`refresh_interval` | `string` | How often to check for a new server certificate. | `"5m"` | no
+Name | Type | Description | Default | Required
+----------------------|----------------|------------------------------------------------------------------------------------------------------|---------|---------
+`store` | `string` | Name of the system store to look for the server Certificate, for example, LocalMachine, CurrentUser. | `""` | yes
+`system_store` | `string` | Name of the store to look for the server Certificate, for example, My, CA. | `""` | yes
+`issuer_common_names` | `list(string)` | Issuer common names to check against. | | no
+`template_id` | `string` | Server Template ID to match in ASN1 format, for example, "1.2.3". | `""` | no
+`refresh_interval` | `string` | How often to check for a new server certificate. | `"5m"` | no
@@ -192,9 +186,8 @@ Name | Type | Description
The `client` block is used to check the certificate presented to the server.
-Name | Type | Description | Default | Required
----- |----------------|--------------------------------------------------------|-----| --------
-`issuer_common_names` | `list(string)` | Issuer common names to check against. | | no
-`subject_regex` | `string` | Regular expression to match Subject name. | `""` | no
-`template_id` | `string` | Client Template ID to match in ASN1 format, for example, "1.2.3". | `""` | no
-
+Name | Type | Description | Default | Required
+----------------------|----------------|-------------------------------------------------------------------|---------|---------
+`issuer_common_names` | `list(string)` | Issuer common names to check against. | | no
+`subject_regex` | `string` | Regular expression to match Subject name. | `""` | no
+`template_id` | `string` | Client Template ID to match in ASN1 format, for example, "1.2.3". | `""` | no
diff --git a/docs/sources/flow/reference/config-blocks/logging.md b/docs/sources/flow/reference/config-blocks/logging.md
index d8e526094774..23f3e84e90e8 100644
--- a/docs/sources/flow/reference/config-blocks/logging.md
+++ b/docs/sources/flow/reference/config-blocks/logging.md
@@ -12,9 +12,8 @@ title: logging block
# logging block
-`logging` is an optional configuration block used to customize how Grafana
-Agent produces log messages. `logging` is specified without a label and can
-only be provided once per configuration file.
+`logging` is an optional configuration block used to customize how {{< param "PRODUCT_NAME" >}} produces log messages.
+`logging` is specified without a label and can only be provided once per configuration file.
## Example
@@ -29,11 +28,11 @@ logging {
The following arguments are supported:
-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`level` | `string` | Level at which log lines should be written | `"info"` | no
-`format` | `string` | Format to use for writing log lines | `"logfmt"` | no
-`write_to` | `list(LogsReceiver)` | List of receivers to send log entries to | | no
+Name | Type | Description | Default | Required
+-----------|----------------------|--------------------------------------------|------------|---------
+`level` | `string` | Level at which log lines should be written | `"info"` | no
+`format` | `string` | Format to use for writing log lines | `"logfmt"` | no
+`write_to` | `list(LogsReceiver)` | List of receivers to send log entries to | | no
### Log level
@@ -55,27 +54,19 @@ The following strings are recognized as valid log line formats:
### Log receivers
-The `write_to` argument allows the Agent to tee its log entries to one or more
-`loki.*` component log receivers in addition to the default [location][].
-This, for example can be the export of a `loki.write` component to ship log
-entries directly to Loki, or a `loki.relabel` component to add a certain label
-first.
+The `write_to` argument allows {{< param "PRODUCT_NAME" >}} to tee its log entries to one or more `loki.*` component log receivers in addition to the default [location][].
+For example, this can be the export of a `loki.write` component to ship log entries directly to Loki, or a `loki.relabel` component that adds a label first.
[location]: #log-location
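For example, a minimal sketch that tees log entries to a `loki.write` component; the Loki URL is a placeholder value:

```river
logging {
  level    = "debug"
  format   = "logfmt"
  // Send the process's own logs to Loki in addition to stderr.
  write_to = [loki.write.local.receiver]
}

loki.write "local" {
  endpoint {
    url = "http://localhost:3100/loki/api/v1/push"
  }
}
```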
## Log location
-Grafana Agent writes all logs to `stderr`.
+{{< param "PRODUCT_NAME" >}} writes all logs to `stderr`.
-When running Grafana Agent as a systemd service, view logs written to `stderr`
-through `journald`.
+When running {{< param "PRODUCT_NAME" >}} as a systemd service, view logs written to `stderr` through `journald`.
-When running Grafana Agent as a container, view logs written to `stderr`
-through `docker logs` or `kubectl logs`, depending on whether Docker or
-Kubernetes was used for deploying the agent.
+When running {{< param "PRODUCT_NAME" >}} as a container, view logs written to `stderr` through `docker logs` or `kubectl logs`, depending on whether Docker or Kubernetes was used for deploying {{< param "PRODUCT_NAME" >}}.
-When running Grafana Agent as a Windows service, logs are instead written as
-event logs; view logs through Event Viewer.
+When running {{< param "PRODUCT_NAME" >}} as a Windows service, logs are instead written as event logs. You can view the logs through Event Viewer.
-In other cases, redirect `stderr` of the Grafana Agent process to a file for
-logs to persist on disk.
+In other cases, redirect `stderr` of the {{< param "PRODUCT_NAME" >}} process to a file for logs to persist on disk.
diff --git a/docs/sources/flow/reference/config-blocks/tracing.md b/docs/sources/flow/reference/config-blocks/tracing.md
index b24d34ecbbfc..860c8e4c7984 100644
--- a/docs/sources/flow/reference/config-blocks/tracing.md
+++ b/docs/sources/flow/reference/config-blocks/tracing.md
@@ -12,9 +12,8 @@ title: tracing block
# tracing block
-`tracing` is an optional configuration block used to customize how Grafana Agent
-produces traces. `tracing` is specified without a label and can only be provided
-once per configuration file.
+`tracing` is an optional configuration block used to customize how {{< param "PRODUCT_NAME" >}} produces traces.
+`tracing` is specified without a label and can only be provided once per configuration file.
## Example
@@ -41,10 +40,10 @@ otelcol.exporter.otlp "tempo" {
The following arguments are supported:
-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`sampling_fraction` | `number` | Fraction of traces to keep. | `0.1` | no
-`write_to` | `list(otelcol.Consumer)` | Inputs from `otelcol` components to send traces to. | `[]` | no
+Name | Type | Description | Default | Required
+--------------------|--------------------------|-----------------------------------------------------|---------|---------
+`sampling_fraction` | `number` | Fraction of traces to keep. | `0.1` | no
+`write_to` | `list(otelcol.Consumer)` | Inputs from `otelcol` components to send traces to. | `[]` | no
The `write_to` argument controls which components to send traces to for
processing. The elements in the array can be any `otelcol` component that
@@ -63,10 +62,10 @@ kept.
The following blocks are supported inside the definition of `tracing`:
-Hierarchy | Block | Description | Required
---------- | ----- | ----------- | --------
-sampler | [sampler][] | Define custom sampling on top of the base sampling fraction. | no
-sampler > jaeger_remote | [jaeger_remote][] | Retrieve sampling information via a Jaeger remote sampler. | no
+Hierarchy | Block | Description | Required
+------------------------|-------------------|--------------------------------------------------------------|---------
+sampler | [sampler][] | Define custom sampling on top of the base sampling fraction. | no
+sampler > jaeger_remote | [jaeger_remote][] | Retrieve sampling information via a Jaeger remote sampler. | no
The `>` symbol indicates deeper levels of nesting. For example, `sampler >
jaeger_remote` refers to a `jaeger_remote` block defined inside an `sampler`
@@ -88,14 +87,14 @@ It is invalid to define more than one sampler to use in the `sampler` block.
The `jaeger_remote` block configures the retrieval of sampling information
through a remote server that exposes Jaeger sampling strategies.
-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`url` | `string` | URL to retrieve sampling strategies from. | `"http://127.0.0.1:5778/sampling"` | no
-`max_operations` | `number` | Limit number of operations which can have custom sampling. | `256` | no
-`refresh_interval` | `duration` | Frequency to poll the URL for new sampling strategies. | `"1m"` | no
+Name | Type | Description | Default | Required
+-------------------|------------|------------------------------------------------------------|------------------------------------|---------
+`url` | `string` | URL to retrieve sampling strategies from. | `"http://127.0.0.1:5778/sampling"` | no
+`max_operations` | `number` | Limit number of operations which can have custom sampling. | `256` | no
+`refresh_interval` | `duration` | Frequency to poll the URL for new sampling strategies. | `"1m"` | no
The remote sampling strategies are retrieved from the URL specified by the
-`url` argument, and polled for updates on a timer. The frequency for how often
+`url` argument, and polled for updates on a timer. The frequency for how often
polling occurs is controlled by the `refresh_interval` argument.
Requests to the remote sampling strategies server are made through an HTTP
diff --git a/docs/sources/flow/reference/stdlib/constants.md b/docs/sources/flow/reference/stdlib/constants.md
index 44753d08d26d..3caf5c336a7c 100644
--- a/docs/sources/flow/reference/stdlib/constants.md
+++ b/docs/sources/flow/reference/stdlib/constants.md
@@ -13,12 +13,12 @@ title: constants
# constants
The `constants` object exposes a list of constant values about the system
-Grafana Agent is running on:
+{{< param "PRODUCT_NAME" >}} is running on:
-* `constants.hostname`: The hostname of the machine Grafana Agent is running
+* `constants.hostname`: The hostname of the machine {{< param "PRODUCT_NAME" >}} is running
on.
-* `constants.os`: The operating system Grafana Agent is running on.
-* `constants.arch`: The architecture of the system Grafana Agent is running on.
+* `constants.os`: The operating system {{< param "PRODUCT_NAME" >}} is running on.
+* `constants.arch`: The architecture of the system {{< param "PRODUCT_NAME" >}} is running on.
## Examples
diff --git a/docs/sources/flow/reference/stdlib/env.md b/docs/sources/flow/reference/stdlib/env.md
index fd4d91fcefbb..49a65d1a6a8b 100644
--- a/docs/sources/flow/reference/stdlib/env.md
+++ b/docs/sources/flow/reference/stdlib/env.md
@@ -12,9 +12,8 @@ title: env
# env
-The `env` function gets the value of an environment variable from the system
-Grafana Agent is running on. If the environment variable does not exist, `env`
-returns an empty string.
+The `env` function gets the value of an environment variable from the system {{< param "PRODUCT_NAME" >}} is running on.
+If the environment variable does not exist, `env` returns an empty string.
## Examples
diff --git a/docs/sources/flow/reference/stdlib/format.md b/docs/sources/flow/reference/stdlib/format.md
index fb725b136a1c..be5d9cd754c1 100644
--- a/docs/sources/flow/reference/stdlib/format.md
+++ b/docs/sources/flow/reference/stdlib/format.md
@@ -58,9 +58,9 @@ The specification may contain the following verbs.
| `%%` | Literal percent sign, consuming no value. |
| `%t` | Convert to boolean and produce `true` or `false`. |
| `%b` | Convert to integer number and produce binary representation. |
-| `%d` | Convert to integer and produce decimal representation. |
-| `%o` | Convert to integer and produce octal representation. |
-| `%x` | Convert to integer and produce hexadecimal representation with lowercase letters. |
+| `%d` | Convert to integer and produce decimal representation. |
+| `%o` | Convert to integer and produce octal representation. |
+| `%x` | Convert to integer and produce hexadecimal representation with lowercase letters. |
| `%X` | Like `%x`, but use uppercase letters. |
| `%e` | Convert to number and produce scientific notation, like `-1.234456e+78`. |
| `%E` | Like `%e`, but use an uppercase `E` to introduce the exponent. |
diff --git a/docs/sources/flow/release-notes.md b/docs/sources/flow/release-notes.md
index 7c5b5aaeb7a9..ce731376aaed 100644
--- a/docs/sources/flow/release-notes.md
+++ b/docs/sources/flow/release-notes.md
@@ -6,21 +6,21 @@ aliases:
- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/release-notes/
- /docs/grafana-cloud/send-data/agent/flow/release-notes/
canonical: https://grafana.com/docs/agent/latest/flow/release-notes/
-description: Release notes for Grafana Agent flow mode
+description: Release notes for Grafana Agent Flow
menuTitle: Release notes
-title: Release notes for Grafana Agent flow mode
+title: Release notes for Grafana Agent Flow
weight: 999
---
-# Release notes for Grafana Agent flow mode
+# Release notes for {{< param "PRODUCT_NAME" >}}
-The release notes provide information about deprecations and breaking changes in Grafana Agent flow mode.
+The release notes provide information about deprecations and breaking changes in {{< param "PRODUCT_NAME" >}}.
-For a complete list of changes to Grafana Agent, with links to pull requests and related issues when available, refer to the [Changelog](https://github.com/grafana/agent/blob/main/CHANGELOG.md).
+For a complete list of changes to {{< param "PRODUCT_ROOT_NAME" >}}, with links to pull requests and related issues when available, refer to the [Changelog](https://github.com/grafana/agent/blob/main/CHANGELOG.md).
{{% admonition type="note" %}}
-These release notes are specific to Grafana Agent flow mode.
-Other release notes for the different Grafana Agent variants are contained on separate pages:
+These release notes are specific to {{< param "PRODUCT_NAME" >}}.
+Other release notes for the different {{< param "PRODUCT_ROOT_NAME" >}} variants are contained on separate pages:
* [Static mode release notes][release-notes-static]
* [Static mode Kubernetes operator release notes][release-notes-operator]
@@ -41,7 +41,7 @@ supports OTLP.
### Breaking change: Renamed `non_indexed_labels` Loki processing stage to `structured_metadata`.
-If you use the Loki processing stage in your Agent configuration, you must rename the `non_indexed_labels` pipeline stage definition to `structured_metadata`.
+If you use the Loki processing stage in your {{< param "PRODUCT_NAME" >}} configuration, you must rename the `non_indexed_labels` pipeline stage definition to `structured_metadata`.
Old configuration example:
@@ -58,7 +58,7 @@ stage.structured_metadata {
}
```
-### Breaking change: `otelcol.exporter.prometheus` scope labels updated.
+### Breaking change: `otelcol.exporter.prometheus` scope labels updated
There are 2 changes to the way scope labels work for this component.
@@ -92,7 +92,7 @@ prometheus.exporter.unix "example" { /* ... */ }
### Breaking change: The default value of `retry_on_http_429` is changed to `true` for the `queue_config` in `prometheus.remote_write`
The default value of `retry_on_http_429` is changed from `false` to `true` for the `queue_config` block in `prometheus.remote_write`
-so that the agent can retry sending and avoid data being lost for metric pipelines by default.
+so that {{< param "PRODUCT_ROOT_NAME" >}} can retry sending and avoid data being lost for metric pipelines by default.
* If you set the `retry_on_http_429` explicitly - no action is required.
* If you do not set `retry_on_http_429` explicitly and you do *not* want to retry on HTTP 429, make sure you set it to `false` as you upgrade to this new version.
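For example, a minimal sketch of explicitly keeping the previous behavior; the endpoint URL is a placeholder value:

```river
prometheus.remote_write "default" {
  endpoint {
    url = "https://prometheus.example.com/api/v1/write"

    queue_config {
      // Opt out of the new default and don't retry on HTTP 429 responses.
      retry_on_http_429 = false
    }
  }
}
```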
@@ -108,12 +108,12 @@ format. By default, the decompression of files is entirely disabled.
How to migrate:
-* If your agent never reads logs from files with
+* If {{< param "PRODUCT_NAME" >}} never reads logs from files with
extensions `.gz`, `.tar.gz`, `.z` or `.bz2` then no action is required.
- > You can check what are the file extensions your agent reads from by looking
+  > You can check which file extensions {{< param "PRODUCT_NAME" >}} reads from by looking
at the `path` label on `loki_source_file_file_bytes_total` metric.
-* If your agent extracts data from compressed files, please add the following
+* If {{< param "PRODUCT_NAME" >}} extracts data from compressed files, add the following
configuration block to your `loki.source.file` component:
```river
@@ -331,7 +331,7 @@ The change was made in PR [#18070](https://github.com/open-telemetry/opentelemet
The `remote_sampling` block in `otelcol.receiver.jaeger` has been an undocumented no-op configuration for some time, and has now been removed.
Customers are advised to use `otelcol.extension.jaeger_remote_sampling` instead.
-### Deprecation: `otelcol.exporter.jaeger` has been deprecated and will be removed in Agent v0.38.0.
+### Deprecation: `otelcol.exporter.jaeger` has been deprecated and will be removed in {{< param "PRODUCT_NAME" >}} v0.38.0
This is because Jaeger supports OTLP directly and OpenTelemetry Collector is also removing its
[Jaeger receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/jaegerexporter).
@@ -527,7 +527,7 @@ prometheus.exporter.unix { }
As first announced in v0.30.0, support for using the `EXPERIMENTAL_ENABLE_FLOW`
environment variable to enable Flow mode has been removed.
-To enable Flow mode, set the `AGENT_MODE` environment variable to `flow`.
+To enable {{< param "PRODUCT_NAME" >}}, set the `AGENT_MODE` environment variable to `flow`.
## v0.31
@@ -550,7 +550,7 @@ removed.
### Deprecation: `EXPERIMENTAL_ENABLE_FLOW` environment variable changed
-As part of graduating Grafana Agent Flow to beta, the
+As part of graduating {{< param "PRODUCT_NAME" >}} to beta, the
`EXPERIMENTAL_ENABLE_FLOW` environment variable is replaced by setting
`AGENT_MODE` to `flow`.
diff --git a/docs/sources/flow/setup/_index.md b/docs/sources/flow/setup/_index.md
index fe38e62eb99a..0a1fbe189c2c 100644
--- a/docs/sources/flow/setup/_index.md
+++ b/docs/sources/flow/setup/_index.md
@@ -5,14 +5,14 @@ aliases:
- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/
- /docs/grafana-cloud/send-data/agent/flow/setup/
canonical: https://grafana.com/docs/agent/latest/flow/setup/
-description: Learn how to install and configure Grafana Agent in flow mode
-menuTitle: Set up flow mode
-title: Set up Grafana Agent in flow mode
+description: Learn how to install and configure Grafana Agent Flow
+menuTitle: Set up Grafana Agent Flow
+title: Set up Grafana Agent Flow
weight: 50
---
-# Set up Grafana Agent in flow mode
+# Set up {{< param "PRODUCT_NAME" >}}
-This section includes information that helps you get Grafana Agent in flow mode installed and configured.
+This section includes information that helps you install and configure {{< param "PRODUCT_NAME" >}}.
{{< section >}}
diff --git a/docs/sources/flow/setup/configure/_index.md b/docs/sources/flow/setup/configure/_index.md
index 4af2c196da69..5b468138977a 100644
--- a/docs/sources/flow/setup/configure/_index.md
+++ b/docs/sources/flow/setup/configure/_index.md
@@ -5,20 +5,20 @@ aliases:
- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/configure/
- /docs/grafana-cloud/send-data/agent/flow/setup/configure/
canonical: https://grafana.com/docs/agent/latest/flow/setup/configure/
-description: Configure Grafana Agent in flow mode after it is installed
-menuTitle: Configure flow mode
-title: Configure Grafana Agent in flow mode
+description: Configure Grafana Agent Flow after it is installed
+menuTitle: Configure Grafana Agent Flow
+title: Configure Grafana Agent Flow
weight: 150
---
-# Configure Grafana Agent in flow mode
+# Configure {{< param "PRODUCT_NAME" >}}
-You can configure Grafana Agent in flow mode after it is installed. The default River configuration file for flow mode is located at:
+You can configure {{< param "PRODUCT_NAME" >}} after it is installed. The default River configuration file for {{< param "PRODUCT_NAME" >}} is located at:
* Linux: `/etc/grafana-agent-flow.river`
* macOS: `$(brew --prefix)/etc/grafana-agent-flow/config.river`
* Windows: `C:\Program Files\Grafana Agent Flow\config.river`
-This section includes information that helps you configure Grafana Agent in flow mode.
+This section includes information that helps you configure {{< param "PRODUCT_NAME" >}}.
{{< section >}}
diff --git a/docs/sources/flow/setup/configure/configure-kubernetes.md b/docs/sources/flow/setup/configure/configure-kubernetes.md
index 6a492b1190d9..a68017c3c248 100644
--- a/docs/sources/flow/setup/configure/configure-kubernetes.md
+++ b/docs/sources/flow/setup/configure/configure-kubernetes.md
@@ -5,15 +5,15 @@ aliases:
- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/configure/configure-kubernetes/
- /docs/grafana-cloud/send-data/agent/flow/setup/configure/configure-kubernetes/
canonical: https://grafana.com/docs/agent/latest/flow/setup/configure/configure-kubernetes/
-description: Learn how to configure Grafana Agent in flow mode on Kubernetes
+description: Learn how to configure Grafana Agent Flow on Kubernetes
menuTitle: Kubernetes
-title: Configure Grafana Agent in flow mode on Kubernetes
+title: Configure Grafana Agent Flow on Kubernetes
weight: 200
---
-# Configure Grafana Agent in flow mode on Kubernetes
+# Configure {{< param "PRODUCT_NAME" >}} on Kubernetes
-To configure Grafana Agent in flow mode on Kubernetes, perform the following steps:
+To configure {{< param "PRODUCT_NAME" >}} on Kubernetes, perform the following steps:
1. Download a local copy of [values.yaml][] for the Helm chart.
@@ -22,15 +22,13 @@ To configure Grafana Agent in flow mode on Kubernetes, perform the following ste
Refer to the inline documentation in the `values.yaml` for more information about each option.
-1. Run the following command in a terminal to upgrade your Grafana Agent
- installation:
+1. Run the following command in a terminal to upgrade your {{< param "PRODUCT_NAME" >}} installation:
```shell
helm upgrade RELEASE_NAME grafana/grafana-agent -f VALUES_PATH
```
- 1. Replace `RELEASE_NAME` with the name you used for your Grafana Agent
- installation.
+ 1. Replace `RELEASE_NAME` with the name you used for your {{< param "PRODUCT_NAME" >}} installation.
1. Replace `VALUES_PATH` with the path to your copy of `values.yaml` to use.
@@ -43,7 +41,7 @@ when using a `configMapGenerator` to generate the ConfigMap containing the
configuration. By default, the generator appends a hash to the name and patches
the resource mentioning it, triggering a rolling update.
-This behavior is undesirable for Grafana Agent because the startup time can be significant depending on the size of the Write-Ahead Log.
+This behavior is undesirable for {{< param "PRODUCT_NAME" >}} because the startup time can be significant depending on the size of the Write-Ahead Log.
You can use the [Helm chart][] sidecar container to watch the ConfigMap and trigger a dynamic reload.
The following is an example snippet of a `kustomization` that disables this behavior:
diff --git a/docs/sources/flow/setup/configure/configure-linux.md b/docs/sources/flow/setup/configure/configure-linux.md
index 60fb752d15ad..a7446dea9a7b 100644
--- a/docs/sources/flow/setup/configure/configure-linux.md
+++ b/docs/sources/flow/setup/configure/configure-linux.md
@@ -5,15 +5,15 @@ aliases:
- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/configure/configure-linux/
- /docs/grafana-cloud/send-data/agent/flow/setup/configure/configure-linux/
canonical: https://grafana.com/docs/agent/latest/flow/setup/configure/configure-linux/
-description: Learn how to configure Grafana Agent in flow mode on Linux
+description: Learn how to configure Grafana Agent Flow on Linux
menuTitle: Linux
-title: Configure Grafana Agent in flow mode on Linux
+title: Configure Grafana Agent Flow on Linux
weight: 300
---
-# Configure Grafana Agent in flow mode on Linux
+# Configure {{< param "PRODUCT_NAME" >}} on Linux
-To configure Grafana Agent in flow mode on Linux, perform the following steps:
+To configure {{< param "PRODUCT_NAME" >}} on Linux, perform the following steps:
1. Edit the default configuration file at `/etc/grafana-agent-flow.river`.
@@ -33,7 +33,7 @@ To change the configuration file used by the service, perform the following step
1. Change the contents of the `CONFIG_FILE` environment variable to point to
the new configuration file to use.
-1. Restart the Grafana Agent service:
+1. Restart the {{< param "PRODUCT_NAME" >}} service:
```shell
sudo systemctl restart grafana-agent-flow
@@ -41,12 +41,12 @@ To change the configuration file used by the service, perform the following step
## Pass additional command-line flags
-By default, the Grafana Agent service launches with the [run][]
+By default, the {{< param "PRODUCT_NAME" >}} service launches with the [run][]
command, passing the following flags:
* `--storage.path=/var/lib/grafana-agent-flow`
-To pass additional command-line flags to the Grafana Agent binary, perform
+To pass additional command-line flags to the {{< param "PRODUCT_NAME" >}} binary, perform
the following steps:
1. Edit the environment file for the service:
@@ -57,7 +57,7 @@ the following steps:
1. Change the contents of the `CUSTOM_ARGS` environment variable to specify
command-line flags to pass.
-1. Restart the Grafana Agent service:
+1. Restart the {{< param "PRODUCT_NAME" >}} service:
```shell
sudo systemctl restart grafana-agent-flow
@@ -68,14 +68,14 @@ refer to the documentation for the [run][] command.
## Expose the UI to other machines
-By default, Grafana Agent listens on the local network for its HTTP
+By default, {{< param "PRODUCT_NAME" >}} listens on the local network for its HTTP
server. This prevents other machines on the network from being able to access
the [UI for debugging][UI].
To expose the UI to other machines, complete the following steps:
1. Follow [Pass additional command-line flags](#pass-additional-command-line-flags)
- to edit command line flags passed to Grafana Agent, including the
+ to edit command line flags passed to {{< param "PRODUCT_NAME" >}}, including the
following customizations:
1. Add the following command line argument to `CUSTOM_ARGS`:
@@ -86,7 +86,7 @@ To expose the UI to other machines, complete the following steps:
Replace `LISTEN_ADDR` with an address which other machines on the
network have access to, like the network IP address of the machine
- Grafana Agent is running on.
+ {{< param "PRODUCT_NAME" >}} is running on.
To listen on all interfaces, replace `LISTEN_ADDR` with `0.0.0.0`.
diff --git a/docs/sources/flow/setup/configure/configure-macos.md b/docs/sources/flow/setup/configure/configure-macos.md
index 210e33b234a3..fd664fc1149d 100644
--- a/docs/sources/flow/setup/configure/configure-macos.md
+++ b/docs/sources/flow/setup/configure/configure-macos.md
@@ -5,33 +5,33 @@ aliases:
- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/configure/configure-macos/
- /docs/grafana-cloud/send-data/agent/flow/setup/configure/configure-macos/
canonical: https://grafana.com/docs/agent/latest/flow/setup/configure/configure-macos/
-description: Learn how to configure Grafana Agent in flow mode on macOS
+description: Learn how to configure Grafana Agent Flow on macOS
menuTitle: macOS
-title: Configure Grafana Agent in flow mode on macOS
+title: Configure Grafana Agent Flow on macOS
weight: 400
---
-# Configure Grafana Agent in flow mode on macOS
+# Configure {{< param "PRODUCT_NAME" >}} on macOS
-To configure Grafana Agent in flow mode on macOS, perform the following steps:
+To configure {{< param "PRODUCT_NAME" >}} on macOS, perform the following steps:
1. Edit the default configuration file at `$(brew --prefix)/etc/grafana-agent-flow/config.river`.
-1. Run the following command in a terminal to restart the Grafana Agent service:
+1. Run the following command in a terminal to restart the {{< param "PRODUCT_NAME" >}} service:
```shell
brew services restart grafana-agent-flow
```
-## Configure the Grafana Agent service
+## Configure the {{< param "PRODUCT_NAME" >}} service
{{% admonition type="note" %}}
Due to limitations in Homebrew, customizing the service used by
-Grafana Agent on macOS requires changing the Homebrew formula and
-reinstalling Grafana Agent.
+{{< param "PRODUCT_NAME" >}} on macOS requires changing the Homebrew formula and
+reinstalling {{< param "PRODUCT_NAME" >}}.
{{% /admonition %}}
-To customize the Grafana Agent service on macOS, perform the following
+To customize the {{< param "PRODUCT_NAME" >}} service on macOS, perform the following
steps:
1. Run the following command in a terminal:
@@ -40,23 +40,23 @@ steps:
brew edit grafana-agent-flow
```
- This will open the Grafana Agent Homebrew Formula in an editor.
+ This will open the {{< param "PRODUCT_NAME" >}} Homebrew Formula in an editor.
1. Modify the `service` section as desired to change things such as:
- * The River configuration file used by Grafana Agent.
- * Flags passed to the Grafana Agent binary.
+ * The River configuration file used by {{< param "PRODUCT_NAME" >}}.
+ * Flags passed to the {{< param "PRODUCT_NAME" >}} binary.
* Location of log files.
When you are done, save the file.
-1. Reinstall the Grafana Agent Formula by running the following command in a terminal:
+1. Reinstall the {{< param "PRODUCT_NAME" >}} Formula by running the following command in a terminal:
```shell
brew reinstall grafana-agent-flow
```
-1. Restart the Grafana Agent service by running the command in a terminal:
+1. Restart the {{< param "PRODUCT_NAME" >}} service by running the command in a terminal:
```shell
brew services restart grafana-agent-flow
@@ -64,20 +64,20 @@ steps:
## Expose the UI to other machines
-By default, Grafana Agent listens on the local network for its HTTP
+By default, {{< param "PRODUCT_NAME" >}} listens on the local network for its HTTP
server. This prevents other machines on the network from being able to access
the [UI for debugging][UI].
To expose the UI to other machines, complete the following steps:
-1. Follow [Configure the Grafana Agent service](#configure-the-grafana-agent-service)
- to edit command line flags passed to Grafana Agent, including the
+1. Follow [Configure the {{< param "PRODUCT_NAME" >}} service](#configure-the-grafana-agent-flow-service)
+ to edit command line flags passed to {{< param "PRODUCT_NAME" >}}, including the
following customizations:
1. Modify the line inside the `service` block containing
`--server.http.listen-addr=127.0.0.1:12345`, replacing `127.0.0.1` with
the address which other machines on the network have access to, like the
- network IP address of the machine Grafana Agent is running on.
+ network IP address of the machine {{< param "PRODUCT_NAME" >}} is running on.
To listen on all interfaces, replace `127.0.0.1` with `0.0.0.0`.
diff --git a/docs/sources/flow/setup/configure/configure-windows.md b/docs/sources/flow/setup/configure/configure-windows.md
index 6c4986b7dc35..f62014caac83 100644
--- a/docs/sources/flow/setup/configure/configure-windows.md
+++ b/docs/sources/flow/setup/configure/configure-windows.md
@@ -5,19 +5,19 @@ aliases:
- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/configure/configure-windows/
- /docs/grafana-cloud/send-data/agent/flow/setup/configure/configure-windows/
canonical: https://grafana.com/docs/agent/latest/flow/setup/configure/configure-windows/
-description: Learn how to configure Grafana Agent in flow mode on Windows
+description: Learn how to configure Grafana Agent Flow on Windows
menuTitle: Windows
-title: Configure Grafana Agent in flow mode on Windows
+title: Configure Grafana Agent Flow on Windows
weight: 500
---
-# Configure Grafana Agent in flow mode on Windows
+# Configure {{< param "PRODUCT_NAME" >}} on Windows
-To configure Grafana Agent in flow mode on Windows, perform the following steps:
+To configure {{< param "PRODUCT_NAME" >}} on Windows, perform the following steps:
1. Edit the default configuration file at `C:\Program Files\Grafana Agent Flow\config.river`.
-1. Restart the Grafana Agent service:
+1. Restart the {{< param "PRODUCT_NAME" >}} service:
1. Open the Windows Services manager (`services.msc`):
@@ -25,20 +25,20 @@ To configure Grafana Agent in flow mode on Windows, perform the following steps:
1. Type `services.msc` and click **OK**.
- 1. Right click on the service called **Grafana Agent Flow**.
+ 1. Right click on the service called **{{< param "PRODUCT_NAME" >}}**.
1. Click on **All Tasks > Restart**.
## Change command-line arguments
-By default, the Grafana Agent service will launch and pass the
-following arguments to the Grafana Agent binary:
+By default, the {{< param "PRODUCT_NAME" >}} service will launch and pass the
+following arguments to the {{< param "PRODUCT_NAME" >}} binary:
* `run`
* `C:\Program Files\Grafana Agent Flow\config.river`
* `--storage.path=C:\ProgramData\Grafana Agent Flow\data`
-To change the set of command-line arguments passed to the Grafana Agent
+To change the set of command-line arguments passed to the {{< param "PRODUCT_ROOT_NAME" >}}
binary, perform the following steps:
1. Open the Registry Editor:
@@ -51,9 +51,9 @@ binary, perform the following steps:
1. Double-click on the value called **Arguments***.
-1. In the dialog box, enter the new set of arguments to pass to the Grafana Agent binary.
+1. In the dialog box, enter the new set of arguments to pass to the {{< param "PRODUCT_ROOT_NAME" >}} binary.
-1. Restart the Grafana Agent service:
+1. Restart the {{< param "PRODUCT_NAME" >}} service:
1. Open the Windows Services manager (`services.msc`):
@@ -61,20 +61,20 @@ binary, perform the following steps:
1. Type `services.msc` and click **OK**.
- 1. Right click on the service called **Grafana Agent Flow**.
+ 1. Right click on the service called **{{< param "PRODUCT_NAME" >}}**.
1. Click on **All Tasks > Restart**.
## Expose the UI to other machines
-By default, Grafana Agent listens on the local network for its HTTP
+By default, {{< param "PRODUCT_NAME" >}} listens on the local network for its HTTP
server. This prevents other machines on the network from being able to access
the [UI for debugging][UI].
To expose the UI to other machines, complete the following steps:
1. Follow [Change command-line arguments](#change-command-line-arguments)
- to edit command line flags passed to Grafana Agent, including the
+ to edit command line flags passed to {{< param "PRODUCT_NAME" >}}, including the
following customizations:
1. Add the following command line argument:
@@ -85,7 +85,7 @@ To expose the UI to other machines, complete the following steps:
Replace `LISTEN_ADDR` with an address which other machines on the
network have access to, like the network IP address of the machine
- Grafana Agent is running on.
+ {{< param "PRODUCT_NAME" >}} is running on.
To listen on all interfaces, replace `LISTEN_ADDR` with `0.0.0.0`.
diff --git a/docs/sources/flow/setup/deploy-agent.md b/docs/sources/flow/setup/deploy-agent.md
index c55c707b3f8a..8328e03b65b6 100644
--- a/docs/sources/flow/setup/deploy-agent.md
+++ b/docs/sources/flow/setup/deploy-agent.md
@@ -5,11 +5,11 @@ aliases:
- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/deploy-agent/
- /docs/grafana-cloud/send-data/agent/flow/setup/deploy-agent/
canonical: https://grafana.com/docs/agent/latest/flow/setup/start-agent/
-description: Learn about possible deployment topologies for Grafana Agent
-menuTitle: Deploy Grafana Agent
-title: Grafana Agent deployment topologies
+description: Learn about possible deployment topologies for Grafana Agent Flow
+menuTitle: Deploy Grafana Agent Flow
+title: Grafana Agent Flow deployment topologies
weight: 900
---
-{{< docs/shared source="agent" lookup="/deploy-agent.md" version="" >}}
+{{< docs/shared source="agent" lookup="/deploy-agent.md" version="" >}}
diff --git a/docs/sources/flow/setup/install/_index.md b/docs/sources/flow/setup/install/_index.md
index 9142a2a8722c..4c6526600ac4 100644
--- a/docs/sources/flow/setup/install/_index.md
+++ b/docs/sources/flow/setup/install/_index.md
@@ -6,15 +6,15 @@ aliases:
- /docs/grafana-cloud/send-data/agent/flow/setup/install/
- /docs/sources/flow/install/
canonical: https://grafana.com/docs/agent/latest/flow/setup/install/
-description: Learn how to install Grafana Agent in flow mode
-menuTitle: Install flow mode
-title: Install Grafana Agent in flow mode
+description: Learn how to install Grafana Agent Flow
+menuTitle: Install Grafana Agent Flow
+title: Install Grafana Agent Flow
weight: 50
---
-# Install Grafana Agent in flow mode
+# Install {{< param "PRODUCT_NAME" >}}
-You can install Grafana Agent in flow mode on Docker, Kubernetes, Linux, macOS, or Windows.
+You can install {{< param "PRODUCT_NAME" >}} on Docker, Kubernetes, Linux, macOS, or Windows.
The following architectures are supported:
@@ -24,14 +24,14 @@ The following architectures are supported:
- FreeBSD: AMD64
{{% admonition type="note" %}}
-Installing Grafana Agent on other operating systems is possible, but is not recommended or supported.
+Installing {{< param "PRODUCT_NAME" >}} on other operating systems is possible, but is not recommended or supported.
{{% /admonition %}}
{{< section >}}
## Data collection
-By default, Grafana Agent sends anonymous usage information to Grafana Labs. Refer to [data collection][] for more information
+By default, {{< param "PRODUCT_NAME" >}} sends anonymous usage information to Grafana Labs. Refer to [data collection][] for more information
about what data is collected and how you can opt-out.
{{% docs/reference %}}
diff --git a/docs/sources/flow/setup/install/binary.md b/docs/sources/flow/setup/install/binary.md
index 3db7b71967a0..b94402d10280 100644
--- a/docs/sources/flow/setup/install/binary.md
+++ b/docs/sources/flow/setup/install/binary.md
@@ -6,26 +6,26 @@ aliases:
- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/install/binary/
- /docs/grafana-cloud/send-data/agent/flow/setup/install/binary/
canonical: https://grafana.com/docs/agent/latest/flow/setup/install/binary/
-description: Learn how to install Grafana Agent in flow mode as a standalone binary
+description: Learn how to install Grafana Agent Flow as a standalone binary
menuTitle: Standalone
-title: Install Grafana Agent in flow mode as a standalone binary
+title: Install Grafana Agent Flow as a standalone binary
weight: 600
---
-# Install Grafana Agent in flow mode as a standalone binary
+# Install {{< param "PRODUCT_NAME" >}} as a standalone binary
-Grafana Agent is distributed as a standalone binary for the following operating systems and architectures:
+{{< param "PRODUCT_NAME" >}} is distributed as a standalone binary for the following operating systems and architectures:
* Linux: AMD64, ARM64
* Windows: AMD64
* macOS: AMD64 (Intel), ARM64 (Apple Silicon)
* FreeBSD: AMD64
-## Download Grafana Agent
+## Download {{< param "PRODUCT_NAME" >}}
-To download Grafana Agent as a standalone binary, perform the following steps.
+To download {{< param "PRODUCT_NAME" >}} as a standalone binary, perform the following steps.
-1. Navigate to the current Grafana Agent [release](https://github.com/grafana/agent/releases) page.
+1. Navigate to the current {{< param "PRODUCT_ROOT_NAME" >}} [release](https://github.com/grafana/agent/releases) page.
1. Scroll down to the **Assets** section.
@@ -33,7 +33,7 @@ To download Grafana Agent as a standalone binary, perform the following steps.
1. Extract the package contents into a directory.
-1. If you are installing Grafana Agent on Linux, macOS, or FreeBSD, run the following command in a terminal:
+1. If you are installing {{< param "PRODUCT_NAME" >}} on Linux, macOS, or FreeBSD, run the following command in a terminal:
```shell
chmod +x BINARY_PATH
@@ -43,12 +43,11 @@ To download Grafana Agent as a standalone binary, perform the following steps.
## Next steps
-* [Start Grafana Agent][]
-* [Configure Grafana Agent][]
+* [Start {{< param "PRODUCT_NAME" >}}][Start]
+* [Configure {{< param "PRODUCT_NAME" >}}][Configure]
{{% docs/reference %}}
-[Start Grafana Agent]: "/docs/agent/ -> /docs/agent//flow/setup/start-agent.md#standalone-binary"
-[Start Grafana Agent]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/setup/start-agent.md#standalone-binary"
-[Configure Grafana Agent]: "/docs/agent/ -> /docs/agent//flow/setup/configure"
-[Configure Grafana Agent]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/setup/configure/"
+[Start]: "/docs/agent/ -> /docs/agent//flow/setup/start-agent.md#standalone-binary"
+[Start]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/setup/start-agent.md#standalone-binary"
+[Configure]: "/docs/agent/ -> /docs/agent//flow/setup/configure"
+[Configure]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/setup/configure/"
{{% /docs/reference %}}
diff --git a/docs/sources/flow/setup/install/docker.md b/docs/sources/flow/setup/install/docker.md
index 99c11a61e123..7b7cef043a2b 100644
--- a/docs/sources/flow/setup/install/docker.md
+++ b/docs/sources/flow/setup/install/docker.md
@@ -6,15 +6,15 @@ aliases:
- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/install/docker/
- /docs/grafana-cloud/send-data/agent/flow/setup/install/docker/
canonical: https://grafana.com/docs/agent/latest/flow/setup/install/docker/
-description: Learn how to install Grafana Agent in flow mode on Docker
+description: Learn how to install Grafana Agent Flow on Docker
menuTitle: Docker
-title: Run Grafana Agent in flow mode in a Docker container
+title: Run Grafana Agent Flow in a Docker container
weight: 100
---
-# Run Grafana Agent in flow mode in a Docker container
+# Run {{< param "PRODUCT_NAME" >}} in a Docker container
-Grafana Agent is available as a Docker container image on the following platforms:
+{{< param "PRODUCT_NAME" >}} is available as a Docker container image on the following platforms:
* [Linux containers][] for AMD64 and ARM64.
* [Windows containers][] for AMD64.
@@ -22,7 +22,7 @@ Grafana Agent is available as a Docker container image on the following platform
## Before you begin
* Install [Docker][] on your computer.
-* Create and save a Grafana Agent River configuration file on your computer, for example:
+* Create and save a {{< param "PRODUCT_NAME" >}} River configuration file on your computer, for example:
```river
logging {
@@ -33,7 +33,7 @@ Grafana Agent is available as a Docker container image on the following platform
## Run a Linux Docker container
-To run Grafana Agent in flow mode as a Linux Docker container, run the following command in a terminal window:
+To run {{< param "PRODUCT_NAME" >}} as a Linux Docker container, run the following command in a terminal window:
```shell
docker run \
@@ -46,7 +46,7 @@ docker run \
Replace `CONFIG_FILE_PATH` with the path of the configuration file on your host system.
-You can modify the last line to change the arguments passed to the Grafana Agent binary.
+You can modify the last line to change the arguments passed to the {{< param "PRODUCT_NAME" >}} binary.
Refer to the documentation for [run][] for more information about the options available to the `run` command.
> **Note:** Make sure you pass `--server.http.listen-addr=0.0.0.0:12345` as an argument as shown in the example above.
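
For example, a sketch of the full command with an extra argument appended to the final line. The image tag, container mount path, and `--storage.path` value are assumptions; keep the mount path consistent with your `-v` flag.

```shell
docker run \
  -e AGENT_MODE=flow \
  -v CONFIG_FILE_PATH:/etc/agent/config.river \
  -p 12345:12345 \
  grafana/agent:latest \
    run --server.http.listen-addr=0.0.0.0:12345 --storage.path=/tmp/agent-data /etc/agent/config.river
```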
@@ -55,7 +55,7 @@ Refer to the documentation for [run][] for more information about the options av
## Run a Windows Docker container
-To run Grafana Agent in flow mode as a Windows Docker container, run the following command in a terminal window:
+To run {{< param "PRODUCT_NAME" >}} as a Windows Docker container, run the following command in a terminal window:
```shell
docker run \
@@ -68,7 +68,7 @@ docker run \
Replace `CONFIG_FILE_PATH` with the path of the configuration file on your host system.
-You can modify the last line to change the arguments passed to the Grafana Agent binary.
+You can modify the last line to change the arguments passed to the {{< param "PRODUCT_NAME" >}} binary.
Refer to the documentation for [run][] for more information about the options available to the `run` command.
@@ -77,7 +77,7 @@ Refer to the documentation for [run][] for more information about the options av
## Verify
-To verify that Grafana Agent is running successfully, navigate to and make sure the [Grafana Agent UI][UI] loads without error.
+To verify that {{< param "PRODUCT_NAME" >}} is running successfully, navigate to `http://localhost:12345` and make sure the {{< param "PRODUCT_NAME" >}} [UI][] loads without error.
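
As a quick command-line check, you can also probe the published UI port directly. This assumes the container publishes port 12345 as in the examples above.

```shell
# Expect an HTTP status line if the UI is reachable.
curl -sI http://localhost:12345/ | head -n 1
```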
[Linux containers]: #run-a-linux-docker-container
[Windows containers]: #run-a-windows-docker-container
diff --git a/docs/sources/flow/setup/install/kubernetes.md b/docs/sources/flow/setup/install/kubernetes.md
index 3909bd4462dc..7c042ef27d91 100644
--- a/docs/sources/flow/setup/install/kubernetes.md
+++ b/docs/sources/flow/setup/install/kubernetes.md
@@ -6,30 +6,30 @@ aliases:
- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/install/kubernetes/
- /docs/grafana-cloud/send-data/agent/flow/setup/install/kubernetes/
canonical: https://grafana.com/docs/agent/latest/flow/setup/install/kubernetes/
-description: Learn how to deploy Grafana Agent in flow mode on Kubernetes
+description: Learn how to deploy Grafana Agent Flow on Kubernetes
menuTitle: Kubernetes
-title: Deploy Grafana Agent in flow mode on Kubernetes
+title: Deploy Grafana Agent Flow on Kubernetes
weight: 200
---
-# Deploy Grafana Agent in flow mode on Kubernetes
+# Deploy {{< param "PRODUCT_NAME" >}} on Kubernetes
-Grafana Agent can be deployed on Kubernetes by using the Helm chart for Grafana Agent.
+You can deploy {{< param "PRODUCT_NAME" >}} on Kubernetes by using the Helm chart for {{< param "PRODUCT_ROOT_NAME" >}}.
## Before you begin
* Install [Helm][] on your computer.
-* Configure a Kubernetes cluster that you can use for Grafana Agent.
+* Configure a Kubernetes cluster that you can use for {{< param "PRODUCT_NAME" >}}.
* Configure your local Kubernetes context to point to the cluster.
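
For example, you can confirm which cluster your local context points to with standard `kubectl` commands. The context name below is illustrative.

```shell
# Show the context kubectl currently uses.
kubectl config current-context

# Switch to the cluster you want to deploy to, if needed.
kubectl config use-context my-cluster
```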
## Deploy
{{% admonition type="note" %}}
-These instructions show you how to install the generic [Helm chart](https://github.com/grafana/agent/tree/main/operations/helm/charts/grafana-agent) for Grafana
-Agent. You can deploy Grafana Agent either in static mode or flow mode. The Helm chart deploys Grafana Agent in flow mode by default.
+These instructions show you how to install the generic [Helm chart](https://github.com/grafana/agent/tree/main/operations/helm/charts/grafana-agent) for {{< param "PRODUCT_ROOT_NAME" >}}.
+You can deploy {{< param "PRODUCT_ROOT_NAME" >}} either in static mode or flow mode. The Helm chart deploys {{< param "PRODUCT_NAME" >}} by default.
{{% /admonition %}}
-To deploy Grafana Agent on Kubernetes using Helm, run the following commands in a terminal window:
+To deploy {{< param "PRODUCT_ROOT_NAME" >}} on Kubernetes using Helm, run the following commands in a terminal window:
1. Add the Grafana Helm chart repository:
@@ -43,26 +43,26 @@ To deploy Grafana Agent on Kubernetes using Helm, run the following commands in
helm repo update
```
-1. Install Grafana Agent:
+1. Install {{< param "PRODUCT_ROOT_NAME" >}}:
```shell
helm install RELEASE_NAME grafana/grafana-agent
```
- Replace `RELEASE_NAME` with a name to use for your Grafana Agent
+ Replace `RELEASE_NAME` with a name to use for your {{< param "PRODUCT_ROOT_NAME" >}}
installation, such as `grafana-agent-flow`.
-For more information on the Grafana Agent Helm chart, refer to the Helm chart documentation on [Artifact Hub][].
+For more information on the {{< param "PRODUCT_ROOT_NAME" >}} Helm chart, refer to the Helm chart documentation on [Artifact Hub][].
[Artifact Hub]: https://artifacthub.io/packages/helm/grafana/grafana-agent
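
If you need to customize the deployment, a common pattern is to pass your own values file to Helm. The file name, namespace, and keys inside it are yours to define; the chart's supported values are listed on [Artifact Hub][].

```shell
# Install or upgrade the chart with custom values from a local file.
helm upgrade --install RELEASE_NAME grafana/grafana-agent \
  --namespace monitoring --create-namespace \
  -f values.yaml
```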
## Next steps
-- [Configure Grafana Agent][]
+- [Configure {{< param "PRODUCT_NAME" >}}][Configure]
[Helm]: https://helm.sh
{{% docs/reference %}}
-[Configure Grafana Agent]: "/docs/agent/ -> /docs/agent//flow/setup/configure/configure-kubernetes.md"
-[Configure Grafana Agent]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/setup/configure/configure-kubernetes.md"
+[Configure]: "/docs/agent/ -> /docs/agent//flow/setup/configure/configure-kubernetes.md"
+[Configure]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/setup/configure/configure-kubernetes.md"
{{% /docs/reference %}}
diff --git a/docs/sources/flow/setup/install/linux.md b/docs/sources/flow/setup/install/linux.md
index f97b8f7eac64..563abdc503ee 100644
--- a/docs/sources/flow/setup/install/linux.md
+++ b/docs/sources/flow/setup/install/linux.md
@@ -6,19 +6,19 @@ aliases:
- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/install/linux/
- /docs/grafana-cloud/send-data/agent/flow/setup/install/linux/
canonical: https://grafana.com/docs/agent/latest/flow/setup/install/linux/
-description: Learn how to install Grafana Agent in flow mode on Linux
+description: Learn how to install Grafana Agent Flow on Linux
menuTitle: Linux
-title: Install or uninstall Grafana Agent in flow mode on Linux
+title: Install or uninstall Grafana Agent Flow on Linux
weight: 300
---
-# Install or uninstall Grafana Agent in flow mode on Linux
+# Install or uninstall {{< param "PRODUCT_NAME" >}} on Linux
-You can install Grafana Agent in flow mode as a systemd service on Linux.
+You can install {{< param "PRODUCT_NAME" >}} as a systemd service on Linux.
## Install
-To install Grafana Agent in flow mode on Linux, run the following commands in a terminal window.
+To install {{< param "PRODUCT_NAME" >}} on Linux, run the following commands in a terminal window.
1. Import the GPG key and add the Grafana package repository.
@@ -59,7 +59,7 @@ sslcacert=/etc/pki/tls/certs/ca-bundle.crt' | sudo tee /etc/yum.repos.d/grafana.
```
{{< /code >}}
-1. Install Grafana Agent.
+1. Install {{< param "PRODUCT_NAME" >}}.
{{< code >}}
```debian-ubuntu
@@ -77,15 +77,15 @@ sslcacert=/etc/pki/tls/certs/ca-bundle.crt' | sudo tee /etc/yum.repos.d/grafana.
## Uninstall
-To uninstall Grafana Agent on Linux, run the following commands in a terminal window.
+To uninstall {{< param "PRODUCT_NAME" >}} on Linux, run the following commands in a terminal window.
-1. Stop the systemd service for Grafana Agent.
+1. Stop the systemd service for {{< param "PRODUCT_NAME" >}}.
```All-distros
sudo systemctl stop grafana-agent-flow
```
-1. Uninstall Grafana Agent.
+1. Uninstall {{< param "PRODUCT_NAME" >}}.
{{< code >}}
```debian-ubuntu
@@ -119,12 +119,12 @@ To uninstall Grafana Agent on Linux, run the following commands in a terminal wi
## Next steps
-- [Start Grafana Agent][]
-- [Configure Grafana Agent][]
+- [Start {{< param "PRODUCT_NAME" >}}][Start]
+- [Configure {{< param "PRODUCT_NAME" >}}][Configure]
{{% docs/reference %}}
-[Start Grafana Agent]: "/docs/agent/ -> /docs/agent//flow/setup/start-agent.md#linux"
-[Start Grafana Agent]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/setup/start-agent.md#linux"
-[Configure Grafana Agent]: "/docs/agent/ -> /docs/agent//flow/setup/configure/configure-linux.md"
-[Configure Grafana Agent]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/setup/configure/configure-linux.md"
+[Start]: "/docs/agent/ -> /docs/agent//flow/setup/start-agent.md#linux"
+[Start]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/setup/start-agent.md#linux"
+[Configure]: "/docs/agent/ -> /docs/agent//flow/setup/configure/configure-linux.md"
+[Configure]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/setup/configure/configure-linux.md"
{{% /docs/reference %}}
diff --git a/docs/sources/flow/setup/install/macos.md b/docs/sources/flow/setup/install/macos.md
index 2cdf01adda17..8b276ce7bdc0 100644
--- a/docs/sources/flow/setup/install/macos.md
+++ b/docs/sources/flow/setup/install/macos.md
@@ -6,15 +6,15 @@ aliases:
- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/install/macos/
- /docs/grafana-cloud/send-data/agent/flow/setup/install/macos/
canonical: https://grafana.com/docs/agent/latest/flow/setup/install/macos/
-description: Learn how to install Grafana Agent in flow mode on macOS
+description: Learn how to install Grafana Agent Flow on macOS
menuTitle: macOS
-title: Install Grafana Agent in flow mode on macOS
+title: Install Grafana Agent Flow on macOS
weight: 400
---
-# Install Grafana Agent in flow mode on macOS
+# Install {{< param "PRODUCT_NAME" >}} on macOS
-You can install Grafana Agent in flow mode on macOS with Homebrew .
+You can install {{< param "PRODUCT_NAME" >}} on macOS with Homebrew.
{{% admonition type="note" %}}
The default prefix for Homebrew on Intel is `/usr/local`. The default prefix for Homebrew on Apple Silicon is `/opt/homebrew`. To verify the default prefix for Homebrew on your computer, open a terminal window and type `brew --prefix`.
@@ -26,7 +26,7 @@ The default prefix for Homebrew on Intel is `/usr/local`. The default prefix for
## Install
-To install Grafana Agent on macOS, run the following commands in a terminal window.
+To install {{< param "PRODUCT_NAME" >}} on macOS, run the following commands in a terminal window.
1. Add the Grafana Homebrew tap:
@@ -34,7 +34,7 @@ To install Grafana Agent on macOS, run the following commands in a terminal wind
brew tap grafana/grafana
```
-1. Install Grafana Agent:
+1. Install {{< param "PRODUCT_NAME" >}}:
```shell
brew install grafana-agent-flow
@@ -42,15 +42,15 @@ To install Grafana Agent on macOS, run the following commands in a terminal wind
## Upgrade
-To upgrade Grafana Agent on macOS, run the following commands in a terminal window.
+To upgrade {{< param "PRODUCT_NAME" >}} on macOS, run the following commands in a terminal window.
-1. Upgrade Grafana Agent:
+1. Upgrade {{< param "PRODUCT_NAME" >}}:
```shell
brew upgrade grafana-agent-flow
```
-1. Restart Grafana Agent:
+1. Restart {{< param "PRODUCT_NAME" >}}:
```shell
brew services restart grafana-agent-flow
@@ -58,7 +58,7 @@ To upgrade Grafana Agent on macOS, run the following commands in a terminal wind
## Uninstall
-To uninstall Grafana Agent on macOS, run the following command in a terminal window:
+To uninstall {{< param "PRODUCT_NAME" >}} on macOS, run the following command in a terminal window:
```shell
brew uninstall grafana-agent-flow
@@ -66,14 +66,14 @@ brew uninstall grafana-agent-flow
## Next steps
-- [Start Grafana Agent][]
-- [Configure Grafana Agent][]
+- [Start {{< param "PRODUCT_NAME" >}}][Start]
+- [Configure {{< param "PRODUCT_NAME" >}}][Configure]
[Homebrew]: https://brew.sh
{{% docs/reference %}}
-[Start Grafana Agent]: "/docs/agent/ -> /docs/agent//flow/setup/start-agent.md#macos"
-[Start Grafana Agent]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/setup/start-agent.md#macos"
-[Configure Grafana Agent]: "/docs/agent/ -> /docs/agent//flow/setup/configure/configure-macos.md"
-[Configure Grafana Agent]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/setup/configure/configure-macos.md"
+[Start]: "/docs/agent/ -> /docs/agent//flow/setup/start-agent.md#macos"
+[Start]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/setup/start-agent.md#macos"
+[Configure]: "/docs/agent/ -> /docs/agent//flow/setup/configure/configure-macos.md"
+[Configure]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/setup/configure/configure-macos.md"
{{% /docs/reference %}}
diff --git a/docs/sources/flow/setup/install/windows.md b/docs/sources/flow/setup/install/windows.md
index f9106e5f936f..5d1aadafdb1b 100644
--- a/docs/sources/flow/setup/install/windows.md
+++ b/docs/sources/flow/setup/install/windows.md
@@ -6,19 +6,19 @@ aliases:
- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/install/windows/
- /docs/grafana-cloud/send-data/agent/flow/setup/install/windows/
canonical: https://grafana.com/docs/agent/latest/flow/setup/install/windows/
-description: Learn how to install Grafana Agent in flow mode on Windows
+description: Learn how to install Grafana Agent Flow on Windows
menuTitle: Windows
-title: Install Grafana Agent in flow mode on Windows
+title: Install Grafana Agent Flow on Windows
weight: 500
---
-# Install Grafana Agent in flow mode on Windows
+# Install {{< param "PRODUCT_NAME" >}} on Windows
-You can install Grafana Agent in flow mode on Windows as a standard graphical install, or as a silent install.
+You can install {{< param "PRODUCT_NAME" >}} on Windows as a standard graphical install or as a silent install.
## Standard graphical install
-To do a standard graphical install of Grafana Agent on Windows, perform the following steps.
+To do a standard graphical install of {{< param "PRODUCT_NAME" >}} on Windows, perform the following steps.
1. Navigate to the [latest release][latest] on GitHub.
@@ -28,13 +28,13 @@ To do a standard graphical install of Grafana Agent on Windows, perform the foll
1. Unzip the downloaded file.
-1. Double-click on `grafana-agent-installer.exe` to install Grafana Agent.
+1. Double-click on `grafana-agent-installer.exe` to install {{< param "PRODUCT_NAME" >}}.
-Grafana Agent is installed into the default directory `C:\Program Files\Grafana Agent Flow`.
+{{< param "PRODUCT_NAME" >}} is installed into the default directory `C:\Program Files\Grafana Agent Flow`.
## Silent install
-To do a silent install of Grafana Agent on Windows, perform the following steps.
+To do a silent install of {{< param "PRODUCT_NAME" >}} on Windows, perform the following steps.
1. Navigate to the [latest release][latest] on GitHub.
@@ -61,29 +61,31 @@ To do a silent install of Grafana Agent on Windows, perform the following steps.
## Service Configuration
-Grafana Agent uses the Windows Registry `HKLM\Software\Grafana\Grafana Agent Flow` for service configuration.
+{{< param "PRODUCT_NAME" >}} uses the Windows Registry `HKLM\Software\Grafana\Grafana Agent Flow` for service configuration.
* `Arguments` (Type `REG_MULTI_SZ`) Each value represents a command-line argument passed to the grafana-agent-flow binary.
* `Environment` (Type `REG_MULTI_SZ`) Each value represents an environment variable in the form `KEY=VALUE` for the grafana-agent-flow binary.
## Uninstall
-You can uninstall Grafana Agent with Windows Remove Programs or `C:\Program Files\Grafana Agent\uninstaller.exe`. Uninstalling Grafana Agent stops the service and removes it from disk. This includes any configuration files in the installation directory.
+You can uninstall {{< param "PRODUCT_NAME" >}} with Windows Remove Programs or `C:\Program Files\Grafana Agent Flow\uninstaller.exe`.
+Uninstalling {{< param "PRODUCT_NAME" >}} stops the service and removes it from disk.
+This includes any configuration files in the installation directory.
-Grafana Agent can also be silently uninstalled by running `uninstall.exe /S` as Administrator.
+You can also silently uninstall {{< param "PRODUCT_NAME" >}} by running `uninstall.exe /S` as Administrator.
## Next steps
-- [Start Grafana Agent][]
-- [Configure Grafana Agent][]
+- [Start {{< param "PRODUCT_NAME" >}}][Start]
+- [Configure {{< param "PRODUCT_NAME" >}}][Configure]
[latest]: https://github.com/grafana/agent/releases/latest
{{% docs/reference %}}
-[Start Grafana Agent]: "/docs/agent/ -> /docs/agent//flow/setup/start-agent.md#windows"
-[Start Grafana Agent]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/setup/start-agent.md#windows"
-[Configure Grafana Agent]: "/docs/agent/ -> /docs/agent//flow/setup/configure/configure-windows.md"
-[Configure Grafana Agent]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/setup/configure/configure-windows.md"
+[Start]: "/docs/agent/ -> /docs/agent//flow/setup/start-agent.md#windows"
+[Start]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/setup/start-agent.md#windows"
+[Configure]: "/docs/agent/ -> /docs/agent//flow/setup/configure/configure-windows.md"
+[Configure]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/setup/configure/configure-windows.md"
[data collection]: "/docs/agent/ -> /docs/agent//data-collection.md"
[data collection]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/data-collection.md"
{{% /docs/reference %}}
diff --git a/docs/sources/flow/setup/start-agent.md b/docs/sources/flow/setup/start-agent.md
index 14d406461aef..a9ddb0a82e69 100644
--- a/docs/sources/flow/setup/start-agent.md
+++ b/docs/sources/flow/setup/start-agent.md
@@ -6,24 +6,24 @@ aliases:
- /docs/grafana-cloud/send-data/agent/flow/setup/start-agent/
canonical: https://grafana.com/docs/agent/latest/flow/setup/start-agent/
description: Learn how to start, restart, and stop Grafana Agent after it is installed
-menuTitle: Start flow mode
-title: Start, restart, and stop Grafana Agent in flow mode
+menuTitle: Start Grafana Agent Flow
+title: Start, restart, and stop Grafana Agent Flow
weight: 800
---
-# Start, restart, and stop Grafana Agent in flow mode
+# Start, restart, and stop {{< param "PRODUCT_NAME" >}}
-You can start, restart, and stop Grafana Agent after it is installed.
+You can start, restart, and stop {{< param "PRODUCT_NAME" >}} after it is installed.
## Linux
-Grafana Agent is installed as a [systemd][] service on Linux.
+{{< param "PRODUCT_NAME" >}} is installed as a [systemd][] service on Linux.
[systemd]: https://systemd.io/
-### Start Grafana Agent
+### Start {{< param "PRODUCT_NAME" >}}
-To start Grafana Agent, run the following command in a terminal window:
+To start {{< param "PRODUCT_NAME" >}}, run the following command in a terminal window:
```shell
sudo systemctl start grafana-agent-flow
@@ -35,33 +35,33 @@ sudo systemctl start grafana-agent-flow
sudo systemctl status grafana-agent-flow
```
-### Configure Grafana Agent to start at boot
+### Configure {{< param "PRODUCT_NAME" >}} to start at boot
-To automatically run Grafana Agent when the system starts, run the following command in a terminal window:
+To automatically run {{< param "PRODUCT_NAME" >}} when the system starts, run the following command in a terminal window:
```shell
sudo systemctl enable grafana-agent-flow.service
```
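
If the service isn't running yet, you can enable and start it in a single step:

```shell
sudo systemctl enable --now grafana-agent-flow.service
```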
-### Restart Grafana Agent
+### Restart {{< param "PRODUCT_NAME" >}}
-To restart Grafana Agent, run the following command in a terminal window:
+To restart {{< param "PRODUCT_NAME" >}}, run the following command in a terminal window:
```shell
sudo systemctl restart grafana-agent-flow
```
-### Stop Grafana Agent
+### Stop {{< param "PRODUCT_NAME" >}}
-To stop Grafana Agent, run the following command in a terminal window:
+To stop {{< param "PRODUCT_NAME" >}}, run the following command in a terminal window:
```shell
sudo systemctl stop grafana-agent-flow
```
-### View Grafana Agent logs on Linux
+### View {{< param "PRODUCT_NAME" >}} logs on Linux
-To view the Grafana Agent log files, run the following command in a terminal window:
+To view {{< param "PRODUCT_NAME" >}} log files, run the following command in a terminal window:
```shell
sudo journalctl -u grafana-agent-flow
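
# Optionally, follow new log lines as they are written:
sudo journalctl -u grafana-agent-flow -f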
@@ -69,17 +69,17 @@ sudo journalctl -u grafana-agent-flow
## macOS
-Grafana Agent is installed as a launchd service on macOS.
+{{< param "PRODUCT_NAME" >}} is installed as a launchd service on macOS.
-### Start Grafana Agent
+### Start {{< param "PRODUCT_NAME" >}}
-To start Grafana Agent, run the following command in a terminal window:
+To start {{< param "PRODUCT_NAME" >}}, run the following command in a terminal window:
```shell
brew services start grafana-agent-flow
```
-Grafana Agent automatically runs when the system starts.
+{{< param "PRODUCT_NAME" >}} automatically runs when the system starts.
(Optional) To verify that the service is running, run the following command in a terminal window:
@@ -87,35 +87,35 @@ Grafana Agent automatically runs when the system starts.
brew services info grafana-agent-flow
```
-### Restart Grafana Agent
+### Restart {{< param "PRODUCT_NAME" >}}
-To restart Grafana Agent, run the following command in a terminal window:
+To restart {{< param "PRODUCT_NAME" >}}, run the following command in a terminal window:
```shell
brew services restart grafana-agent-flow
```
-### Stop Grafana Agent
+### Stop {{< param "PRODUCT_NAME" >}}
-To stop Grafana Agent, run the following command in a terminal window:
+To stop {{< param "PRODUCT_NAME" >}}, run the following command in a terminal window:
```shell
brew services stop grafana-agent-flow
```
-### View Grafana Agent logs on macOS
+### View {{< param "PRODUCT_NAME" >}} logs on macOS
By default, logs are written to `$(brew --prefix)/var/log/grafana-agent-flow.log` and
`$(brew --prefix)/var/log/grafana-agent-flow.err.log`.
-If you followed [Configure the Grafana Agent service][] and changed the path where logs are written,
-refer to your current copy of the Grafana Agent formula to locate your log files.
+If you followed [Configure the {{< param "PRODUCT_NAME" >}} service][Configure] and changed the path where logs are written,
+refer to your current copy of the {{< param "PRODUCT_NAME" >}} formula to locate your log files.
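
For example, to follow the default log file in a terminal window (assuming you haven't changed the log path):

```shell
tail -f "$(brew --prefix)/var/log/grafana-agent-flow.log"
```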
## Windows
-Grafana Agent is installed as a Windows Service. The service is configured to automatically run on startup.
+{{< param "PRODUCT_NAME" >}} is installed as a Windows Service. The service is configured to automatically run on startup.
-To verify that Grafana Agent is running as a Windows Service:
+To verify that {{< param "PRODUCT_NAME" >}} is running as a Windows Service:
1. Open the Windows Services manager (services.msc):
@@ -123,12 +123,12 @@ To verify that Grafana Agent is running as a Windows Service:
1. Type: `services.msc` and click **OK**.
-1. Scroll down to find the **Grafana Agent Flow** service and verify that the **Status** is **Running**.
+1. Scroll down to find the **{{< param "PRODUCT_NAME" >}}** service and verify that the **Status** is **Running**.
-### View Grafana Agent logs
+### View {{< param "PRODUCT_NAME" >}} logs
-When running on Windows, Grafana Agent writes its logs to Windows Event
-Logs with an event source name of **Grafana Agent Flow**.
+When running on Windows, {{< param "PRODUCT_NAME" >}} writes its logs to Windows Event
+Logs with an event source name of **{{< param "PRODUCT_NAME" >}}**.
To view the logs, perform the following steps:
@@ -140,15 +140,15 @@ To view the logs, perform the following steps:
1. In the Event Viewer, click on **Windows Logs > Application**.
-1. Search for events with the source **Grafana Agent Flow**.
+1. Search for events with the source **{{< param "PRODUCT_NAME" >}}**.
## Standalone binary
-If you downloaded the standalone binary, you must run the agent from a terminal or command window.
+If you downloaded the standalone binary, you must run {{< param "PRODUCT_NAME" >}} from a terminal or command window.
-### Start Grafana Agent on Linux, macOS, or FreeBSD
+### Start {{< param "PRODUCT_NAME" >}} on Linux, macOS, or FreeBSD
-To start Grafana Agent on Linux, macOS, or FreeBSD, run the following command in a terminal window:
+To start {{< param "PRODUCT_NAME" >}} on Linux, macOS, or FreeBSD, run the following command in a terminal window:
```shell
AGENT_MODE=flow BINARY_PATH run CONFIG_PATH
@@ -156,12 +156,12 @@ AGENT_MODE=flow BINARY_PATH run CONFIG_PATH
Replace the following:
-* `BINARY_PATH`: The path to the Grafana Agent binary file.
-* `CONFIG_PATH`: The path to the Grafana Agent configuration file.
+* `BINARY_PATH`: The path to the {{< param "PRODUCT_NAME" >}} binary file.
+* `CONFIG_PATH`: The path to the {{< param "PRODUCT_NAME" >}} configuration file.
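
For example, a sketch with illustrative paths:

```shell
# Both paths are placeholders; substitute the binary you downloaded and your configuration file.
AGENT_MODE=flow /usr/local/bin/grafana-agent run /etc/grafana-agent-flow/config.river
```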
-### Start Grafana Agent on Windows
+### Start {{< param "PRODUCT_NAME" >}} on Windows
-To start Grafana Agent on Windows, run the following commands in a command prompt:
+To start {{< param "PRODUCT_NAME" >}} on Windows, run the following commands in a command prompt:
```cmd
set AGENT_MODE=flow
@@ -170,15 +170,15 @@ BINARY_PATH run CONFIG_PATH
Replace the following:
-* `BINARY_PATH`: The path to the Grafana Agent binary file.
-* `CONFIG_PATH`: The path to the Grafana Agent configuration file.
+* `BINARY_PATH`: The path to the {{< param "PRODUCT_NAME" >}} binary file.
+* `CONFIG_PATH`: The path to the {{< param "PRODUCT_NAME" >}} configuration file.
-### Set up Grafana Agent as a Linux systemd service
+### Set up {{< param "PRODUCT_NAME" >}} as a Linux systemd service
-You can set up and manage the standalone binary for Grafana Agent as a Linux systemd service.
+You can set up and manage the standalone binary for {{< param "PRODUCT_NAME" >}} as a Linux systemd service.
{{% admonition type="note" %}}
-These steps assume you have a default systemd and Grafana Agent configuration.
+These steps assume you have a default systemd and {{< param "PRODUCT_NAME" >}} configuration.
{{% /admonition %}}
1. To create a new user called `grafana-agent-flow`, run the following command in a terminal window:
@@ -213,7 +213,7 @@ These steps assume you have a default systemd and Grafana Agent configuration.
Replace the following:
- * `BINARY_PATH`: The path to the Grafana Agent binary file.
+ * `BINARY_PATH`: The path to the {{< param "PRODUCT_NAME" >}} binary file.
* `WORKING_DIRECTORY`: The path to a working directory, for example `/var/lib/grafana-agent-flow`.
1. Create an environment file in `/etc/default/` called `grafana-agent-flow` with the following contents:
@@ -227,7 +227,7 @@ These steps assume you have a default systemd and Grafana Agent configuration.
#
# Command line options for grafana-agent
#
- # The configuration file holding the agent config.
+ # The configuration file holding the Grafana Agent Flow configuration.
CONFIG_FILE="CONFIG_PATH"
# User-defined arguments to pass to the run command.
@@ -239,7 +239,7 @@ These steps assume you have a default systemd and Grafana Agent configuration.
Replace the following:
- * `CONFIG_PATH`: The path to the Grafana Agent configuration file.
+ * `CONFIG_PATH`: The path to the {{< param "PRODUCT_NAME" >}} configuration file.
1. To reload the service files, run the following command in a terminal window:
@@ -247,11 +247,11 @@ These steps assume you have a default systemd and Grafana Agent configuration.
sudo systemctl daemon-reload
```
-1. Use the [Linux](#linux) systemd commands to manage your standalone Linux installation of Grafana Agent.
+1. Use the [Linux](#linux) systemd commands to manage your standalone Linux installation of {{< param "PRODUCT_NAME" >}}.
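
For example, assuming you named the unit `grafana-agent-flow.service` as in the steps above:

```shell
# Start the new unit and confirm it's healthy.
sudo systemctl start grafana-agent-flow
sudo systemctl status grafana-agent-flow
```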
[release]: https://github.com/grafana/agent/releases/latest
{{% docs/reference %}}
-[Configure the Grafana Agent service]: "/docs/agent/ -> /docs/agent//flow/setup/configure/configure-macos.md#configure-the-grafana-agent-service"
-[Configure the Grafana Agent service]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/setup/configure/configure-macos.md#configure-the-grafana-agent-service"
+[Configure]: "/docs/agent/ -> /docs/agent//flow/setup/configure/configure-macos.md#configure-the-grafana-agent-service"
+[Configure]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/setup/configure/configure-macos.md#configure-the-grafana-agent-service"
{{% /docs/reference %}}
diff --git a/docs/sources/flow/tutorials/_index.md b/docs/sources/flow/tutorials/_index.md
index 0d6149f11721..d695d7fb1374 100644
--- a/docs/sources/flow/tutorials/_index.md
+++ b/docs/sources/flow/tutorials/_index.md
@@ -12,6 +12,6 @@ weight: 300
# Tutorials
-This section provides tutorials for learning how to use Grafana Agent Flow.
+This section provides tutorials for learning how to use {{< param "PRODUCT_NAME" >}}.
{{< section >}}
diff --git a/docs/sources/flow/tutorials/chaining.md b/docs/sources/flow/tutorials/chaining.md
index 3effc0409fa3..9be20dbc3ade 100644
--- a/docs/sources/flow/tutorials/chaining.md
+++ b/docs/sources/flow/tutorials/chaining.md
@@ -32,11 +32,11 @@ curl https://raw.githubusercontent.com/grafana/agent/main/docs/sources/flow/tuto
The `runt.sh` script does the following:
-1. Downloads the configurations necessary for Mimir, Grafana, and Grafana Agent.
-2. Downloads the docker image for Grafana Agent explicitly.
-3. Runs the docker-compose up command to bring all the services up.
+1. Downloads the configurations necessary for Mimir, Grafana, and {{< param "PRODUCT_ROOT_NAME" >}}.
+2. Downloads the Docker image for {{< param "PRODUCT_ROOT_NAME" >}} explicitly.
+3. Runs the `docker-compose up` command to bring all the services up.
-Allow Grafana Agent to run for two minutes, then navigate to [Grafana][] to see the Agent scrape metrics. The [node_exporter][] metrics also show up now.
+Allow {{< param "PRODUCT_ROOT_NAME" >}} to run for two minutes, then navigate to [Grafana][] to see {{< param "PRODUCT_ROOT_NAME" >}} scrape metrics. The [node_exporter][] metrics also show up now.
There are two scrapes, each sending metrics to one filter. Note that the `job` label lists the full name of the scrape component.
diff --git a/docs/sources/flow/tutorials/collecting-prometheus-metrics.md b/docs/sources/flow/tutorials/collecting-prometheus-metrics.md
index a12cb625b17a..ecaf186e4a3d 100644
--- a/docs/sources/flow/tutorials/collecting-prometheus-metrics.md
+++ b/docs/sources/flow/tutorials/collecting-prometheus-metrics.md
@@ -14,7 +14,7 @@ weight: 200
# Collect Prometheus metrics
-Grafana Agent is a telemetry collector with the primary goal of moving telemetry data from one location to another. In this tutorial, you'll set up a Grafana Agent in Flow mode.
+{{< param "PRODUCT_ROOT_NAME" >}} is a telemetry collector with the primary goal of moving telemetry data from one location to another. In this tutorial, you'll set up {{< param "PRODUCT_NAME" >}}.
## Prerequisites
@@ -30,21 +30,21 @@ curl https://raw.githubusercontent.com/grafana/agent/main/docs/sources/flow/tuto
The `runt.sh` script does the following:
-1. Downloads the configurations necessary for Mimir, Grafana, and Grafana Agent.
-2. Downloads the docker image for Grafana Agent explicitly.
+1. Downloads the configurations necessary for Mimir, Grafana, and {{< param "PRODUCT_ROOT_NAME" >}}.
+2. Downloads the Docker image for {{< param "PRODUCT_ROOT_NAME" >}} explicitly.
3. Runs the `docker-compose up` command to bring all the services up.
-Allow Grafana Agent to run for two minutes, then navigate to [Grafana][].
+Allow {{< param "PRODUCT_ROOT_NAME" >}} to run for two minutes, then navigate to [Grafana][].
![Dashboard showing agent_build_info metrics](/media/docs/agent/screenshot-grafana-agent-collect-metrics-build-info.png)
-This example scrapes the Grafana Agent's `http://localhost:12345/metrics` endpoint and pushes those metrics to the Mimir instance.
+This example scrapes the {{< param "PRODUCT_NAME" >}} `http://localhost:12345/metrics` endpoint and pushes those metrics to the Mimir instance.
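
You can also confirm the endpoint is serving metrics from a terminal, for example:

```shell
# Print the first few metric lines exposed by the agent's own endpoint.
curl -s http://localhost:12345/metrics | head
```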
-Navigate to `http://localhost:12345/graph` to view the Grafana Agent Flow UI.
+Navigate to `http://localhost:12345/graph` to view the {{< param "PRODUCT_NAME" >}} UI.
-![The Grafana Agent UI](/media/docs/agent/screenshot-grafana-agent-collect-metrics-graph.png)
+![The Grafana Agent Flow UI](/media/docs/agent/screenshot-grafana-agent-collect-metrics-graph.png)
-The Agent displays the component pipeline in a dependency graph. See [Scraping component](#scraping-component) and [Remote Write component](#remote-write-component) for details about the components used in this configuration.
+{{< param "PRODUCT_ROOT_NAME" >}} displays the component pipeline in a dependency graph. See [Scraping component](#scraping-component) and [Remote Write component](#remote-write-component) for details about the components used in this configuration.
Click the nodes to navigate to the associated component page. There, you can view the state, health information, and, if applicable, the debug information.
![Component information](/media/docs/agent/screenshot-grafana-agent-collect-metrics-comp-info.png)
@@ -69,7 +69,7 @@ prometheus.scrape "default" {
The `prometheus.scrape "default"` annotation indicates the name of the component, `prometheus.scrape`, and its label, `default`. All components must have a unique combination of name and, if applicable, label.
-The `targets` [attribute][] is an [argument][]. `targets` is a list of labels that specify the target via the special key `__address__`. The scraper is targeting the Agent's `/metrics` endpoint. Both `http` and `/metrics` are implied but can be overridden.
+The `targets` [attribute][] is an [argument][]. `targets` is a list of labels that specify the target via the special key `__address__`. The scraper is targeting the {{< param "PRODUCT_NAME" >}} `/metrics` endpoint. Both `http` and `/metrics` are implied but can be overridden.
The `forward_to` attribute is an argument that references the [export][] of the `prometheus.remote_write.prom` component. This is where the scraper will send the metrics for further processing.
@@ -87,10 +87,10 @@ prometheus.remote_write "prom" {
## Running without Docker
-To try out the Grafana Agent without using Docker:
-1. Download the Grafana Agent.
+To try out {{< param "PRODUCT_ROOT_NAME" >}} without using Docker:
+1. Download {{< param "PRODUCT_ROOT_NAME" >}}.
1. Set the environment variable `AGENT_MODE=flow`.
-1. Run the agent with `grafana-agent run `.
+1. Run {{< param "PRODUCT_ROOT_NAME" >}} with `grafana-agent run `, as shown in the sketch below.
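
A sketch of those steps on Linux or macOS, with illustrative file names:

```shell
# Assumes the grafana-agent binary and a config.river file are in the current directory.
export AGENT_MODE=flow
./grafana-agent run ./config.river
```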
[Docker]: https://www.docker.com/products/docker-desktop
diff --git a/docs/sources/flow/tutorials/filtering-metrics.md b/docs/sources/flow/tutorials/filtering-metrics.md
index 9314f0bebf15..ec942124ec91 100644
--- a/docs/sources/flow/tutorials/filtering-metrics.md
+++ b/docs/sources/flow/tutorials/filtering-metrics.md
@@ -32,12 +32,12 @@ curl https://raw.githubusercontent.com/grafana/agent/main/docs/sources/flow/tuto
The `runt.sh` script does the following:
-1. Downloads the configurations necessary for Mimir, Grafana and Grafana Agent.
-1. Downloads the docker image for Grafana Agent explicitly.
+1. Downloads the configurations necessary for Mimir, Grafana, and {{< param "PRODUCT_ROOT_NAME" >}}.
+1. Downloads the Docker image for {{< param "PRODUCT_ROOT_NAME" >}} explicitly.
1. Runs the `docker-compose up` command to bring all the services up.
-Allow Grafana Agent to run for two minutes, then navigate to [Grafana][] page and the `service` label will be there with the `api_server` value.
+Allow {{< param "PRODUCT_ROOT_NAME" >}} to run for two minutes, then navigate to the [Grafana][] page, and the `service` label will be there with the `api_server` value.
![Dashboard showing api_server](/media/docs/agent/screenshot-grafana-agent-filtering-metrics-filter.png)
@@ -49,7 +49,7 @@ Allow Grafana Agent to run for two minutes, then navigate to [Grafana][] page an
## Update the service value
-Open the `relabel.river` file that was downloaded and change the name of the service to `api_server_v2`, then run `bash ./runt.sh relabel.river`. Allow Grafana Agent to run for two minutes, then navigate to the [Grafana][] page, and the new label will be updated. The old value `api_server` may still show up in the graph but hovering over the lines will show that that value stopped being scraped and was replaced with `api_server_v2`.
+Open the `relabel.river` file that was downloaded and change the name of the service to `api_server_v2`, then run `bash ./runt.sh relabel.river`. Allow {{< param "PRODUCT_ROOT_NAME" >}} to run for two minutes, then navigate to the [Grafana][] page, and the new label will be updated. The old value `api_server` may still show up in the graph, but hovering over the lines shows that the value stopped being scraped and was replaced with `api_server_v2`.
![Updated dashboard showing api_server_v2](/media/docs/agent/screenshot-grafana-agent-filtering-metrics-transition.png)
diff --git a/docs/sources/shared/wal-data-retention.md b/docs/sources/shared/wal-data-retention.md
index 5be67691ec59..1d2caf844e17 100644
--- a/docs/sources/shared/wal-data-retention.md
+++ b/docs/sources/shared/wal-data-retention.md
@@ -80,7 +80,7 @@ before being pushed to the remote_write endpoint.
WAL corruption can occur when Grafana Agent unexpectedly stops while the latest WAL segments
are still being written to disk. For example, the host computer has a general disk failure
and crashes before you can stop Grafana Agent and other running services. When you restart Grafana
-Agent, the Agent verifies the WAL, removing any corrupt segments it finds. Sometimes, this repair
+Agent, it verifies the WAL, removing any corrupt segments it finds. Sometimes, this repair
is unsuccessful, and you must manually delete the corrupted WAL to continue.
If the WAL becomes corrupted, Grafana Agent writes error messages such as