docs: fix old instances of prometheus.exporter.unix not needing a label #5543

Merged
4 changes: 0 additions & 4 deletions docs/sources/flow/concepts/configuration_language.md
@@ -73,10 +73,6 @@
The most common expression is to reference the exports of a component like
formed by merging the component's name (e.g., `local.file`), label (e.g.,
`password_file`), and export name (e.g., `content`), delimited by period.

For components that don't use labels, like
`prometheus.exporter.unix`, only combine the component name with
export name: `prometheus.exporter.unix.targets`.
Comment on lines -76 to -78 (Member Author):
There is no longer any component which doesn't use a label; all of them require specifying a label.
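
The reference syntax described in this file can be sketched as follows. This example is illustrative rather than part of the diff; the `password_file` label, file path, and Mimir URL are assumptions:

```river
// local.file exposes the file's contents as its `content` export.
local.file "password_file" {
  filename  = "/etc/secrets/password"
  is_secret = true
}

prometheus.remote_write "default" {
  endpoint {
    url = "http://mimir:9009/api/v1/push"

    basic_auth {
      username = "admin"
      // <component name>.<label>.<export name>, delimited by periods:
      password = local.file.password_file.content
    }
  }
}
```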


## Blocks

_Blocks_ are used to configure components and groups of attributes. Each block
2 changes: 1 addition & 1 deletion docs/sources/flow/config-language/components.md
@@ -24,7 +24,7 @@
re-evaluating their arguments and providing their exports.
## Configuring components
Components are created by defining a top-level River block. All components
are identified by their name, describing what the component is responsible for,
while some allow or require to provide an extra user-specified _label_.
and a user-specified _label_.

The [components docs]({{< relref "../reference/components/_index.md" >}}) contain a list
of all available components. Each one has a complete reference page, so getting
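As a sketch of why every component now requires a label: labels let multiple instances of the same component coexist, each addressable as `<name>.<label>`. This example is not part of the diff; the labels `node_a` and `node_b` are illustrative assumptions:

```river
prometheus.exporter.unix "node_a" { }
prometheus.exporter.unix "node_b" { }

prometheus.scrape "both" {
  // Each instance is referenced by component name plus its label.
  targets    = concat(prometheus.exporter.unix.node_a.targets,
                      prometheus.exporter.unix.node_b.targets)
  forward_to = [prometheus.remote_write.default.receiver]
}

prometheus.remote_write "default" {
  endpoint {
    url = "http://mimir:9009/api/v1/push"
  }
}
```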
4 changes: 2 additions & 2 deletions docs/sources/flow/reference/components/module.file.md
@@ -131,10 +131,10 @@
module.file "metrics" {
}
}

prometheus.exporter.unix { }
prometheus.exporter.unix "default" { }

prometheus.scrape "local_agent" {
targets = prometheus.exporter.unix.targets
targets = prometheus.exporter.unix.default.targets
forward_to = [module.file.metrics.exports.prometheus_remote_write.receiver]
scrape_interval = "10s"
}
8 changes: 4 additions & 4 deletions docs/sources/flow/reference/components/module.http.md
@@ -116,9 +116,9 @@
unhealthy, and the health includes the error from loading the module.
## Example

In this example, the `module.http` component loads a module from a locally running
HTTP server, polling for changes once every minute.
HTTP server, polling for changes once every minute.

The module sets up a Redis exporter and exports the list of targets to the parent config to scrape
The module sets up a Redis exporter and exports the list of targets to the parent config to scrape
and remote write.


Expand All @@ -130,10 +130,10 @@ module.http "remote_module" {
poll_frequency = "1m"
}

prometheus.exporter.unix { }
prometheus.exporter.unix "default" { }

prometheus.scrape "local_agent" {
targets = concat(prometheus.exporter.unix.targets, module.http.remote_module.exports.targets)
targets = concat(prometheus.exporter.unix.default.targets, module.http.remote_module.exports.targets)
forward_to = [module.http.metrics.exports.prometheus_remote_write.receiver]
scrape_interval = "10s"
}
4 changes: 2 additions & 2 deletions docs/sources/flow/reference/components/module.string.md
@@ -129,10 +129,10 @@
module.string "metrics" {
}
}

prometheus.exporter.unix { }
prometheus.exporter.unix "default" { }

prometheus.scrape "local_agent" {
targets = prometheus.exporter.unix.targets
targets = prometheus.exporter.unix.default.targets
forward_to = [module.string.metrics.exports.prometheus_remote_write.receiver]
scrape_interval = "10s"
}
@@ -41,7 +41,7 @@
Name | Type | Description | Default | Required
`sampling_initial` | `int` | Number of messages initially logged each second. | `2` | no
`sampling_thereafter` | `int` | Sampling rate after the initial messages are logged. | `500` | no

The `verbosity` argument must be one of `"basic"`, `"normal"`, or `"detailed"`.
The `verbosity` argument must be one of `"basic"`, `"normal"`, or `"detailed"`.

## Blocks

@@ -87,22 +87,22 @@
information.
This example scrapes prometheus unix metrics and writes them to the console:

```river
prometheus.exporter.unix { }
prometheus.exporter.unix "default" { }

prometheus.scrape "default" {
targets = prometheus.exporter.unix.targets
forward_to = [otelcol.receiver.prometheus.default.receiver]
targets = prometheus.exporter.unix.default.targets
forward_to = [otelcol.receiver.prometheus.default.receiver]
}

otelcol.receiver.prometheus "default" {
output {
metrics = [otelcol.exporter.logging.default.input]
}
output {
metrics = [otelcol.exporter.logging.default.input]
}
}

otelcol.exporter.logging "default" {
verbosity = "detailed"
sampling_initial = 1
sampling_thereafter = 1
verbosity = "detailed"
sampling_initial = 1
sampling_thereafter = 1
}
```
@@ -1,14 +1,14 @@
prometheus.exporter.unix {
set_collectors = ["cpu", "diskstats"]
prometheus.exporter.unix "default" {
set_collectors = ["cpu", "diskstats"]
}

prometheus.scrape "my_scrape_job" {
targets = prometheus.exporter.unix.targets
targets = prometheus.exporter.unix.default.targets
forward_to = [prometheus.remote_write.default.receiver]
}

prometheus.remote_write "default" {
endpoint {
url = "http://mimir:9009/api/v1/push"
}
}
}
40 changes: 20 additions & 20 deletions docs/sources/flow/tutorials/chaining.md
@@ -15,23 +15,23 @@
weight: 400

This tutorial shows how to use [multiple-inputs.river](/docs/agent/latest/flow/tutorials/assets/flow_configs/multiple-inputs.river) to send data to several different locations. This tutorial uses the same base as [Filtering metrics]({{< relref "./filtering-metrics" >}}).

A new concept introduced in Flow is chaining components together in a composable pipeline. This promotes the reusability of components while offering flexibility.
A new concept introduced in Flow is chaining components together in a composable pipeline. This promotes the reusability of components while offering flexibility.

## Prerequisites

* [Docker](https://www.docker.com/products/docker-desktop)

## Run the example

Run the following
Run the following

```bash
curl https://raw.githubusercontent.com/grafana/agent/main/docs/sources/flow/tutorials/assets/runt.sh -O && bash ./runt.sh multiple-inputs.river
```

The `runt.sh` script does:

1. Downloads the configs necessary for Mimir, Grafana and the Grafana Agent.
1. Downloads the configs necessary for Mimir, Grafana and the Grafana Agent.
2. Downloads the docker image for Grafana Agent explicitly.
3. Runs the docker-compose up command to bring all the services up.

@@ -43,37 +43,37 @@
There are two scrapes each sending metrics to one filter. Note the `job` label l

```river
prometheus.scrape "agent" {
targets = [{"__address__" = "localhost:12345"}]
forward_to = [prometheus.relabel.service.receiver]
targets = [{"__address__" = "localhost:12345"}]
forward_to = [prometheus.relabel.service.receiver]
}

prometheus.exporter.unix {
set_collectors = ["cpu", "diskstats"]
prometheus.exporter.unix "default" {
set_collectors = ["cpu", "diskstats"]
}

prometheus.scrape "unix" {
targets = prometheus.exporter.unix.targets
forward_to = [prometheus.relabel.service.receiver]
targets = prometheus.exporter.unix.default.targets
forward_to = [prometheus.relabel.service.receiver]
}

prometheus.relabel "service" {
rule {
source_labels = ["__name__"]
regex = "(.+)"
replacement = "api_server"
target_label = "service"
}
forward_to = [prometheus.remote_write.prom.receiver]
rule {
source_labels = ["__name__"]
regex = "(.+)"
replacement = "api_server"
target_label = "service"
}
forward_to = [prometheus.remote_write.prom.receiver]
}

prometheus.remote_write "prom" {
endpoint {
url = "http://mimir:9009/api/v1/push"
}
endpoint {
url = "http://mimir:9009/api/v1/push"
}
}
```

In the above Flow block, `prometheus.relabel.service` is being forwarded metrics from two sources `prometheus.scrape.agent` and `prometheus.exporter.unix`. This allows for a single relabel component to be used with any number of inputs.
In the above Flow block, `prometheus.relabel.service` is being forwarded metrics from two sources `prometheus.scrape.agent` and `prometheus.exporter.unix.default`. This allows for a single relabel component to be used with any number of inputs.

## Adding another relabel
