# InfluxDB
Supported versions: InfluxDB 1.0+
The Influx integration requires write access to store the pipeline state (the last processed timestamp). State can be stored in the same database as the source data or in a separate database, installed, for example, alongside the agent.
State is written to the measurement `agent_timestamps` with the tag `pipeline_id` and the field `last_timestamp`. Example: `agent_timestamps,pipeline_id=<pipeline_id> last_timestamp=<timestamp>`
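If you need to check what the agent has stored, the state can be read back with a plain InfluxQL query. A minimal sketch, assuming the state lives in the same database and a hypothetical pipeline named `test_influx_file_short`:

```sql
SELECT "last_timestamp" FROM "agent_timestamps" WHERE "pipeline_id" = 'test_influx_file_short'
```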
Property | Type | Description |
---|---|---|
`type` | String | Specify the source type. Value: `influx` |
`name` | String | Unique source name, also used as the config file name |
`config` | Object | Source configuration |
All properties are required.
Property | Type | Required | Description |
---|---|---|---|
`host` | String | yes | URL of the InfluxDB API, e.g. `http://influx:8086` |
`db` | String | yes | InfluxDB database name |
`username` | String | no | InfluxDB username |
`password` | String | no | Password |
`offset` | String | no | Date in the format `dd/MM/yyyy HH:mm` from which to pull data, or a number of days (integer) to go back. If the string is empty, data is pulled from the beginning. |
Example:
```json
[{
  "type": "influx",
  "name": "influx_source",
  "config": {
    "host": "http://influx:8086",
    "db": "test",
    "username": "user",
    "password": "password",
    "offset": 90
  }
}]
```
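`host` and `db` must point at an InfluxDB 1.x instance the agent can reach with the given credentials. If you want to sanity-check them before creating the source, standard InfluxQL statements run through the `influx` CLI or the `/query` HTTP endpoint are enough; for example (the `test` database name is taken from the example above):

```sql
SHOW DATABASES
SHOW MEASUREMENTS ON "test"
```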
When creating a pipeline, you configure the following properties:
- Pipeline ID - unique pipeline identifier (use a human-readable name so you can easily refer to it later)
- Measurement name - metric name in InfluxDB to query. It is also added to the result as the dimension `measurement_category`
- Query - custom query to InfluxDB. Column names used in the SELECT query should also be added to the Dimensions/Values configuration fields. Always add the `{TIMESTAMP_CONDITION}` constant to the WHERE clause (see the "Advanced with Query" example)
- Value - enter column names separated with spaces, or `*` to select all fields
- Target type - how samples of the same metric are aggregated in Anodot. Valid values: `gauge` (average aggregation), `counter` (sum aggregation)
- Dimensions (note: values for dimensions should be stored as tags in InfluxDB)
  - Basic
    - Dimensions - names of columns delimited with spaces. These fields may be missing in a record
  - Advanced
    - Required dimensions - names of columns delimited with spaces. If one of these fields is missing in a record, the record goes to the error stage
    - Optional dimensions - names of columns delimited with spaces. These fields may be missing in a record
- (Advanced) Static dimensions - dimensions with static values to pass to Anodot. Format: `key1:value1 key2:value2 key3:value3`
- (Advanced) Tags - tags. Format: `key1:value1 key2:value2 key3:value3`
- Delay - how long to wait for data to arrive, default 0. Format: number + unit, e.g. `10s`, `15m`, `1h`
- Interval - how often to make a query, in seconds (integer). Default: 60 seconds
- (Advanced) Filtering condition - condition to add to the WHERE clause (use InfluxDB query syntax)
The pipeline forms the query `SELECT {dimensions},{values} FROM {measurement} WHERE (time >= {last_timestamp} AND time < {last_timestamp}+{interval} AND time < now()-{delay}) AND ({filtering_condition})` and runs it every N seconds, as defined in the `interval` config. The last processed timestamp is saved back to InfluxDB (measurement `agent_timestamps`, in the same database used by the pipeline).
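For illustration, a pipeline that reads the measurement `cpu_test` with dimensions `cpu`, `host`, `zone`, values `usage_active` and `usage_idle`, the default 60-second interval and no delay or filtering (the "Simple" example below), with the `now()-{delay}` bound reducing to `now()` for a zero delay, would run a query roughly like this sketch; the exact quoting and timestamp formatting used by the agent are an assumption here:

```sql
SELECT cpu, host, zone, usage_active, usage_idle
FROM cpu_test
WHERE (time >= '2023-01-01T00:00:00Z'
  AND time < '2023-01-01T00:01:00Z'
  AND time < now())
```

The first timestamp stands for the last processed timestamp read from `agent_timestamps`, and the second one for that timestamp plus the 60-second interval.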
Illegal characters `.`, `<` and space are replaced with underscores (`_`).
Property | Required | Property name in config file | Value type in config file | Description |
---|---|---|---|---|
Source | yes | `source` | String | Source config name |
Pipeline ID | yes | `pipeline_id` | String | Unique pipeline identifier (use a human-readable name so you can easily refer to it later) |
Measurement name | yes | `measurement_name` | String | Metric name in InfluxDB to query. Also added to the result as the dimension `measurement_category`. Required if `query` is not provided. |
Query | yes | `query` | String | Custom query. Required if `measurement_name` is not provided. Always add the `{TIMESTAMP_CONDITION}` constant to the WHERE clause so the agent can make queries based on the timestamp |
Value | yes | `values` | List of strings | Value column names; columns may only be numeric. To select all fields, use `["*"]` |
Values units | no | `units` | Object | Key-value pairs (`value:unit`). The value must be one of the `values` columns; the unit can be any string |
Target type | no | `target_type` | String | How samples of the same metric are aggregated in Anodot. Valid values: `gauge` (average aggregation), `counter` (sum aggregation) |
Dimensions | yes | `dimensions` | List of strings | Names of columns that will be used as dimensions; columns may only be strings. These fields may be missing in a record |
Static dimensions | no | `properties` | Object with key-value pairs | Dimensions with static values to pass to Anodot as dimensions |
Tags | no | `tags` | Object with key-value pairs | Tags |
Delay | no | `delay` | String | How long to wait for data to arrive, default 0. Format: number + unit, e.g. `10s`, `15m`, `1h` |
Interval | no | `interval` | Integer | How often to make a query, in seconds. Default: 60 seconds |
Filtering condition | no | `filtering` | String | Condition to add to the WHERE clause (use InfluxDB query syntax) |
Transformation | no | `transform` | String | See the transformations page |
Notifications | no | `notifications` | Object | See the notifications page |
## Advanced configs
- Dimensions - you can specify required and optional dimensions. If a required column is missing in a record, the record goes to the error stage. If an optional dimension is missing, the record continues to further processing:
```json
{
  "dimensions": {
    "required": ["cpu", "host"],
    "optional": ["zone"]
  }
}
```
Simple:
```json
[{
  "source": "test_influx",
  "pipeline_id": "test_influx_file_short",
  "measurement_name": "cpu_test",
  "values": {"usage_active": "gauge", "usage_idle": "gauge"},
  "dimensions": ["cpu", "host", "zone"]
}]
```
Advanced:
```json
[
  {
    "source": "test_influx",
    "pipeline_id": "test_influx_file_full",
    "measurement_name": "cpu_test",
    "values": {"usage_active": "gauge", "usage_idle": "gauge"},
    "units": {"usage_active": "sec", "usage_idle": "sec"},
    "dimensions": {
      "required": ["cpu", "host"],
      "optional": ["zone"]
    },
    "target_type": "gauge",
    "properties": {"key1": "value1", "key2": "value2", "key3": "value3"},
    "tags": {"key1": ["value1"], "key2": ["value2"], "key3": ["value3"]},
    "delay": "10m",
    "interval": "300",
    "filtering": "zone = 'GF'"
  }
]
```
Advanced with Query:
```json
[
  {
    "source": "test_influx",
    "pipeline_id": "test_influx_file_full",
    "query": "SELECT cpu_value,hdd_value FROM cpu_test,hdd_test WHERE {TIMESTAMP_CONDITION} AND 1=1",
    "values": {"cpu_value": "counter", "hdd_value": "counter"},
    "units": {"cpu_value": "%", "hdd_value": "GB"},
    "dimensions": {
      "required": [],
      "optional": ["zone", "hostname"]
    },
    "target_type": "gauge",
    "properties": {"key1": "value1", "key2": "value2", "key3": "value3"},
    "tags": {"key1": ["value1"], "key2": ["value2"], "key3": ["value3"]},
    "delay": "10m",
    "interval": "300",
    "filtering": "zone = 'GF'"
  }
]
```
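At run time the agent replaces `{TIMESTAMP_CONDITION}` in a custom query with the time-window predicate from the template above (last processed timestamp, interval of 300 seconds and delay of 10 minutes in this example). A rough sketch of the substituted query, with illustrative timestamps and quoting:

```sql
SELECT cpu_value, hdd_value
FROM cpu_test, hdd_test
WHERE (time >= '2023-01-01T00:00:00Z'
  AND time < '2023-01-01T00:05:00Z'
  AND time < now() - 10m)
  AND 1=1
```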