InfluxDB
Supported versions: InfluxDB 1.0+
- InfluxDB API url - ex. http://influx:8086 or https://influx:8086
- Username
- Password
- Database
- Write InfluxDB API url - ex. http://influx:8086 or https://influx:8086. The agent needs to store its state in Influx, so if you have read-only access to your data, here you can specify a different db for writing
- Username
- Password
- Write Database
- Initial offset - string: specify a date in format "dd/MM/yyyy HH:mm" from which to pull data, or a number of days (integer) ago. By default, data is pulled from the current timestamp.
> agent source create
Choose source (mongo, kafka, influx): influx
Enter unique name for this source config: influx_source
InfluxDB API url: http://influx:8086
Username: user
Password: password
Database: test
Initial offset []: 19/03/2019 12:53
Source config created
> agent source create
Choose source (mongo, kafka, influx): influx
Enter unique name for this source config: influx_source
InfluxDB API url: http://influx:8086
Username: user_ro
Password: password
Database: test
Write InfluxDB API url: http://influx_for_writing:8086
Username: user
Password: password
Write Database: test
Initial offset []: 90
Source config created
Property | Type | Description
---|---|---
type | String | Specify source type. Value - influx
name | String | Unique source name - also the config file name
config | Object | Source configuration

All properties are required.
Property | Type | Required | Description
---|---|---|---
host | String | yes | URL to the Influx API, e.g. "http://influx:8086"
db | String | yes | Influx database name
username | String | no | Influx username
password | String | no | Password
write_host | String | no | URL to the Influx API used for writing agent state, e.g. "http://influx:8086". If not specified, state is written to the host above
write_db | String | no | Influx database name for writing
write_username | String | no | Influx username
write_password | String | no | Password
offset | String | no | Date in format "dd/MM/yyyy HH:mm" from which to pull data, or a number of days (integer) ago. If the string is empty, data is pulled from the beginning
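The offset handling described above can be illustrated with a small Python sketch. Note that `parse_offset` is a hypothetical helper written for this page, not part of the agent's code; it simply shows the three accepted forms: an empty value, a date string, or a number of days ago.

```python
from datetime import datetime, timedelta

def parse_offset(offset):
    """Resolve an offset config value to a starting timestamp.

    Accepts a date string in "dd/MM/yyyy HH:mm" format or an integer
    number of days ago. Returns None for an empty value, meaning
    "pull from the beginning".
    """
    if offset in (None, ""):
        return None
    if isinstance(offset, int) or str(offset).isdigit():
        # Integer form: that many days before the current timestamp
        return datetime.utcnow() - timedelta(days=int(offset))
    # String form: an explicit start date
    return datetime.strptime(str(offset), "%d/%m/%Y %H:%M")

print(parse_offset("19/03/2019 12:53"))  # → 2019-03-19 12:53:00
```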
{
"type": "influx",
"name": "influx_source",
"config": {
"host": "http://influx:8086",
"db": "test",
"username": "user",
"password": "password",
"write_host": "http://influx_to_write:8086",
"write_db": "test",
"write_username": "user",
"write_password": "password",
"offset": 90
}
}
- Pipeline ID - unique pipeline identifier (use a human-readable name so you can easily use it further)
- Measurement name - metric name in InfluxDB from which to query. It is also added to the result as the dimension measurement_category
- Values config
  - Basic
    - Value - enter column names, separated with spaces
  - Advanced
    - Value type - column or constant
    - Value - if type is column, enter column names separated with spaces; if type is constant, enter the value
- Target type - represents how samples of the same metric are aggregated in Anodot. Valid values are: gauge (average aggregation), counter (sum aggregation)
- Dimensions (NOTE: values for dimensions should be stored as tags in InfluxDB)
  - Basic
    - Dimensions - names of columns delimited with spaces. These fields may be missing in a record
  - Advanced
    - Required dimensions - names of columns delimited with spaces. If these fields are missing in a record, it goes to the error stage
    - Optional dimensions - names of columns delimited with spaces. These fields may be missing in a record
- (Advanced) Additional properties - additional properties with static values to pass to Anodot as dimensions. Format: key1:value1 key2:value2 key3:value3
- Delay - how long to wait until data has arrived, default 0. Format: number + unit, e.g. 10s, 15m, 1h
- Interval - how often to make a query, integer, in seconds. Default - 60 seconds
- (Advanced) Filtering condition - condition to add to the WHERE clause (use InfluxDB query syntax)
The pipeline forms a query SELECT {dimensions},{values} FROM {measurement} WHERE (time >= {last_timestamp} AND time < {last_timestamp}+{interval} AND time < now()-{delay}) AND ({filtering_condition}) and runs it every n seconds as defined in the interval config. The last processed timestamp is saved back to InfluxDB (measurement agent_timestamps, same db used in the pipeline).
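The query construction above can be sketched in Python. `build_query` is a hypothetical helper for illustration only (the agent's actual implementation and its timestamp quoting may differ):

```python
from datetime import datetime, timedelta

def build_query(measurement, values, dimensions, last_timestamp,
                interval=60, delay="0s", filtering=None):
    """Render the polling query outlined in the pipeline description."""
    select = ",".join(dimensions + values)
    end = last_timestamp + timedelta(seconds=interval)
    # Window: from the last processed timestamp, one interval forward,
    # never closer to the present than the configured delay
    where = (f"time >= '{last_timestamp.isoformat()}' "
             f"AND time < '{end.isoformat()}' "
             f"AND time < now() - {delay}")
    if filtering:
        where = f"({where}) AND ({filtering})"
    return f"SELECT {select} FROM {measurement} WHERE {where}"

q = build_query("cpu", ["usage_active", "usage_idle"], ["cpu", "host"],
                datetime(2019, 3, 19, 12, 53),
                filtering="host = 'msk-air-trn1'")
print(q)
```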
> agent pipeline create
Choose source config (influx_source) [influx_source]:
Choose destination (http) [http]:
Pipeline ID (must be unique): influx_cpu
Measurement name: cpu
Value columns names: usage_active usage_idle
Target type (counter, gauge) [gauge]:
Dimensions: cpu host zone
Delay [0s]:
Interval, seconds [60]:
Created pipeline influx_cpu
>
> agent pipeline create -a
Choose source config (influx_source) [influx_source]:
Choose destination (http) [http]:
Pipeline ID (must be unique): influx_cpu
Measurement name: cpu
Value type (column, constant): constant
Value: 1
Target type (counter, gauge) [gauge]:
Required dimensions: cpu host
Optional dimensions: zone
Additional properties []: key1:value1 key2:value2 key3:value3
Delay [0s]: 3m
Interval, seconds [60]:
Filtering condition []: host = 'msk-air-trn1'
Created pipeline influx_cpu
Property | Required | Property name in config file | Value type in config file | Description
---|---|---|---|---
Source | yes | source | String | Source config name
Pipeline ID | yes | pipeline_id | String | Unique pipeline identifier (use a human-readable name so you can easily use it further)
Measurement name | yes | measurement_name | String | Metric name in InfluxDB from which to query. It is also added to the result as the dimension measurement_category
Value | yes | value | List of strings | Value column names; columns may be numeric only
Target type | no | target_type | String | Represents how samples of the same metric are aggregated in Anodot. Valid values are: gauge (average aggregation), counter (sum aggregation)
Dimensions | yes | dimensions | List of strings | Names of columns which will be used as dimensions; columns may be string only. These fields may be missing in a record
Additional properties | no | properties | Object with key-value pairs | Additional properties with static values to pass to Anodot as dimensions
Delay | no | delay | String | How long to wait until data has arrived, default 0. Format: number + unit, e.g. 10s, 15m, 1h
Interval | no | interval | Integer | How often to make a query, in seconds. Default - 60 seconds
Filtering condition | no | filtering | String | Condition to add to the WHERE clause (use InfluxDB query syntax)

When editing pipelines, only pipeline_id is required.
Advanced configs
- Value - the value can be a constant; just put a number instead of a list
{
"value": 1
}
- Dimensions - you can specify required and optional dimensions. If a required column is missing in the record, it goes to the error stage. If an optional dimension is missing, the record goes on to further processing
{
"dimensions": {
"required": ["cpu", "host"],
"optional": ["zone"]
}
}
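The required/optional routing can be sketched as follows. `split_record` is a hypothetical helper, not the agent's actual code; it only demonstrates the rule stated above:

```python
def split_record(record, required, optional):
    """Route a record: if any required dimension is missing, it goes to
    the error stage; missing optional dimensions are simply dropped."""
    missing = [d for d in required if d not in record]
    if missing:
        return "error", missing
    dims = {d: record[d] for d in required + optional if d in record}
    return "ok", dims

print(split_record({"cpu": "cpu0", "host": "h1"}, ["cpu", "host"], ["zone"]))
# → ('ok', {'cpu': 'cpu0', 'host': 'h1'})
print(split_record({"cpu": "cpu0"}, ["cpu", "host"], ["zone"]))
# → ('error', ['host'])
```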
Simple:
{
"source": "test_influx",
"pipeline_id": "test_influx_file_short",
"measurement_name": "cpu_test",
"value": ["usage_active", "usage_idle"],
"dimensions": ["cpu", "host", "zone"]
}
Advanced:
{
"source": "test_influx",
"pipeline_id": "test_influx_file_full",
"measurement_name": "cpu_test",
"value": 1,
"dimensions": {
"required": ["cpu", "host"],
"optional": ["zone"]
},
"target_type": "gauge",
"properties": {"key1": "value1", "key2": "value2", "key3": "value3"},
"delay": "10m",
"interval": 300,
"filtering": "zone = 'GF'"
}
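A quick sanity check on such a config file can be sketched in Python. The `REQUIRED` list mirrors the table above; `validate_pipeline_config` is a hypothetical helper for illustration, not part of the agent:

```python
import json

# Required properties per the pipeline config table above
REQUIRED = ["source", "pipeline_id", "measurement_name", "value", "dimensions"]

def validate_pipeline_config(raw):
    """Parse a pipeline config JSON string and check the required keys;
    returns the parsed dict or raises ValueError."""
    cfg = json.loads(raw)
    missing = [k for k in REQUIRED if k not in cfg]
    if missing:
        raise ValueError(f"missing required properties: {missing}")
    return cfg

cfg = validate_pipeline_config("""{
  "source": "test_influx",
  "pipeline_id": "test_influx_file_short",
  "measurement_name": "cpu_test",
  "value": ["usage_active", "usage_idle"],
  "dimensions": ["cpu", "host", "zone"]
}""")
print(cfg["pipeline_id"])  # → test_influx_file_short
```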