[documentation] upd tcp and kafka plugin docs #1312

Merged
5 changes: 3 additions & 2 deletions pipeline/inputs/tcp.md
@@ -6,15 +6,16 @@ The **tcp** input plugin lets you retrieve structured JSON or raw messages over

The plugin supports the following configuration parameters:

| Key | Description | Default |
| ------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| Key | Description | Default |
| ------------ | ----------- | ------- |
| Listen | Listener network interface. | 0.0.0.0 |
| Port | TCP port to listen for connections on. | 5170 |
| Buffer\_Size | Specify the maximum buffer size in KB to receive a JSON message. If not set, the default size is the value of _Chunk\_Size_. | |
| Chunk\_Size | By default, the buffer that stores incoming JSON messages doesn't allocate its maximum allowed memory up front; instead, it allocates memory as required, in rounds of _Chunk\_Size_ KB. If not set, _Chunk\_Size_ defaults to 32 (32KB). | 32 |
| Format | Specify the expected payload format. Supported options are _json_ and _none_. When set to _json_, the plugin expects JSON maps; when set to _none_, it splits every record using the defined _Separator_ (option below). | json |
| Separator | When _Format_ is set to _none_, Fluent Bit needs a separator string to split the records. By default it uses the line feed character (LF, 0x0A). | |
| Source\_Address\_Key| Specify the key where the source address will be injected. | |
| Threaded | Improve data ingestion performance by letting Fluent Bit handle incoming data in parallel across multiple dedicated threads. | off |

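Putting the parameters above together, a minimal classic-mode configuration might look like the following sketch. All keys are taken from the table above; the values (port, separator, key name, and the `stdout` output used to see the results) are illustrative choices, not recommendations:

```text
[INPUT]
    Name               tcp
    Listen             0.0.0.0
    Port               5170
    Chunk_Size         32
    Buffer_Size        64
    Format             none
    Separator          ,
    Source_Address_Key source_host
    Threaded           on

[OUTPUT]
    Name               stdout
    Match              *
```

With `Format none` and `Separator ,`, a payload such as `a,b,c` would be split into three records; with `Format json` the plugin would instead expect each payload to be a JSON map.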
## Getting Started

1 change: 1 addition & 0 deletions pipeline/inputs/udp.md
@@ -15,6 +15,7 @@ The plugin supports the following configuration parameters:
| Format | Specify the expected payload format. Supported options are _json_ and _none_. When set to _json_, the plugin expects JSON maps; when set to _none_, it splits every record using the defined _Separator_ (option below). | json |
| Separator | When _Format_ is set to _none_, Fluent Bit needs a separator string to split the records. By default it uses the line feed character (LF, 0x0A). | |
| Source\_Address\_Key| Specify the key where the source address will be injected. | |
| Threaded | Improve data ingestion performance by letting Fluent Bit handle incoming data in parallel across multiple dedicated threads. | off |

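The udp input takes the same shape of configuration. A minimal sketch, assuming the parameters shown in the table above (the port and key name are illustrative values, not defaults to rely on):

```text
[INPUT]
    Name               udp
    Listen             0.0.0.0
    Port               5170
    Format             json
    Source_Address_Key source_host
    Threaded           on

[OUTPUT]
    Name               stdout
    Match              *
```

With `Source_Address_Key` set, each record gains a `source_host` field carrying the sender's address, which is useful when many hosts send to the same listener.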
## Getting Started

1 change: 1 addition & 0 deletions pipeline/outputs/kafka.md
@@ -18,6 +18,7 @@ Kafka output plugin lets you ingest your records into an [Apache Kafka](https:/
| queue\_full\_retries | Fluent Bit queues data into the rdkafka library; if for some reason the underlying library cannot flush the records, the queue can fill up and block the addition of new records. The `queue_full_retries` option sets the number of local retries to enqueue the data. The default value is 10 retries, with a 1 second interval between each retry. Setting `queue_full_retries` to `0` sets an unlimited number of retries. | 10 |
| rdkafka.{property} | `{property}` can be any [librdkafka properties](https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md) | |
| raw\_log\_key | When using the raw format and set, the value of raw\_log\_key in the record will be sent to Kafka as the payload. | |
| workers | The number of worker threads for this output. Improves throughput by enabling concurrent processing and transmission of data to the Kafka broker. | 0 |

> Setting `rdkafka.log.connection.close` to `false` and `rdkafka.request.required.acks` to `1` are examples of recommended settings for librdkafka properties.

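Combining the parameters above with the recommended librdkafka settings, a configuration might look like the sketch below. `Brokers` and `Topics` are the plugin's standard connection options (not shown in this diff), and the broker address, topic name, and worker count are hypothetical values for illustration:

```text
[OUTPUT]
    Name                          kafka
    Match                         *
    Brokers                       192.168.1.3:9092
    Topics                        test
    queue_full_retries            10
    workers                       2
    rdkafka.log.connection.close  false
    rdkafka.request.required.acks 1
```

Any `rdkafka.{property}` key is passed straight through to librdkafka, so the full set of librdkafka tunables is available without new plugin options.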