
lokiexporter: Empty logs sent from otel collector #2529

Closed
gillg opened this issue Mar 2, 2021 · 28 comments
Labels: bug (Something isn't working), exporter/loki (Loki Exporter)

Comments

@gillg
Contributor

gillg commented Mar 2, 2021

Describe the bug
The otel collector receives logs from fluent-bit, processes them as a batch, and exports them to the console + loki.
On the console everything looks good, but on loki they are "empty".

[screenshot]
All my logs have no "body" and no attributes.

Console exporter output:

LogRecord #7
Timestamp: 1614706050024387900
Severity:
ShortName:
Body: <Unknown OpenTelemetry attribute value type "NULL">
Attributes:
     -> RecordNumber: INT(134181)
     -> TimeGenerated: STRING(2021-03-02 18:27:29 +0100)
     -> TimeWritten: STRING(2021-03-02 18:27:29 +0100)
     -> EventID: INT(403)
     -> EventType: STRING(Information)
     -> EventCategory: INT(4)
     -> Channel: STRING(Windows PowerShell)
     -> SourceName: STRING(PowerShell)
     -> ComputerName: STRING([OBFUSCATED])
     -> Data: STRING()
     -> Sid: STRING()
     -> Message: STRING(Engine state is changed from Available to Stopped.

Details:
        NewEngineState=Stopped
        PreviousEngineState=Available

        SequenceNumber=15
[OBFUSCATED]
)
     -> StringInserts: STRING([OBFUSCATED])
     -> hostname: STRING([OBFUSCATED])
     -> instance: STRING([OBFUSCATED])
     -> job: STRING(winlog)
     -> fluent.tag: STRING(winevent.log)

Steps to reproduce

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
  fluentforward:
    endpoint: 0.0.0.0:8006

exporters:
  logging:
  loki:
    endpoint: http://loki:3100/loki/api/v1/push
    # Whitelist of labels. If you try to push an unlisted label the entire log will be dropped
    labels:
      attributes:
        severity: ""
        Type: "severity"
        EventType: ""
        hostname: "instance"
        instance: ""
        job: ""
        #fluent.tag: "job"

processors:
  batch:

extensions:
  health_check:

service:
  extensions: [health_check]
  pipelines:
    logs:
      receivers: [otlp, fluentforward]
      processors: [batch]
      exporters: [logging, loki]

What did you expect to see?
Logs on loki...
Example with fluent-bit > fluentd > loki
[screenshot]

What did you see instead?
See above

What version did you use?
Loki : grafana/loki:master (2.1.0+)
Otel collector: otel/opentelemetry-collector-contrib:latest

@gillg gillg added the bug Something isn't working label Mar 2, 2021
@gillg
Contributor Author

gillg commented Mar 2, 2021

Maybe we should reuse the mechanism from the fluentd loki plugin:
https://github.com/grafana/loki/blob/master/cmd/fluentd/lib/fluent/plugin/out_loki.rb#L151

I have the feeling that otel doesn't send any "OTEL attributes" to loki.

@gillg
Contributor Author

gillg commented Mar 2, 2021

Hmm... I understand now: the FluentForward protocol doesn't seem to have a "raw" log body.
It contains only attributes, but one attribute can be "message" and contain the raw log.
The OTEL collector makes this assumption during collection: https://github.com/open-telemetry/opentelemetry-collector/blob/73db88faef0a99dfb3317227052fea5299d52ec5/receiver/fluentforwardreceiver/conversion.go#L164

In my case, I use the Winlog input (https://github.com/fluent/fluent-bit/tree/master/plugins/in_winlog) and this plugin parses the Windows event log then forwards all fields without adding a custom message field.
A serialized view of each entry (in JSON, for example) would be the real log.
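To illustrate what I mean by a serialized body, here is a minimal sketch (plain Go types only, not the real pdata API of the receiver; the helper name and signature are made up for the example): when no "message" field is present, the whole attribute map is marshalled and used as the body.

package sketch

import "encoding/json"

// bodyFromAttrs is a hypothetical helper: it returns the "message" attribute
// when present, otherwise a JSON serialization of all forwarded fields.
func bodyFromAttrs(attrs map[string]interface{}) (string, error) {
	if msg, ok := attrs["message"].(string); ok && msg != "" {
		return msg, nil
	}
	serialized, err := json.Marshal(attrs)
	if err != nil {
		return "", err
	}
	return string(serialized), nil
}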

So, I have some questions to discuss.

  1. Do we have to add a body with all attributes serialized in this else branch? https://github.com/open-telemetry/opentelemetry-collector/blob/73db88faef0a99dfb3317227052fea5299d52ec5/receiver/fluentforwardreceiver/conversion.go#L175
    Arbitrarily in JSON? What about the case where you have "message" + other attributes?

  2. Do we have to force a non-empty body here?

    func convertLogToLokiEntry(lr pdata.LogRecord) *logproto.Entry {

    It seems useful to avoid inconsistent / useless entries on the Loki server. Loki has no concept of "attributes" in the database or during log push. You have "labels" which are used to query logs, but the body is raw and can be parsed at query time to extract attributes on the fly. Natively it supports json / logfmt / custom regex.
    I think we should always create a new serialized message combining all attributes and the potential current body in a new attribute such as body (for example); see the sketch below. If we don't do that, all attributes enriched by the collector will be lost on the loki side.
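For question 2, a simplified sketch of the kind of fallback I have in mind on the exporter side (illustrative only: local types instead of the real pdata / logproto signatures used by the exporter):

package sketch

import (
	"encoding/json"
	"time"
)

// lokiEntry mirrors the shape of a Loki push entry (timestamp + raw line);
// the real exporter uses logproto.Entry, this local type is only for the sketch.
type lokiEntry struct {
	Timestamp time.Time
	Line      string
}

// toLokiEntry merges the record body and its attributes into one JSON line
// when the body alone would lose information, and keeps the raw body otherwise.
func toLokiEntry(ts time.Time, body string, attrs map[string]interface{}) (lokiEntry, error) {
	if body != "" && len(attrs) == 0 {
		return lokiEntry{Timestamp: ts, Line: body}, nil
	}
	merged, err := json.Marshal(map[string]interface{}{"body": body, "attributes": attrs})
	if err != nil {
		return lokiEntry{}, err
	}
	return lokiEntry{Timestamp: ts, Line: string(merged)}, nil
}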

@gillg
Contributor Author

gillg commented Mar 3, 2021

FluentForward receiver related issue here: #14718

gillg added a commit to gillg/opentelemetry-collector-contrib that referenced this issue Mar 3, 2021
@gramidt
Member

gramidt commented Mar 3, 2021

Thank you for filing this issue, @gillg!

Hmm... I understand now: the FluentForward protocol doesn't seem to have a "raw" log body.
It contains only attributes, but one attribute can be "message" and contain the raw log.
The OTEL collector makes this assumption during collection: https://github.com/open-telemetry/opentelemetry-collector/blob/73db88faef0a99dfb3317227052fea5299d52ec5/receiver/fluentforwardreceiver/conversion.go#L164

In my case, I use the Winlog input (https://github.com/fluent/fluent-bit/tree/master/plugins/in_winlog) and this plugin parses the Windows event log then forwards all fields without adding a custom message field.
A serialized view of each entry (in JSON, for example) would be the real log.

So, I have some questions to discuss.

  1. Do we have to add a body with all attributes serialized in this else branch? https://github.com/open-telemetry/opentelemetry-collector/blob/73db88faef0a99dfb3317227052fea5299d52ec5/receiver/fluentforwardreceiver/conversion.go#L175
    Arbitrarily in JSON? What about the case where you have "message" + other attributes?

I can't speak to the 'fluentforwardreceiver', but I will find someone who can.

  2. Do we have to force a non-empty body here?

    func convertLogToLokiEntry(lr pdata.LogRecord) *logproto.Entry {

    It seems useful to avoid inconsistent / useless entries on the Loki server. Loki has no concept of "attributes" in the database or during log push. You have "labels" which are used to query logs, but the body is raw and can be parsed at query time to extract attributes on the fly. Natively it supports json / logfmt / custom regex.
    I think we should always create a new serialized message combining all attributes and the potential current body in a new attribute such as body (for example). If we don't do that, all attributes enriched by the collector will be lost on the loki side.

I was torn on this when implementing the exporter. Based on the collector design, it is the responsibility of the receiver to properly format the data into the format collector pipelines understand. In this particular case, we're relying on the 'fluentforwardreceiver' to ensure that the log body is properly filled prior to sending the data through the remainder of the pipeline. As you mentioned, it currently looks for "message" or "key" when creating the log body. While most of the time the data sent from Fluentd/Fluentbit contains one of these fields, it is not always the case (per your experience). This can be resolved upstream within Fluentd by transforming the data prior to sending it to the collector, but sometimes operators don't have control over that particular configuration / subsystem.

Do you feel that a mechanism within the 'fluentforwardreceiver' that allowed you to specify how to fill the log message when one isn't present would solve the empty log message issue for you? ( #14718) Or would you like an option within the Loki exporter to dynamically fill the log message with attributes if one is not present? Or something else?

As a Loki user, I would also love to get your thoughts on #2290 .

@gillg
Contributor Author

gillg commented Mar 3, 2021

@gramidt Thanks for your answer. I tried to work around it on the fluent-bit side... but after a long battle I gave up and decided to learn Go to make my PR ^^
In fact, there is no way to serialize all Fluent fields into one field. And the Windows event log case is a good example of a well-structured log: the "log" is the serialization of all fields, if we compare it with a classic text entry.
In terms of the OTEL spec, the distinction between log body / attributes seems obscure. Either we consider them redundant information (one structured and one unstructured), or we consider attributes an enrichment/complement of the log body.

  • In the first case, we could prefer attributes as the raw log on the final exporter if it doesn't support both.
  • In the second case, we should join attributes + body (as in my PR) to avoid losing information if the final exporter doesn't support both.

On the OTLP exporter (for example), as it handles attributes + body, we have nothing to merge or drop.
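To make the second option concrete, the resulting Loki line for the Windows event above could look something like this (shape only, with a reduced set of the fields shown earlier):

{"body":"Engine state is changed from Available to Stopped.","attributes":{"EventID":403,"Channel":"Windows PowerShell","SourceName":"PowerShell","job":"winlog"}}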

@gramidt
Member

gramidt commented Mar 3, 2021

Thank you for the prompt followup, @gillg!

  • In the first case, we could prefer attributes as the raw log on the final exporter if it doesn't support both.
  • In the second case, we should join attributes + body (as in my PR) to avoid losing information if the final exporter doesn't support both.

On the OTLP exporter (for example), as it handles attributes + body, we have nothing to merge or drop.

Hmm... I'm going to need to think through this a bit more. At this time, I feel that if manipulation is being performed to properly fill the log data, it must be done within the corresponding receiver or maybe even a processor. There are cases where it would make sense to do certain transformations on the exporter side, but only when it's a requirement of that particular destination and not a shared requirement across multiple exporters. In this particular case, I personally believe the 'fluentforwardreceiver' could be updated to provide the necessary configuration to fill the log body however you see fit, so that any downstream exporter can rely on the log without having to re-implement logic to fill the log body.
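The kind of receiver-side knob I have in mind would look roughly like this (purely hypothetical — the fluentforwardreceiver has no such option today, and the option name is made up):

receivers:
  fluentforward:
    endpoint: 0.0.0.0:8006
    # hypothetical option: how to build the log body when the forwarded record
    # has no "message"/"log" field
    body_fallback: serialize_attributes_json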

@gramidt
Member

gramidt commented Mar 3, 2021

@bogdandrutu @tigrannajaryan - Could you assign this to me?

@gillg
Contributor Author

gillg commented Mar 3, 2021

I pretty much agree, but if we consider the OTEL Collector as a data pipeline, it's acceptable for the exported log to differ from the received one. OTEL logs are defined here: https://github.com/open-telemetry/opentelemetry-collector/blob/2a743aaa117f911976049628a926a6efead5e417/internal/data/protogen/logs/v1/logs.pb.go#L268. In my opinion, if the target (after the exporter) can handle OTEL attributes it's OK, but if it can't we should adapt the log to avoid any loss of information.

What is the point of adding processors to the OTEL collector if they work on attributes and you don't export attributes to your target?

@gramidt
Member

gramidt commented Apr 5, 2021

For tracking / reference: similar issues and potential improvements to the log receivers have been discussed to help address the empty log/message field:

#2851

@gillg
Contributor Author

gillg commented Apr 13, 2021

Hello, I can't reopen my PR, but we still need to find a solution for this issue.

@gregoryfranklin
Contributor

I'm looking at a related issue where not all of the information in the LogRecord is sent to loki (only the body is currently sent, and only if it is a string).

The way I'm proposing to address this is by adding an "encoding" config parameter with possible values of "json" or "none". "none" would cover the existing behavior. "json" would JSON-encode the entire LogRecord so it would look something like:

{
  "name": "example",
  "body": "example log message",
  "traceid": "abcdef",
  "attributes": {
    "key": "value",
  }
}
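In configuration terms, the proposal would look roughly like this (a sketch of the proposed "encoding" option described above, not something the exporter supports today):

exporters:
  loki:
    endpoint: http://loki:3100/loki/api/v1/push
    # proposed parameter: "none" keeps the current behaviour,
    # "json" encodes the entire LogRecord as shown above
    encoding: json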

@alfianabdi

Hi, I think I have a similar problem.

This is the output from fluentbit:

kube.var.log.containers.ebs-csi-node-qszkk_kube-system_liveness-probe-3afec9c3e55f3324d1e48921c4f5c2cd96092fde07bc8f1dda2ae250d4fc9449.log: [
    1651634011.502081803, {"logtag"=>"F", "message"=>"I0504 03:13:31.501943       1 connection.go:153] Connecting to unix:///csi/csi.sock", 
    "kubernetes"=>{
        "pod_name"=>"ebs-csi-node-qszkk", 
        "namespace_name"=>"kube-system", 
        "container_name"=>"liveness-probe", 
        "docker_id"=>"3afec9c3e55f3324d1e48921c4f5c2cd96092fde07bc8f1dda2ae250d4fc9449", 
        "container_image"=>"k8s.gcr.io/sig-storage/livenessprobe:v2.2.0"
        }
    }
]

This is sample logging output from the collector:

Trace ID:
Span ID:
Flags: 0
LogRecord #555
Timestamp: 2022-05-04 08:25:47.129501928 +0000 UTC
Severity:
ShortName:
Body:      -> pod_name: STRING(aws-collector-7b4679c86c-74h6j)
Attributes:
    -> logtag: STRING(F)
    -> kubernetes: MAP({
        -> container_image: STRING(123456789012.dkr.ecr.ap-southeast-1.amazonaws.com/amazon/awscollector:v0.17.0-amd64)
        -> container_name: STRING(aws-collector)
        -> docker_id: STRING(e9f49a35827e67e113fcaa0907c7ffaef7a73a44796e12066191838f416f98a0)
        -> namespace_name: STRING(aws-otel)
        -> pod_name: STRING(aws-collector-7b4679c86c-74h6j)
    })
    -> fluent.tag: STRING(kube.var.log.containers.aws-collector-7b4679c86c-74h6j_aws-otel_aws-collector-e9f49a35827e67e113fcaa0907c7ffaef7a73a44796e12066191838f416f98a0.log)

Trace ID:
Span ID:
Flags: 0
LogRecord #556
Timestamp: 2022-05-04 08:25:47.129505253 +0000 UTC
Severity:
ShortName:
Body: })
Attributes:
    -> logtag: STRING(F)
    -> kubernetes: MAP({
        -> container_image: STRING(123456789012.dkr.ecr.ap-southeast-1.amazonaws.com/amazon/awscollector:v0.17.0-amd64)
        -> container_name: STRING(aws-collector)
        -> docker_id: STRING(e9f49a35827e67e113fcaa0907c7ffaef7a73a44796e12066191838f416f98a0)
        -> namespace_name: STRING(aws-otel)
        -> pod_name: STRING(aws-collector-7b4679c86c-74h6j)
    })
    -> fluent.tag: STRING(kube.var.log.containers.aws-collector-7b4679c86c-74h6j_aws-otel_aws-collector-e9f49a35827e67e113fcaa0907c7ffaef7a73a44796e12066191838f416f98a0.log)

The body part is not consistent.

@jpkrohling jpkrohling assigned jpkrohling and unassigned gramidt Jul 19, 2022
@jpkrohling
Member

I'll give this a try, but I believe it should be possible to have entries without a body. Could you confirm which format you used for this? Was it json, or was it "body"?

@github-actions
Contributor

github-actions bot commented Dec 5, 2022

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

@github-actions github-actions bot added the Stale label Dec 5, 2022
@jpkrohling
Member

@mar4uk, are you able to confirm that we are able to ingest log entries without a body in the Loki exporter?

@jpkrohling jpkrohling added exporter/loki Loki Exporter and removed Stale labels Dec 5, 2022
@jpkrohling jpkrohling removed their assignment Dec 5, 2022
@jpkrohling
Member

@mar4uk, you might have missed this in the middle of the notification storm over the holidays :-)

@mar4uk
Contributor

mar4uk commented Jan 25, 2023

Sorry for the delay, I will take a look at it soon.
You can assign it to me.

@mar4uk
Contributor

mar4uk commented Jan 30, 2023

@mar4uk, are you able to confirm that we are able to ingest log entries without a body in the Loki exporter?

Yes, we are able to ingest log entries without a body in the Loki exporter.

Config example:

receivers:
  otlp:
    protocols:
      http:

exporters:
  logging:
    verbosity: detailed

  loki:
    endpoint: "http://localhost:3100/loki/api/v1/push"
service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: []
      exporters: [logging,loki]

curl:

curl -i -X POST -H "Content-Type: application/json" "http://localhost:4318/v1/logs" -d '{
    "resource_logs": [
        {
            "resource": {
                "attributes": [
                    {
                        "key": "service.name",
                        "value":{
                            "string_value":"my-app"
                        }
                    }
                ]
            },
            "scope_logs": [
                {
                    "scope": {"version":"1"},
                    "log_records":[
                        {
                            "time_unix_nano":1675091081000000000,
                            "attributes":[
                                {
                                    "key": "event.domain",
                                    "value":{
                                        "string_value":"browser"
                                    }
                                }
                            ]
                        }
                    ]
                }
            ]
        }
    ]
}'

HTTP/1.1 200 OK

This is how the log entry is shown in Loki (no body field, log entry is correct):
[screenshot]

@mar4uk
Contributor

mar4uk commented Jan 30, 2023

@alfianabdi the example in your comment looks like a problem on the receiver side, not on the exporter side, right?
@gillg Do you think we can close this issue? It doesn't look like there is a problem on the loki exporter side, or have I misunderstood something?

@jpkrohling
Member

Perhaps this is related to open-telemetry/opentelemetry-collector#7009 ?

@Prims47

Prims47 commented Feb 14, 2023

Hello :)

I have a similar issue to @alfianabdi's.

This is my config:

Kind config

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.21.14@sha256:9d9eb5fb26b4fbc0c6d95fa8c790414f9750dd583f5d7cee45d92e8c26670aa1
- role: worker
  image: kindest/node:v1.21.14@sha256:9d9eb5fb26b4fbc0c6d95fa8c790414f9750dd583f5d7cee45d92e8c26670aa1
- role: worker
  image: kindest/node:v1.21.14@sha256:9d9eb5fb26b4fbc0c6d95fa8c790414f9750dd583f5d7cee45d92e8c26670aa1
- role: worker
  image: kindest/node:v1.21.14@sha256:9d9eb5fb26b4fbc0c6d95fa8c790414f9750dd583f5d7cee45d92e8c26670aa1

I installed fluent bit / opentelemetry / loki via Helm charts.

Fluent bit Configuration

Image: 2.0.9-debug

[SERVICE]
        Daemon Off
        Flush 5
        Log_Level debug
        Parsers_File parsers.conf
        Parsers_File custom_parsers.conf
        HTTP_Server On
        HTTP_Listen 0.0.0.0
        HTTP_Port 2020
        Health_Check On
        
[INPUT]
        Name tail
        Path /var/log/containers/*.log
        Tag kube.*
        Mem_Buf_Limit 5MB
        Skip_Long_Lines On
        Parser docker
        
[FILTER]
        Name kubernetes
        Match kube.*
        Merge_Log On
        Keep_Log Off
        K8S-Logging.Parser  On
        K8S-Logging.Exclude On
        Buffer_Size 32K
        Merge_Log_Key log_processed
        Kube_URL  https://kubernetes.default.svc.cluster.local:443
        
[OUTPUT]
       Name         opentelemetry
       Match        *
       Host           my-collector-collector.opentelemetry-operator-system.svc.cluster.local
       Port            4318
       Logs_uri     /v1/logs
       batch_size 128
       add_label   app fluent-bit

Opentelemetry mode daemonset

Image: v0.67.0

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: my-collector
  namespace: opentelemetry-operator-system
spec:
  mode: daemonset
  config: |
    receivers:
      otlp:
        protocols:
          http:
    processors:

    exporters:
      logging:
        loglevel: debug

      loki:
        endpoint: "http://loki.monitoring.svc.cluster.local:3100/loki/api/v1/push"

    service:
      pipelines:
        logs:
          receivers: [otlp]
          processors: []
          exporters: [logging, loki]

Simple pod

apiVersion: v1
kind: Pod
metadata:
  name: example
  namespace: monitoring
spec:
  containers:
  - name: example
    image: alpine
    args: [/bin/sh, -c, 'while true; do echo hello $(date); sleep 3; done']

Fluent bit Log

[0] kube.var.log.containers.example_monitoring_example-9fd0cf198aa38e7aba94eaa3bb121a86dd5c4c3fe2c9027886d3cc006d489704.log: [1676382423.552544485, {"log"=>"2023-02-14T13:47:03.55244747Z stdout F hello Tue Feb 14 13:47:03 UTC 2023", "kubernetes"=>{"pod_name"=>"example", "namespace_name"=>"monitoring", "pod_id"=>"29a1f774-17d2-4baf-b68a-b93f93a774bc", "annotations"=>{"kubectl.kubernetes.io/last-applied-configuration"=>"{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"example","namespace":"monitoring"},"spec":{"containers":[{"args":["/bin/sh","-c","while true; do echo hello $(date); sleep 3; done"],"image":"alpine","name":"example"}]}}
"}, "host"=>"kind-worker2", "container_name"=>"example", "docker_id"=>"9fd0cf198aa38e7aba94eaa3bb121a86dd5c4c3fe2c9027886d3cc006d489704", "container_hash"=>"docker.io/library/alpine@sha256:69665d02cb32192e52e07644d76bc6f25abeb5410edc1c7a81a10ba3f0efb90a", "container_image"=>"docker.io/library/alpine:latest"}}]
[2023/02/14 13:47:04] [ info] [output:opentelemetry:opentelemetry.0] my-collector-collector.opentelemetry-operator-system.svc.cluster.local:4318, HTTP status=200


[0] kube.var.log.containers.example_monitoring_example-9fd0cf198aa38e7aba94eaa3bb121a86dd5c4c3fe2c9027886d3cc006d489704.log: [1676382426.554653497, {"log"=>"2023-02-14T13:47:06.554499265Z stdout F hello Tue Feb 14 13:47:06 UTC 2023", "kubernetes"=>{"pod_name"=>"example", "namespace_name"=>"monitoring", "pod_id"=>"29a1f774-17d2-4baf-b68a-b93f93a774bc", "annotations"=>{"kubectl.kubernetes.io/last-applied-configuration"=>"{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"example","namespace":"monitoring"},"spec":{"containers":[{"args":["/bin/sh","-c","while true; do echo hello $(date); sleep 3; done"],"image":"alpine","name":"example"}]}}
"}, "host"=>"kind-worker2", "container_name"=>"example", "docker_id"=>"9fd0cf198aa38e7aba94eaa3bb121a86dd5c4c3fe2c9027886d3cc006d489704", "container_hash"=>"docker.io/library/alpine@sha256:69665d02cb32192e52e07644d76bc6f25abeb5410edc1c7a81a10ba3f0efb90a", "container_image"=>"docker.io/library/alpine:latest"}}]
[2023/02/14 13:47:07] [ info] [output:opentelemetry:opentelemetry.0] my-collector-collector.opentelemetry-operator-system.svc.cluster.local:4318, HTTP status=200

Opentelemetry logs

2023-02-14T13:47:28.183Z	info	LogsExporter	{"kind": "exporter", "data_type": "logs", "name": "logging", "#logs": 1}
2023-02-14T13:47:28.183Z	info	ResourceLog #0
Resource SchemaURL:
ScopeLogs #0
ScopeLogs SchemaURL:
InstrumentationScope
LogRecord #0
ObservedTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2023-02-14 13:47:27.54998508 +0000 UTC
SeverityText:
SeverityNumber: Unspecified(0)
Body: Map({"kubernetes":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"name\":\"example\",\"namespace\":\"monitoring\"},\"spec\":{\"containers\":[{\"args\":[\"/bin/sh\",\"-c\",\"while true; do echo hello $(date); sleep 3; done\"],\"image\":\"alpine\",\"name\":\"example\"}]}}\n"},"container_hash":"docker.io/library/alpine@sha256:69665d02cb32192e52e07644d76bc6f25abeb5410edc1c7a81a10ba3f0efb90a","container_image":"docker.io/library/alpine:latest","container_name":"example","docker_id":"9fd0cf198aa38e7aba94eaa3bb121a86dd5c4c3fe2c9027886d3cc006d489704","host":"kind-worker2","namespace_name":"monitoring","pod_id":"29a1f774-17d2-4baf-b68a-b93f93a774bc","pod_name":"example"},"log":"2023-02-14T13:47:27.549838646Z stdout F hello Tue Feb 14 13:47:27 UTC 2023"})
Trace ID:
Span ID:
Flags: 0

	{"kind": "exporter", "data_type": "logs", "name": "logging"}
2023-02-14T13:47:31.193Z	info	LogsExporter	{"kind": "exporter", "data_type": "logs", "name": "logging", "#logs": 1}
2023-02-14T13:47:31.193Z	info	ResourceLog #0
Resource SchemaURL:
ScopeLogs #0
ScopeLogs SchemaURL:
InstrumentationScope
LogRecord #0
ObservedTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2023-02-14 13:47:30.553090592 +0000 UTC
SeverityText:
SeverityNumber: Unspecified(0)
Body: Map({"kubernetes":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"name\":\"example\",\"namespace\":\"monitoring\"},\"spec\":{\"containers\":[{\"args\":[\"/bin/sh\",\"-c\",\"while true; do echo hello $(date); sleep 3; done\"],\"image\":\"alpine\",\"name\":\"example\"}]}}\n"},"container_hash":"docker.io/library/alpine@sha256:69665d02cb32192e52e07644d76bc6f25abeb5410edc1c7a81a10ba3f0efb90a","container_image":"docker.io/library/alpine:latest","container_name":"example","docker_id":"9fd0cf198aa38e7aba94eaa3bb121a86dd5c4c3fe2c9027886d3cc006d489704","host":"kind-worker2","namespace_name":"monitoring","pod_id":"29a1f774-17d2-4baf-b68a-b93f93a774bc","pod_name":"example"},"log":"2023-02-14T13:47:30.552787891Z stdout F hello Tue Feb 14 13:47:30 UTC 2023"})
Trace ID:
Span ID:
Flags: 0
	{"kind": "exporter", "data_type": "logs", "name": "logging"}

It's strange, because Fluentbit seems to send the data to opentelemetry but not formatted correctly? (No attributes)

This is how the log entry is shown in Loki:
[screenshot]

Do you think I need to play with the lokiexporter via resource / attributes?

It would be great to filter my logs by container / pod...

@mar4uk
Contributor

mar4uk commented Feb 14, 2023

@Prims47

It's strange, because Fluentbit seems to send the data to opentelemetry but not formatted correctly? (No attributes)

Yes, it seems so. All the data is sent within the body.

It would be great to filter my logs by container / pod...

You can filter your logs using the json parser. The query will look something like this:
{exporter="OTLP"} | json | body_kubernetes_container_name= `example`
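And the same flattened-key pattern should work for the pod name (assuming the body is parsed by the json stage as above):
{exporter="OTLP"} | json | body_kubernetes_pod_name=`example`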

@mar4uk
Contributor

mar4uk commented Feb 15, 2023

Perhaps this is related to open-telemetry/opentelemetry-collector#7009 ?

I don't think it is related. That issue is about sending an empty body to the collector endpoint, whereas this issue is about sending a Loki entry with an empty body.

The current logic of the loki exporter is correct: if the loki exporter gets a LogRecord with an empty Body, it sends a loki entry with an empty body to Loki. The responsibility for filling the LogRecord Body is on the receiver or processor side.
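On the processor side, something like the transform processor can promote an attribute into the body before the exporter runs (a minimal sketch, assuming the record carries a "message" attribute as in the fluentforward case):

processors:
  transform:
    log_statements:
      - context: log
        statements:
          # copy the "message" attribute into the log body
          - set(body, attributes["message"])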

I think this issue could be closed.

@Prims47

Prims47 commented Feb 20, 2023

Thanks @mar4uk 😄

Now I need to check why the label I add inside my Fluentbit conf doesn't work.

@mar4uk
Contributor

mar4uk commented Apr 3, 2023

@gillg the loki exporter was updated, and new formats were added: json, logfmt, raw. They probably solve the initial issue.
Could you please confirm if the issue was solved? Can I close it?
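If I remember correctly, the format is selected per record via a loki.format hint attribute rather than an exporter option, for example set via the attributes processor (treat the exact key and values as an assumption and check the exporter README):

processors:
  attributes:
    actions:
      # hint consumed by the loki exporter; "json" serializes body + attributes,
      # "logfmt" and "raw" are the other formats mentioned above
      - action: insert
        key: loki.format
        value: json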

@gillg
Contributor Author

gillg commented Apr 3, 2023

I think we are good indeed.
But I still think the fluent receiver or wineventlog should be rewritten to follow the logs guidelines and fill attributes instead of a structured body... But that's another subject.

@gillg gillg closed this as completed Apr 3, 2023
@mar4uk
Contributor

mar4uk commented Apr 4, 2023

It seems that for the fluent receiver the issue was created some time ago: #2870

@atoulme
Contributor

atoulme commented Apr 4, 2023

Help and patches are very welcome. Happy to review if you have improvements.
