Requests failing with 503 errors in OTel Load Balancer, no traces in logs #35512

Open

bvsvas opened this issue Oct 1, 2024 · 7 comments

Labels: bug (Something isn't working), exporter/loadbalancing

bvsvas commented Oct 1, 2024

Component(s)

exporter/loadbalancing

What happened?

Description

Problem:
We are observing intermittent 503 errors at the OTel Load Balancer pod. No logs are generated for these failures, but internal telemetry (http_server_request_size) shows metrics with HTTP status code 503 from the OTel Load Balancer. The error is visible in the OTel Demo app's collector.

Error Message (from OTel Demo app's otelcol pod):
Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "metrics", "name": "otlphttp/withauth", "error": "Throttle (0s), error: rpc error: code = Unavailable desc = error exporting items, request to http://otel-gateway.<IP>.nip.io:80/v1/metrics responded with HTTP Status Code 503, Message=unable to get service name, Details=[]", "interval": "27.51009548s"}

Steps to Reproduce

OTel Demo > API Gateway (Internal) > OTel Load Balancer > OTel Collector
We are using a single replica of both the OTel Load Balancer and OTel Collector for testing purposes.
Note: Nginx Ingress can be used in place of the internal API Gateway.

OTel Load Balancer Configuration:

  • OTLP Receiver (http/grpc)
  • Resource Attribute Processor
  • LoadBalancer Exporter (using the k8s resolver with the proper role and role bindings; see the RBAC sketch below)
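
For context, a minimal sketch of the kind of Role/RoleBinding we mean for the k8s resolver (names are placeholders; the namespace matches the one reported by the resolver in the logs below; this assumes the resolver only needs to read and watch the Endpoints of the target Service):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: otel-lb-resolver          # placeholder name
  namespace: dxi                  # namespace reported by the resolver in the logs below
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: otel-lb-resolver
  namespace: dxi
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: otel-lb-resolver
subjects:
  - kind: ServiceAccount
    name: otel-loadbalancer       # placeholder: the service account the LB pod runs as
    namespace: dxi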

OTel Collector Configuration:

  • OTLP Receiver (http/grpc)
  • Memory Limiter Processor
  • Custom Exporter

Configuration Details:
The OTel LoadBalancer and OTel Collector configurations are attached in the configuration section below.

Note: We can see it works when we send a request manually using Postman.

Expected Result

All requests should be processed without error, as the downstream OTel Collector is available, up, and running. The OTel Demo app should report properly.

Actual Result

A few requests fail with 503 without any error logged by the loadbalancing exporter.

Collector version

v0.109.0

Environment information

Environment

OS: CentOS v8
GoLang: 1.23
OTel Collector: v0.109.0

OpenTelemetry Collector configuration

## OTel LoadBalancer Config:

extensions:
  health_check:
    endpoint: ${env:OTEL_POD_IP}:13133
  pprof:
    endpoint: :1777
exporters:
  loadbalancing:
    routing_key: "service"
    protocol:
      otlp:
        timeout: 60s
        tls:
          insecure: true
          insecure_skip_verify: true
        read_buffer_size: 2048576
        write_buffer_size: 2048576
        keepalive:
          time: 600m
          timeout: 30s
        sending_queue:
          enabled: true
          num_consumers: 100
          queue_size: 300000
        retry_on_failure:
          enabled: true
          initial_interval: 30s
          max_interval: 120s
          max_elapsed_time: 15m
    resolver:
      k8s:
        service: otelcol-ingester-service
        ports:
        - 4317
processors:
  memory_limiter:
    limit_mib: 6500
  resource:
    attributes:
    - key: authorization
      from_context: authorization
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: ${env:OTEL_POD_IP}:4317
        max_recv_msg_size_mib: 32
        max_concurrent_streams: 32
        read_buffer_size: 2048576
        write_buffer_size: 2048576
        keepalive:
          server_parameters:
            time: 600m
            timeout: 30s
      http:
        endpoint: ${env:OTEL_POD_IP}:4318
service:
  telemetry:
    logs:
      level: debug
      output_paths: 
      - stdout
    metrics:
      level: detailed
      address: ":8888"
  extensions:
    - health_check
    - pprof
  pipelines:
    metrics:
      exporters:
        - loadbalancing
      processors:
        - memory_limiter
        - resource
      receivers:
        - otlp
        - prometheus
    traces:
      exporters:
        - loadbalancing
      processors:
        - memory_limiter
        - resource
      receivers:
        - otlp


## OTel Collector Config:

extensions:
  health_check:
    endpoint: ${env:OTEL_POD_IP}:13133
exporters:
  apmexporter:
    endpoint: http://apmservices-gateway:8004
processors:
  memory_limiter:
    limit_mib: 6500
  probabilistic_sampler:
    sampling_percentage: 30
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: ${env:OTEL_POD_IP}:4317
      http:
        endpoint: ${env:OTEL_POD_IP}:4318
service:
  telemetry:
    logs:
      level: info
      output_paths: 
      - stdout
    metrics:
      level: detailed
      address: ":8888"
  extensions:
    - health_check
  pipelines:
    metrics:
      exporters:
        - apmexporter
      processors:
        - memory_limiter
        - cumulativetodelta
      receivers:
        - otlp
        - prometheus
    traces:
      exporters:
        - apmexporter
      processors:
        - memory_limiter
      receivers:
        - otlp 

Log output

2024-10-01T09:47:02.156Z        info    [email protected]/service.go:129 Setting up own telemetry...
2024-10-01T09:47:02.156Z        warn    [email protected]/service.go:196 service::telemetry::metrics::address is being deprecated in favor of service::telemetry::metrics::readers
2024-10-01T09:47:02.156Z        info    [email protected]/telemetry.go:98        Serving metrics {"address": ":8888", "metrics level": "Detailed"}
2024-10-01T09:47:02.156Z        info    builders/builders.go:26 Development component. May change in the future.        {"kind": "exporter", "data_type": "metrics", "name": "loadbalancing"}
2024-10-01T09:47:02.158Z        info    [email protected]/resolver_k8s.go:94       the namespace for the Kubernetes service wasn't provided, trying to determine the current namespace     {"kind": "exporter", "data_type": "metrics", "name": "loadbalancing", "resolver": "k8s service", "name": "dx-otelcol-ingester-service"}
2024-10-01T09:47:02.159Z        info    [email protected]/resolver_k8s.go:97       namespace for the Collector determined  {"kind": "exporter", "data_type": "metrics", "name": "loadbalancing", "resolver": "k8s service", "namespace": "dxi"}
2024-10-01T09:47:02.159Z        debug   builders/builders.go:24 Beta component. May change in the future.       {"kind": "processor", "name": "resource", "pipeline": "metrics"}
2024-10-01T09:47:02.159Z        debug   builders/builders.go:24 Beta component. May change in the future.       {"kind": "processor", "name": "memory_limiter", "pipeline": "metrics"}
2024-10-01T09:47:02.159Z        info    memorylimiter/memorylimiter.go:75       Memory limiter configured       {"kind": "processor", "name": "memory_limiter", "pipeline": "metrics", "limit_mib": 6500, "spike_limit_mib": 700, "check_interval": 0.5}
2024-10-01T09:47:02.159Z        debug   builders/builders.go:24 Stable component.       {"kind": "receiver", "name": "otlp", "data_type": "metrics"}
2024-10-01T09:47:02.159Z        debug   builders/builders.go:24 Beta component. May change in the future.       {"kind": "exporter", "data_type": "traces", "name": "loadbalancing"}
2024-10-01T09:47:02.160Z        info    [email protected]/resolver_k8s.go:94       the namespace for the Kubernetes service wasn't provided, trying to determine the current namespace     {"kind": "exporter", "data_type": "traces", "name": "loadbalancing", "resolver": "k8s service", "name": "dx-otelcol-ingester-service"}
2024-10-01T09:47:02.160Z        info    [email protected]/resolver_k8s.go:97       namespace for the Collector determined  {"kind": "exporter", "data_type": "traces", "name": "loadbalancing", "resolver": "k8s service", "namespace": "dxi"}
2024-10-01T09:47:02.160Z        debug   builders/builders.go:24 Beta component. May change in the future.       {"kind": "processor", "name": "resource", "pipeline": "traces"}
2024-10-01T09:47:02.160Z        debug   builders/builders.go:24 Beta component. May change in the future.       {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2024-10-01T09:47:02.160Z        debug   builders/builders.go:24 Beta component. May change in the future.       {"kind": "processor", "name": "memory_limiter", "pipeline": "traces"}
2024-10-01T09:47:02.160Z        debug   builders/builders.go:24 Stable component.       {"kind": "receiver", "name": "otlp", "data_type": "traces"}
2024-10-01T09:47:02.160Z        debug   builders/extension.go:48        Beta component. May change in the future.       {"kind": "extension", "name": "health_check"}
2024-10-01T09:47:02.160Z        debug   builders/extension.go:48        Beta component. May change in the future.       {"kind": "extension", "name": "pprof"}
2024-10-01T09:47:02.163Z        info    [email protected]/service.go:213 Starting apm-otelcol... {"Version": "1.0.0", "NumCPU": 16}
2024-10-01T09:47:02.163Z        info    extensions/extensions.go:39     Starting extensions...
2024-10-01T09:47:02.163Z        info    extensions/extensions.go:42     Extension is starting...        {"kind": "extension", "name": "health_check"}
2024-10-01T09:47:02.163Z        info    [email protected]/healthcheckextension.go:33        Starting health_check extension {"kind": "extension", "name": "health_check", "config": {"Endpoint":"10.233.115.180:13133","TLSSetting":null,"CORS":null,"Auth":null,"MaxRequestBodySize":0,"IncludeMetadata":false,"ResponseHeaders":null,"CompressionAlgorithms":null,"ReadTimeout":0,"ReadHeaderTimeout":0,"WriteTimeout":0,"IdleTimeout":0,"Path":"/","ResponseBody":null,"CheckCollectorPipeline":{"Enabled":false,"Interval":"5m","ExporterFailureThreshold":5}}}
2024-10-01T09:47:02.163Z        info    extensions/extensions.go:59     Extension started.      {"kind": "extension", "name": "health_check"}
2024-10-01T09:47:02.163Z        info    extensions/extensions.go:42     Extension is starting...        {"kind": "extension", "name": "pprof"}
2024-10-01T09:47:02.164Z        info    [email protected]/pprofextension.go:61    Starting net/http/pprof server  {"kind": "extension", "name": "pprof", "config": {"TCPAddr":{"Endpoint":":1777","DialerConfig":{"Timeout":0}},"BlockProfileFraction":3,"MutexProfileFraction":5,"SaveToFile":""}}
2024-10-01T09:47:02.164Z        info    extensions/extensions.go:59     Extension started.      {"kind": "extension", "name": "pprof"}
2024-10-01T09:47:02.164Z        debug   [email protected]/resolver_k8s.go:145      creating and starting endpoints informer        {"kind": "exporter", "data_type": "metrics", "name": "loadbalancing", "resolver": "k8s service"}
2024-10-01T09:47:02.265Z        debug   [email protected]/resolver_k8s.go:160      K8s service resolver started    {"kind": "exporter", "data_type": "metrics", "name": "loadbalancing", "resolver": "k8s service", "service": "dx-otelcol-ingester-service", "namespace": "dxi", "ports": [4317], "timeout": 1}
2024-10-01T09:47:02.265Z        debug   [email protected]/resolver_k8s.go:145      creating and starting endpoints informer        {"kind": "exporter", "data_type": "traces", "name": "loadbalancing", "resolver": "k8s service"}
2024-10-01T09:47:02.366Z        debug   [email protected]/resolver_k8s.go:160      K8s service resolver started    {"kind": "exporter", "data_type": "traces", "name": "loadbalancing", "resolver": "k8s service", "service": "dx-otelcol-ingester-service", "namespace": "dxi", "ports": [4317], "timeout": 1}
2024-10-01T09:47:02.366Z        info    [email protected]/server.go:684      [core] [Server #1]Server created        {"grpc_log": true}
2024-10-01T09:47:02.366Z        info    [email protected]/otlp.go:103       Starting GRPC server    {"kind": "receiver", "name": "otlp", "data_type": "metrics", "endpoint": "10.233.115.180:4317"}
2024-10-01T09:47:02.367Z        info    [email protected]/otlp.go:153       Starting HTTP server    {"kind": "receiver", "name": "otlp", "data_type": "metrics", "endpoint": "10.233.115.180:4318"}
2024-10-01T09:47:02.367Z        info    [email protected]/server.go:880      [core] [Server #1 ListenSocket #2]ListenSocket created  {"grpc_log": true}
2024-10-01T09:47:02.371Z        info    [email protected]/metrics_receiver.go:118     Starting discovery manager      {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2024-10-01T09:47:02.373Z        info    targetallocator/manager.go:175  Scrape job added        {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "jobName": "dxotelcol"}
2024-10-01T09:47:02.373Z        debug   discovery/manager.go:296        Starting provider       {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "provider": "static/0", "subs": "map[dxotelcol:{}]"}
2024-10-01T09:47:02.373Z        info    healthcheck/handler.go:132      Health Check state change       {"kind": "extension", "name": "health_check", "status": "ready"}
2024-10-01T09:47:02.373Z        info    [email protected]/service.go:239 Everything is ready. Begin running and processing data.
2024-10-01T09:47:02.373Z        debug   discovery/manager.go:330        Discoverer channel closed       {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "provider": "static/0"}
2024-10-01T09:47:02.374Z        info    localhostgate/featuregate.go:63 The default endpoints for all servers in components have changed to use localhost instead of 0.0.0.0. Disable the feature gate to temporarily revert to the previous default.        {"feature gate ID": "component.UseLocalHostAsDefaultHost"}
2024-10-01T09:47:02.374Z        info    [email protected]/metrics_receiver.go:187     Starting scrape manager {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2024-10-01T09:47:02.660Z        debug   memorylimiter/memorylimiter.go:181      Currently used memory.  {"kind": "processor", "name": "memory_limiter", "pipeline": "metrics", "cur_mem_mib": 7}
2024-10-01T09:47:03.160Z        debug   memorylimiter/memorylimiter.go:181      Currently used memory.  {"kind": "processor", "name": "memory_limiter", "pipeline": "metrics", "cur_mem_mib": 7}
2024-10-01T09:47:03.660Z        debug   memorylimiter/memorylimiter.go:181      Currently used memory.  {"kind": "processor", "name": "memory_limiter", "pipeline": "metrics", "cur_mem_mib": 7}
2024-10-01T09:47:03.834Z        info    [email protected]/clientconn.go:162  [core] original dial target is: "10.233.108.69:4317"    {"grpc_log": true}
2024-10-01T09:47:03.834Z        info    [email protected]/clientconn.go:440  [core] [Channel #3]Channel created      {"grpc_log": true}
2024-10-01T09:47:03.834Z        info    [email protected]/clientconn.go:193  [core] [Channel #3]parsed dial target is: resolver.Target{URL:url.URL{Scheme:"dns", Opaque:"", User:(*url.Userinfo)(nil), Host:"", Path:"/10.233.108.69:4317", RawPath:"", OmitHost:false, ForceQuery:false, RawQuery:"", Fragment:"", RawFragment:""}}      {"grpc_log": true}
2024-10-01T09:47:03.834Z        info    [email protected]/clientconn.go:194  [core] [Channel #3]Channel authority set to "10.233.108.69:4317"        {"grpc_log": true}
2024-10-01T09:47:03.835Z        info    [email protected]/clientconn.go:162  [core] original dial target is: "10.233.108.69:4317"    {"grpc_log": true}
2024-10-01T09:47:03.835Z        info    [email protected]/clientconn.go:440  [core] [Channel #4]Channel created      {"grpc_log": true}
2024-10-01T09:47:03.835Z        info    [email protected]/clientconn.go:193  [core] [Channel #4]parsed dial target is: resolver.Target{URL:url.URL{Scheme:"dns", Opaque:"", User:(*url.Userinfo)(nil), Host:"", Path:"/10.233.108.69:4317", RawPath:"", OmitHost:false, ForceQuery:false, RawQuery:"", Fragment:"", RawFragment:""}}      {"grpc_log": true}
2024-10-01T09:47:03.835Z        info    [email protected]/clientconn.go:194  [core] [Channel #4]Channel authority set to "10.233.108.69:4317"        {"grpc_log": true}
2024-10-01T09:47:04.160Z        debug   memorylimiter/memorylimiter.go:181      Currently used memory.  {"kind": "processor", "name": "memory_limiter", "pipeline": "metrics", "cur_mem_mib": 25}
2024-10-01T09:47:04.660Z        debug   memorylimiter/memorylimiter.go:181      Currently used memory.  {"kind": "processor", "name": "memory_limiter", "pipeline": "metrics", "cur_mem_mib": 26}
2024/10/01 09:47:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:78)
2024-10-01T09:47:04.746Z        info    [email protected]/resolver_wrapper.go:200    [core] [Channel #3]Resolver state updated: {
  "Addresses": [
    {
      "Addr": "10.233.108.69:4317",
      "ServerName": "",
      "Attributes": null,
      "BalancerAttributes": null,
      "Metadata": null
    }
  ],
  "Endpoints": [
    {
      "Addresses": [
        {
          "Addr": "10.233.108.69:4317",
          "ServerName": "",
          "Attributes": null,
          "BalancerAttributes": null,
          "Metadata": null
        }
      ],
      "Attributes": null
    }
  ],
  "ServiceConfig": null,
  "Attributes": null
} (resolver returned new addresses)     {"grpc_log": true}
2024-10-01T09:47:04.746Z        info    [email protected]/balancer_wrapper.go:107    [core] [Channel #3]Channel switches to new LB policy "pick_first"       {"grpc_log": true}
2024-10-01T09:47:04.747Z        info    gracefulswitch/gracefulswitch.go:193    [pick-first-lb] [pick-first-lb 0xc0006a85a0] Received new config {
  "shuffleAddressList": false
}, resolver state {
  "Addresses": [
    {
      "Addr": "10.233.108.69:4317",
      "ServerName": "",
      "Attributes": null,
      "BalancerAttributes": null,
      "Metadata": null
    }
  ],
  "Endpoints": [
    {
      "Addresses": [
        {
          "Addr": "10.233.108.69:4317",
          "ServerName": "",
          "Attributes": null,
          "BalancerAttributes": null,
          "Metadata": null
        }
      ],
      "Attributes": null
    }
  ],
  "ServiceConfig": null,
  "Attributes": null
}       {"grpc_log": true}
2024-10-01T09:47:04.747Z        info    [email protected]/balancer_wrapper.go:180    [core] [Channel #3 SubChannel #5]Subchannel created     {"grpc_log": true}
2024-10-01T09:47:04.747Z        info    [email protected]/clientconn.go:544  [core] [Channel #3]Channel Connectivity change to CONNECTING    {"grpc_log": true}
2024-10-01T09:47:04.747Z        info    [email protected]/clientconn.go:345  [core] [Channel #3]Channel exiting idle mode    {"grpc_log": true}
2024-10-01T09:47:04.747Z        info    [email protected]/clientconn.go:1199 [core] [Channel #3 SubChannel #5]Subchannel Connectivity change to CONNECTING   {"grpc_log": true}
2024-10-01T09:47:04.747Z        info    [email protected]/clientconn.go:1317 [core] [Channel #3 SubChannel #5]Subchannel picks a new address "10.233.108.69:4317" to connect {"grpc_log": true}
2024-10-01T09:47:04.747Z        info    pickfirst/pickfirst.go:176      [pick-first-lb] [pick-first-lb 0xc0006a85a0] Received SubConn state update: 0xc0006a8660, {ConnectivityState:CONNECTING ConnectionError:<nil> connectedAddress:{Addr: ServerName: Attributes:<nil> BalancerAttributes:<nil> Metadata:<nil>}} {"grpc_log": true}
2024-10-01T09:47:04.748Z        info    [email protected]/clientconn.go:1199 [core] [Channel #3 SubChannel #5]Subchannel Connectivity change to READY        {"grpc_log": true}
2024-10-01T09:47:04.749Z        info    pickfirst/pickfirst.go:176      [pick-first-lb] [pick-first-lb 0xc0006a85a0] Received SubConn state update: 0xc0006a8660, {ConnectivityState:READY ConnectionError:<nil> connectedAddress:{Addr:10.233.108.69:4317 ServerName:10.233.108.69:4317 Attributes:<nil> BalancerAttributes:<nil> Metadata:<nil>}}  {"grpc_log": true}

Additional context

No response

bvsvas added the bug (Something isn't working) and needs triage (New item requiring triage) labels Oct 1, 2024

github-actions bot (Contributor) commented Oct 1, 2024

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

bvsvas (Author) commented Oct 2, 2024

It works when we remove the LB, and it mostly works with the OTel LB layer in place. The mystery is why a small percentage (around 3-5%) of requests are lost at the OTel LB even though the overall traffic flow is generally successful. The logs give no clue.

bvsvas (Author) commented Oct 5, 2024

The issue is with the routing key "service"; it works after changing the routing key to "resource".
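
For anyone hitting the same thing, a minimal sketch of the change against the loadbalancing exporter config attached above (only routing_key changes; presumably this is also why the 503s carry Message=unable to get service name):

exporters:
  loadbalancing:
    # was: routing_key: "service"  -> intermittent 503s
    routing_key: "resource"
    protocol:
      otlp:
        tls:
          insecure: true
    resolver:
      k8s:
        service: otelcol-ingester-service
        ports:
        - 4317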

atoulme removed the needs triage (New item requiring triage) label Oct 12, 2024
jpkrohling self-assigned this Nov 27, 2024

jpkrohling (Member) commented:

I'm trying to wrap my head around this issue: are you saying that the connection between the LB and the Collector is failing with 503, or that only the Demo-to-LB connection is failing? Can you provide me with metrics from the LB (localhost:8888/metrics)?

In any case, I'm adding a few debug statements to the load balancer to help diagnose this kind of issue.
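
Once those statements land, a minimal sketch of the telemetry settings that would surface them (this mirrors the logs section already present in the LB config above; the new statements are only emitted at debug level):

service:
  telemetry:
    logs:
      level: debug      # debug-level statements from the loadbalancing exporter show up here
      output_paths:
      - stdout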

Amoolaa commented Nov 28, 2024

We've had almost the same error message appear:

Exporting failed. Will retry the request after interval.  {"kind": "exporter", "data_type": "logs", "name": "otlphttp", "error": "Throttle (0s), error: rpc error: code = Unavailable desc = error exporting items, request to <my nginx ingress to loki> responded with HTTP Status Code 503", "interval": "43.026309765s"}

In this case we've replaced the API gateway with nginx, and I can see the intermittent 503s appear in nginx, but they don't bubble up to Loki, so I haven't been able to track down the cause. Just thought I'd throw that in given you mentioned using nginx.

jpkrohling (Member) commented:

Can you please provide me with the metrics? This message on its own does not indicate a problem. As it states there, the request will be retried.

Amoolaa commented Nov 28, 2024

I don't have any metrics to share (the customer has been told to turn them on; it's understandably very hard to debug without them). I'll ask again and report back here if it's still an issue.
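
For reference, a minimal sketch of what "turning them on" looks like, mirroring the telemetry settings already shown in the configs above (note the collector warns that the address key is being deprecated in favor of readers, as seen in the log output):

service:
  telemetry:
    metrics:
      level: detailed   # emits detailed internal metrics, including http_server_request_size
      address: ":8888"  # internal metrics are then served at <pod-ip>:8888/metrics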

jpkrohling added a commit that referenced this issue Dec 4, 2024
…ort operation (#36575)

This adds some debug logging to the load balancing exporter, to help
identify causes of 503, reported as part of issues like #35512. The
statements should only be logged when the logging mode is set to debug,
meaning that there should not be a difference to the current behavior of
production setups.

Signed-off-by: Juraci Paixão Kröhling <[email protected]>