Unable to connect kafka-eventsource to Kafka broker url #2684

Closed
wesleyscholl opened this issue Jun 28, 2023 · 7 comments
Labels
bug Something isn't working stale


@wesleyscholl

Describe the bug
After multiple configurations and troubleshooting, I am unable to connect the kafka-eventsource to the Kafka broker url.

To Reproduce
Steps to reproduce the behavior:

  1. Followed the setup instructions here: Argo Kafka Setup, and here: Kubernetes-kafka.
  2. Attempted various URLs: kafka.argo-events:9092, localhost:9092, kafka-broker:9092, and http://localhost:9092.
  3. Attempted various Kafka broker configurations: the Yolean kafka config and an alternative kafka configuration.
  4. Published new messages to the configured topic in Kafka.
  5. In every case, the event source failed to read the new messages from the configured Kafka topic.

Expected behavior
The Argo EventSource should read new messages from the configured topic and kick off a workflow.

Screenshots, Configurations, and Logs

Eventsource yaml
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: kafka
spec:
  kafka:
    example:
      url: kafka.argo-events:9092
# attempted using kafka.argo-events:9092, localhost:9092, kafka-broker:9092 and http://localhost:9092
      topic: topic-2
# attempted using topic-1/2/3/4/5/etc and matched to each kafka message topic
      jsonBody: true
      partition: "1"
      connectionBackoff:
        duration: 10s
        steps: 5
        factor: 2
        jitter: 0.2
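A note on the config above: the broker logs later in this thread show each topic being auto-created with a single partition (0), so a partition value of "1" would fail the "verifying the partition exists" step. A sketch of the same event source against partition "0", using the fully qualified in-cluster Service DNS name (the service name and namespace here are assumptions to adapt):
```yaml
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: kafka
spec:
  kafka:
    example:
      # assumption: a Service named "kafka" in the "argo-events" namespace
      url: kafka.argo-events.svc.cluster.local:9092
      topic: topic-2
      jsonBody: true
      # auto-created topics default to a single partition, numbered 0
      partition: "0"
      connectionBackoff:
        duration: 10s
        steps: 5
        factor: 2
        jitter: 0.2
```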
Kafka sensor & workflow trigger
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: kafka
spec:
  template:
    serviceAccountName: operate-workflow-sa
  dependencies:
    - name: test-dep
      eventSourceName: kafka
      eventName: example
  triggers:
    - template:
        name: kafka-workflow-trigger
        k8s:
          operation: create
          source:
            resource:
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              metadata:
                generateName: kafka-workflow-
              spec:
                entrypoint: whalesay
                arguments:
                  parameters:
                  - name: message
                    value: hello world
                templates:
                - name: whalesay
                  inputs:
                    parameters:
                    - name: message
                  container:
                    image: docker/whalesay:latest
                    command: [cowsay]
                    args: ["{{inputs.parameters.message}}"]
          parameters:
            - src:
                dependencyName: test-dep
                dataKey: body
              dest: spec.arguments.parameters.0.value
Logs from EventSource/kafka
namespace=argo-events, eventSourceName=kafka, eventSourceType=kafka, eventName=example, level=info, time=2023-06-28T17:38:17Z, msg=connecting to Kafka cluster...
namespace=argo-events, eventSourceName=kafka, eventSourceType=kafka, eventName=example, level=info, time=2023-06-28T17:38:17Z, msg=start kafka event source...
namespace=argo-events, eventSourceName=kafka, eventSourceType=kafka, eventName=example, level=info, time=2023-06-28T17:38:17Z, msg=start kafka event source...
namespace=argo-events, eventSourceName=kafka, eventSourceType=kafka, eventName=example, level=info, time=2023-06-28T17:38:17Z, msg=connecting to Kafka cluster...
namespace=argo-events, eventSourceName=kafka, eventSourceType=kafka, eventName=example, level=info, time=2023-06-28T17:38:17Z, msg=start kafka event source...
namespace=argo-events, eventSourceName=kafka, eventSourceType=kafka, eventName=example, level=info, time=2023-06-28T17:38:17Z, msg=start kafka event source...
namespace=argo-events, eventSourceName=kafka, eventSourceType=kafka, eventName=example, level=info, time=2023-06-28T17:38:17Z, msg=connecting to Kafka cluster...
Screenshots (not reproduced here): the event flow configured in the UI, the kafka-broker port forward, and the Kafka messages.
Initial kafka yaml config
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  selector:
    matchLabels:
      app: kafka
  serviceName: "kafka"
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: kafka
      annotations:
    spec:
      terminationGracePeriodSeconds: 30
      initContainers:
      - name: init-config
        image: solsson/kafka:initutils@sha256:8988aca5b34feabe8d7d4e368f74b2ede398f692c7e99a38b262a938d475812c
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        command: ['/bin/bash', '/etc/kafka-configmap/init.sh']
        volumeMounts:
        - name: configmap
          mountPath: /etc/kafka-configmap
        - name: config
          mountPath: /etc/kafka
        - name: extensions
          mountPath: /opt/kafka/libs/extensions
      containers:
      - name: broker
        image: solsson/kafka:2.5.1@sha256:5c52620bd8e1bcd47805eb8ca285843168e1684aa27f1ae11ce330c3e12f6b0c
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: CLASSPATH
          value: /opt/kafka/libs/extensions/*
        - name: KAFKA_LOG4J_OPTS
          value: -Dlog4j.configuration=file:/etc/kafka/log4j.properties
        - name: JMX_PORT
          value: "5555"
        ports:
        - name: inside
          containerPort: 9092
        - name: outside
          containerPort: 9094
        - name: jmx
          containerPort: 5555
        command:
        - ./bin/kafka-server-start.sh
        - /etc/kafka/server.properties.$(POD_NAME)
        lifecycle:
          preStop:
            exec:
             command: ["sh", "-ce", "kill -s TERM 1; while $(kill -0 1 2>/dev/null); do sleep 1; done"]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            # This limit was intentionally set low as a reminder that
            # the entire Yolean/kubernetes-kafka is meant to be tweaked
            # before you run production workloads
            memory: 600Mi
        readinessProbe:
          tcpSocket:
            port: 9092
          timeoutSeconds: 1
        volumeMounts:
        - name: config
          mountPath: /etc/kafka
        - name: data
          mountPath: /var/lib/kafka/data
        - name: extensions
          mountPath: /opt/kafka/libs/extensions
      volumes:
      - name: configmap
        configMap:
          name: broker-config
      - name: config
        emptyDir: {}
      - name: extensions
        emptyDir: {}
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
Alternative kafka broker yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kafka-broker
  name: kafka-service
  namespace: kafka
spec:
  ports:
  - port: 9092
  selector:
    app: kafka-broker
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kafka-broker
  name: kafka-broker
  namespace: kafka
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-broker
  template:
    metadata:
      labels:
        app: kafka-broker
    spec:
      hostname: kafka-broker
      containers:
      - env:
        - name: KAFKA_BROKER_ID
          value: "1"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: 10.43.56.121:2181
        - name: KAFKA_LISTENERS
          value: PLAINTEXT://:9092
        - name: KAFKA_ADVERTISED_LISTENERS
          value: PLAINTEXT://kafka-broker:9092
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        name: kafka-broker
        ports:
        - containerPort: 9092
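One thing worth noting about this manifest: the advertised listener is PLAINTEXT://kafka-broker:9092, but the Service in front of the Deployment is named kafka-service, and the pod's hostname: kafka-broker does not create a cluster DNS entry on its own, so clients in other namespaces have no kafka-broker name to resolve. A sketch of the env entry advertising the Service's cluster DNS name instead (assuming clients connect through kafka-service in the kafka namespace):
```yaml
        - name: KAFKA_ADVERTISED_LISTENERS
          # assumption: clients reach the broker via the kafka-service Service
          value: PLAINTEXT://kafka-service.kafka.svc.cluster.local:9092
```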

Environment (please complete the following information):

  • Kubernetes: v1.25.9
  • Argo: v3.4.7
  • Argo Events: v1.7.6
  • macOS: 13.4 (22F66)

Additional context

I'm thinking it could be a networking issue, as I've had trouble connecting to other Kubernetes pods and Docker containers before. Did I configure something wrong? Please advise.


Message from the maintainers:

If you wish to see this enhancement implemented please add a 👍 reaction to this issue! We often sort issues this way to know what to prioritize.

@wesleyscholl wesleyscholl added the bug Something isn't working label Jun 28, 2023
@wesleyscholl
Author

wesleyscholl commented Jun 29, 2023

Update

I am now connecting to the Kafka service using the following URL:

<service-name>.<namespace>.svc.cluster.local:9092

And I'm now getting more output from the event source logs each time I post a message to Kafka:

namespace=argo-events, eventSourceName=kafka, eventSourceType=kafka, eventName=example, level=info, time=2023-06-29T16:56:18Z, msg=start kafka event source...
namespace=argo-events, eventSourceName=kafka, eventSourceType=kafka, eventName=example, level=info, time=2023-06-29T16:56:18Z, msg=start kafka event source...
namespace=argo-events, eventSourceName=kafka, eventSourceType=kafka, eventName=example, level=info, time=2023-06-29T16:56:18Z, msg=connecting to Kafka cluster...
namespace=argo-events, eventSourceName=kafka, eventSourceType=kafka, eventName=example, level=info, time=2023-06-29T16:56:19Z, msg=parsing the partition value...
namespace=argo-events, eventSourceName=kafka, eventSourceType=kafka, eventName=example, level=info, time=2023-06-29T16:56:19Z, msg=getting available partitions...
namespace=argo-events, eventSourceName=kafka, eventSourceType=kafka, eventName=example, level=info, time=2023-06-29T16:56:19Z, msg=verifying the partition exists within available partitions...

I also attempted this alternate Kubernetes pod URL; it produced the same logs as above.

<pod-ip-address>.<namespace>.pod.cluster.local:9092

I'm still unable to kick off workflows when producing Kafka messages. Based on other GitHub issues, the event source logs should show more output than this.
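Names ending in .svc.cluster.local only resolve from inside the cluster, so one way to sanity-check such a URL is a throwaway pod running nslookup — a minimal sketch, assuming the broker Service is named kafka in the argo-events namespace:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-check
spec:
  restartPolicy: Never
  containers:
  - name: nslookup
    image: busybox:1.36
    # assumption: the broker Service is "kafka" in the "argo-events" namespace
    command: ["nslookup", "kafka.argo-events.svc.cluster.local"]
```
After kubectl apply -f, kubectl logs dns-check should show the Service's ClusterIP if in-cluster DNS is working.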

@wesleyscholl
Author

After more troubleshooting with these URLs: kafka-0.kafka.kafka.svc.cluster.local:9092, bootstrap.kafka.svc.cluster.local:9092, and kafka-broker-xxxxxxxxxx-xxxxx.kafka.svc.local:9092

The following command outputs errors in the terminal:

echo "testing" | kcat -P -b localhost:9092 -t topic-22

%3|1688066849.344|FAIL|rdkafka#producer-1| [thrd:kafka-0.kafka.kafka.svc.cluster.local:9092/0]: kafka-0.kafka.kafka.svc.cluster.local:9092/0: Failed to resolve 'kafka-0.kafka.kafka.svc.cluster.local:9092': nodename nor servname provided, or not known (after 5001ms in state CONNECT)
% ERROR: Local: Host resolution failure: kafka-0.kafka.kafka.svc.cluster.local:9092/0: Failed to resolve 'kafka-0.kafka.kafka.svc.cluster.local:9092': nodename nor servname provided, or not known (after 5001ms in state CONNECT)

and

Delivery failed for message: Broker: Unknown topic or partition
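These two errors are consistent with each other: kcat first connects to localhost:9092 through the port-forward, but the broker's metadata response advertises kafka-0.kafka.kafka.svc.cluster.local:9092, which the workstation's resolver doesn't know — hence the host resolution failure — and topic-22 did not exist yet, hence Unknown topic or partition on the first delivery attempt. As a local workaround (assuming a single broker port-forwarded to 9092), the advertised hostname can be pointed at loopback:
```
# /etc/hosts on the workstation
127.0.0.1 kafka-0.kafka.kafka.svc.cluster.local
```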

For additional reference, I have another issue open with kubernetes-kafka:

Yolean/kubernetes-kafka#353

@wesleyscholl
Author

Recent logs from the kafka-broker:

[2023-06-29 23:25:27,000] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2023-06-29 23:25:27,003] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2023-06-29 23:25:27,004] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2023-06-29 23:25:27,005] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2023-06-29 23:25:27,019] INFO Log directory /kafka/kafka-logs-kafka-broker not found, creating it. (kafka.log.LogManager)
[2023-06-29 23:25:27,035] INFO Loading logs from log dirs ArraySeq(/kafka/kafka-logs-kafka-broker) (kafka.log.LogManager)
[2023-06-29 23:25:27,055] INFO Attempting recovery for all logs in /kafka/kafka-logs-kafka-broker since no clean shutdown file was found (kafka.log.LogManager)
[2023-06-29 23:25:27,060] INFO Loaded 0 logs in 26ms. (kafka.log.LogManager)
[2023-06-29 23:25:27,063] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2023-06-29 23:25:27,065] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2023-06-29 23:25:27,825] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2023-06-29 23:25:27,828] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2023-06-29 23:25:27,879] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
[2023-06-29 23:25:27,909] INFO [broker-1-to-controller-send-thread]: Starting (kafka.server.BrokerToControllerRequestThread)
[2023-06-29 23:25:27,937] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2023-06-29 23:25:27,937] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2023-06-29 23:25:27,943] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2023-06-29 23:25:27,946] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2023-06-29 23:25:27,974] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2023-06-29 23:25:28,001] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
[2023-06-29 23:25:28,020] INFO Stat of the created znode at /brokers/ids/1 is: 25,25,1688081128013,1688081128013,1,0,0,72057672725757952,208,0,25 (kafka.zk.KafkaZkClient)
[2023-06-29 23:25:28,021] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka-broker:9092, czxid (broker epoch): 25 (kafka.zk.KafkaZkClient)
[2023-06-29 23:25:28,093] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2023-06-29 23:25:28,102] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2023-06-29 23:25:28,104] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2023-06-29 23:25:28,107] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
[2023-06-29 23:25:28,122] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
[2023-06-29 23:25:28,123] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2023-06-29 23:25:28,129] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2023-06-29 23:25:28,159] INFO [ProducerId Manager 1]: Acquired new producerId block (brokerId:1,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
[2023-06-29 23:25:28,159] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2023-06-29 23:25:28,160] INFO Updated cache from existing <empty> to latest FinalizedFeaturesAndEpoch(features=Features{}, epoch=0). (kafka.server.FinalizedFeatureCache)
[2023-06-29 23:25:28,180] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2023-06-29 23:25:28,187] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2023-06-29 23:25:28,217] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2023-06-29 23:25:28,273] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2023-06-29 23:25:28,285] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Starting socket server acceptors and processors (kafka.network.SocketServer)
[2023-06-29 23:25:28,298] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Started data-plane acceptor and processor(s) for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
[2023-06-29 23:25:28,298] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Started socket server acceptors and processors (kafka.network.SocketServer)
[2023-06-29 23:25:28,303] INFO Kafka version: 2.8.1 (org.apache.kafka.common.utils.AppInfoParser)
[2023-06-29 23:25:28,303] INFO Kafka commitId: 839b886f9b732b15 (org.apache.kafka.common.utils.AppInfoParser)
[2023-06-29 23:25:28,303] INFO Kafka startTimeMs: 1688081128298 (org.apache.kafka.common.utils.AppInfoParser)
[2023-06-29 23:25:28,304] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
[2023-06-29 23:25:28,430] INFO [broker-1-to-controller-send-thread]: Recorded new controller, from now on will use broker kafka-broker:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
[2023-06-29 23:27:24,358] INFO Creating topic test with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
[2023-06-29 23:27:24,431] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(test-0) (kafka.server.ReplicaFetcherManager)
[2023-06-29 23:27:24,474] INFO [Log partition=test-0, dir=/kafka/kafka-logs-kafka-broker] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2023-06-29 23:27:24,477] INFO Created log for partition test-0 in /kafka/kafka-logs-kafka-broker/test-0 with properties {} (kafka.log.LogManager)
[2023-06-29 23:27:24,479] INFO [Partition test-0 broker=1] No checkpointed highwatermark is found for partition test-0 (kafka.cluster.Partition)
[2023-06-29 23:27:24,480] INFO [Partition test-0 broker=1] Log loaded for partition test-0 with initial high watermark 0 (kafka.cluster.Partition)
[2023-06-29 23:52:20,342] INFO Creating topic event with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
[2023-06-29 23:52:20,371] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(event-0) (kafka.server.ReplicaFetcherManager)
[2023-06-29 23:52:20,377] INFO [Log partition=event-0, dir=/kafka/kafka-logs-kafka-broker] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2023-06-29 23:52:20,378] INFO Created log for partition event-0 in /kafka/kafka-logs-kafka-broker/event-0 with properties {} (kafka.log.LogManager)
[2023-06-29 23:52:20,378] INFO [Partition event-0 broker=1] No checkpointed highwatermark is found for partition event-0 (kafka.cluster.Partition)
[2023-06-29 23:52:20,379] INFO [Partition event-0 broker=1] Log loaded for partition event-0 with initial high watermark 0 (kafka.cluster.Partition)
[2023-06-29 23:53:56,010] INFO Creating topic trigger with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
[2023-06-29 23:53:56,029] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(trigger-0) (kafka.server.ReplicaFetcherManager)
[2023-06-29 23:53:56,033] INFO [Log partition=trigger-0, dir=/kafka/kafka-logs-kafka-broker] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2023-06-29 23:53:56,033] INFO Created log for partition trigger-0 in /kafka/kafka-logs-kafka-broker/trigger-0 with properties {} (kafka.log.LogManager)
[2023-06-29 23:53:56,034] INFO [Partition trigger-0 broker=1] No checkpointed highwatermark is found for partition trigger-0 (kafka.cluster.Partition)
[2023-06-29 23:53:56,034] INFO [Partition trigger-0 broker=1] Log loaded for partition trigger-0 with initial high watermark 0 (kafka.cluster.Partition)
[2023-06-29 23:54:21,363] INFO Creating topic action with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
[2023-06-29 23:54:21,387] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(action-0) (kafka.server.ReplicaFetcherManager)
[2023-06-29 23:54:21,391] INFO [Log partition=action-0, dir=/kafka/kafka-logs-kafka-broker] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2023-06-29 23:54:21,392] INFO Created log for partition action-0 in /kafka/kafka-logs-kafka-broker/action-0 with properties {} (kafka.log.LogManager)
[2023-06-29 23:54:21,392] INFO [Partition action-0 broker=1] No checkpointed highwatermark is found for partition action-0 (kafka.cluster.Partition)
[2023-06-29 23:54:21,392] INFO [Partition action-0 broker=1] Log loaded for partition action-0 with initial high watermark 0 (kafka.cluster.Partition)

@whynowy
Member

whynowy commented Jul 11, 2023

@wesleyscholl - sorry for the late response! What's the current status? Are you able to confirm if the messages have been published to the kafka topic successfully?

@wesleyscholl
Author

wesleyscholl commented Jul 13, 2023

Depending on which Kafka instance I use, yes and no. If I run ZooKeeper and Kafka locally, I am able to create topics, produce messages, and consume them with no problem. But if I try to spin them up as Kubernetes pods, I keep getting a CrashLoopBackOff error. This happens for both the kafka and zookeeper pods, so I'm unable to create topics, produce messages, or consume using the Kubernetes pods.

Please see my other issue on kubernetes-kafka - Yolean/kubernetes-kafka#353

@whynowy
Copy link
Member

whynowy commented Jul 14, 2023

> Depending on which Kafka instance I use, yes and no. If I run ZooKeeper and Kafka locally, I am able to create topics, produce messages, and consume them with no problem. But if I try to spin them up as Kubernetes pods, I keep getting a CrashLoopBackOff error. This happens for both the kafka and zookeeper pods, so I'm unable to create topics, produce messages, or consume using the Kubernetes pods.
>
> Please see my other issue on kubernetes-kafka - Yolean/kubernetes-kafka#353

Okay, it seems like it's an issue with the kubernetes-kafka setup.

@github-actions
Contributor

This issue has been automatically marked as stale because it has not had
any activity in the last 60 days. It will be closed if no further activity
occurs. Thank you for your contributions.

@github-actions github-actions bot added the stale label Sep 12, 2023
@github-actions github-actions bot closed this as not planned Won't fix, can't repro, duplicate, stale Sep 19, 2023