An example of three Kafka brokers backed by three Zookeeper instances, based on GCP.
To get consistent service DNS names of the form kafka-N.broker.kafka (with the .svc.cluster.local suffix), run everything in a namespace:
kubectl create -f namespace.yml
You may add a storage class to the Kafka StatefulSet declaration to enable automatic volume provisioning. Alternatively, create PVs and PVCs manually, for example in Minikube.
kubectl create -f gce-volume-claims/gce-storage-class.yaml
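As a rough idea of what such a storage class looks like, here is a minimal sketch for GCE persistent disks. The name and disk type are assumptions; the actual gce-volume-claims/gce-storage-class.yaml in the repo may differ.

```yaml
# Hypothetical sketch of a GCE storage class for broker volumes;
# metadata.name and parameters.type are assumptions.
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: kafka-broker
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
```

With a provisioner in place, PVCs created by the StatefulSet's volumeClaimTemplates get volumes automatically instead of requiring pre-created PVs.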
There is a Zookeeper+StatefulSet blog post and example, but it appears tuned for workloads heavier than Kafka topic metadata.
The Kafka book (Kafka: The Definitive Guide, O'Reilly 2016) recommends that Kafka have its own Zookeeper cluster, so we use the official Docker image, with a startup-script change that guesses the node id from the hostname.
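The id-from-hostname trick works because StatefulSet pods get ordinal-suffixed names. A minimal sketch of the idea (variable names and the 1-based offset are assumptions; the actual script change in this repo may differ):

```shell
# StatefulSet pods are named <name>-<ordinal>, e.g. zoo-2
POD_NAME="zoo-2"
ORD="${POD_NAME##*-}"      # strip everything up to the last dash -> "2"
ZOO_MY_ID=$((ORD + 1))     # Zookeeper ids are conventionally 1-based
echo "$ZOO_MY_ID"          # prints 3
```

Because the ordinal is stable across pod restarts, each Zookeeper instance keeps the same id for its whole life.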
If you lose your Zookeeper cluster, Kafka will be unaware that the persisted topics exist. The data is still there, but you would need to re-create the topics. For that reason, Zookeeper also runs as a StatefulSet with persistent storage:
kubectl create -f zookeeper/zoo-headless-svc.yml
kubectl create -f zookeeper/zoo-svc.yml
kubectl create -f zookeeper/zoo-stateful.yml
Assuming your PVCs are Bound, or you enabled automatic provisioning (see above), go ahead and:
kubectl create -f kafka/broker-headless-svc.yml
kubectl create -f kafka/kafka-svc.yml
kubectl create -f kafka/kafka-stateful.yml
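The per-pod DNS names rely on a headless service. Here is a minimal sketch of what such a service might look like; the service name matches the DNS names shown above, but the selector label and other details are assumptions, and the actual kafka/broker-headless-svc.yml may differ.

```yaml
# Hypothetical sketch of the headless service behind
# kafka-N.broker.kafka.svc.cluster.local; the selector label is assumed.
apiVersion: v1
kind: Service
metadata:
  name: broker
  namespace: kafka
spec:
  clusterIP: None   # headless: DNS resolves to the individual pod IPs
  ports:
  - port: 9092
  selector:
    app: kafka
```

Setting clusterIP: None is what makes kube-dns publish a stable A record per StatefulSet pod instead of a single load-balanced VIP.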
You might want to verify in the logs that Kafka found its own DNS name(s) correctly. Look for records like:
kubectl --namespace=kafka logs kafka-0 | grep "Registered broker"
# INFO Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT -> EndPoint(kafka-0.broker.kafka.svc.cluster.local,9092,PLAINTEXT)
As the Kafka Manager image is quite old, you have to add the cluster manually in its UI:
Cluster Zookeeper Hosts = zookeeper:2181
kubectl create -f kafka-manager/kafka-manager-svc.yml
kubectl create -f kafka-manager/kafka-manager.yml
This is WIP, but topic creation has been automated. Note that, since it runs as a Job, it will restart whenever the command fails, including when the topic already exists :(
kubectl create -f test/11topic-create-test1.yml
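For orientation, such a topic-creation Job could be sketched roughly as below. The image, command path, and topic settings are all assumptions; the actual test/11topic-create-test1.yml may differ.

```yaml
# Hypothetical sketch of a topic-creation Job; image and paths are assumed.
apiVersion: batch/v1
kind: Job
metadata:
  name: topic-create-test1
  namespace: kafka
spec:
  template:
    spec:
      containers:
      - name: topic-create
        image: solsson/kafka  # assumed image
        command:
        - ./bin/kafka-topics.sh
        - --zookeeper
        - zookeeper:2181
        - --create
        - --topic
        - test1
        - --partitions
        - "1"
        - --replication-factor
        - "3"
      restartPolicy: Never
```

With restartPolicy: Never the Job controller still retries by creating new pods, which is why a failing create command (e.g. topic already exists) keeps the Job looping.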
Pods that keep consuming messages (but they won't exit on cluster failures):
kubectl create -f test/21consumer-test1.yml
For testing and retesting, delete the namespace. PVs live outside namespaces, so delete them manually.
kubectl delete namespace kafka
- As Kafka runs inside Kubernetes, the broker list looks like the output below, so the brokers may not be reachable from outside the cluster.
2017/05/16 19:45:58 client/brokers registered new broker #2 at kafka-2.broker.kafka.svc.cluster.local:9092
2017/05/16 19:45:58 client/brokers registered new broker #1 at kafka-1.broker.kafka.svc.cluster.local:9092
2017/05/16 19:45:58 client/brokers registered new broker #0 at kafka-0.broker.kafka.svc.cluster.local:9092