Readiness probe failed for kafka #279
Looks like two kafka pods succeed and one fails. It could be 463e1c7, though that would be strange because there are 5 zookeeper pods to reach for 3 kafka brokers. Does everything but kafka-2 stay ready, or are there other events? Do the zookeeper services have the expected endpoints? Please use ``` when you post command output; it makes it a lot more readable. See https://guides.github.com/features/mastering-markdown/
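For example, something like this (a minimal sketch, assuming the default `kafka` namespace and this repo's `zookeeper`/`pzoo` service names):

```bash
# Do the zookeeper services have endpoints? Empty ENDPOINTS means no ready pods behind them.
kubectl -n kafka get endpoints zookeeper pzoo

# Readiness state and recent events for the failing broker
kubectl -n kafka describe pod kafka-2

# All recent events in the namespace, oldest first
kubectl -n kafka get events --sort-by=.metadata.creationTimestamp
```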
I changed the zookeeper config to:
I'm puzzled. At this point I can't come up with a single hypothesis to test. Something might come to mind later, but my only advice now is to dig around and do different experiments that involve killing pods. Edit: zookeeper logs could possibly provide clues.
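For example, something along these lines (pod names assume this repo's `zoo`/`pzoo` StatefulSets, and the `ruok` check assumes `nc` exists in the image):

```bash
# Scan a zookeeper pod for connection or quorum errors around the failure time
kubectl -n kafka logs zoo-0 --tail=200 | grep -iE 'error|exception|leader'

# ZooKeeper's four-letter-word health check, if nc is available in the image
kubectl -n kafka exec zoo-0 -- sh -c 'echo ruok | nc localhost 2181'
```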
@solsson I also hit the same error. When I deploy kafka and zk in a namespace other than kafka, the kafka init-config step reports an error.
If the namespace is kafka, the cluster init works and the connection to zk is fine. But this is not what I want; my project lives in other namespaces.
Then it reported the above error after this line:
[2019-06-26 05:52:11,975] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
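One thing to check in a cross-namespace setup (a sketch; `myns` stands in for the hypothetical app namespace): short service names only resolve within the same namespace, so `zookeeper.connect` would need the fully qualified name, e.g. `zookeeper.kafka.svc.cluster.local:2181`.

```bash
# Short names only resolve within the same namespace; test the FQDN instead
kubectl -n myns run -it --rm dnscheck --image=busybox --restart=Never -- \
  nslookup zookeeper.kafka.svc.cluster.local
```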
@amateu It looks like yours is a custom setup with an ExternalName service for zookeeper. With @selkabli's issue, what is most interesting is that only kafka-2 fails; in your setup, @amateu, I think all brokers will fail.
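For reference, an ExternalName alias can be created like this (a sketch; `myns` is again the hypothetical app namespace):

```bash
# Alias `zookeeper` in myns to the real service in the kafka namespace
kubectl -n myns create service externalname zookeeper \
  --external-name zookeeper.kafka.svc.cluster.local
```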
@solsson Yes, all brokers will fail. The root cause was RBAC: I tried to create RBAC in my project's namespace so I could deploy zk and kafka there instead of in the kafka namespace, but the connection to zk still times out.
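Two checks that separate the RBAC theory from plain connectivity (a sketch; the namespace, service account, and pod names are assumptions, and the `nc` probe assumes it is available in the kafka image):

```bash
# Can the pods' service account read nodes, as the init scripts in setups
# like this one may do via the Kubernetes API?
kubectl -n myns auth can-i get nodes \
  --as=system:serviceaccount:myns:default

# Independent of RBAC, test raw TCP reachability to zookeeper from a broker pod
kubectl -n myns exec kafka-0 -- sh -c 'echo srvr | nc zookeeper.kafka.svc.cluster.local 2181'
```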
@solsson The problem happens only on node1, which is the master of my cluster. Any clues why? The taint has already been removed from the master, so it's not taint-related.
That's an important observation. I haven't tried running on a master. I have no clue why the zookeeper connection would fail from there.
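To narrow it down, it could help to confirm placement and taints (node name `node1` taken from the report above):

```bash
# Which node did each broker land on?
kubectl -n kafka get pods -o wide

# Any remaining taints or oddities on the master?
kubectl describe node node1 | grep -i -A3 taints
```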
Having the same issue as @selkabli. I am deploying on a bare-metal k8s cluster with local persistent volumes; one broker (out of 3) always fails to start correctly.
nvm, it seems the PV on one of the nodes had a problem, which caused this. I changed the PV to another node and it works fine.
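For anyone hitting the same thing, the node pinning of a local PV can be inspected before moving the volume (a sketch; the PV name here is hypothetical):

```bash
# Which PV did the broker's claim bind to?
kubectl -n kafka get pvc

# Local PVs pin pods to a node via nodeAffinity; check where this one points
kubectl describe pv data-kafka-2-pv | grep -A6 'Node Affinity'
```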
Hi,
this is my first time using kafka, so maybe I'm missing something. Can you please help?