KRaft controller pods restart in a loop with 3 controllers, but work fine with 1 controller #11369
Description
I'm experiencing an issue with KRaft controller pods in my Strimzi Kafka cluster. When I configure the cluster with 3 controller pods, they continuously restart in a loop. However, when I use the exact same configuration but with only 1 controller pod, everything works perfectly fine with no restarts.
The issue happens during regular operation, not just during initial deployment. The restarts happen sequentially, so the cluster remains operational, but it's concerning behavior that shouldn't be happening in a properly configured cluster.
Environment
Strimzi 0.45.0; Kafka running in KRaft mode with NodePools.
Configuration
I'm using NodePools with KRaft mode enabled; a representative controller NodePool manifest is sketched below.
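A minimal sketch of such a controller NodePool (the cluster name `my-cluster`, storage type, and volume size are illustrative assumptions, not the exact manifest from this report):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: controller
  labels:
    strimzi.io/cluster: my-cluster   # illustrative cluster name
spec:
  replicas: 3                        # the 3-controller setup described above
  roles:
    - controller
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 20Gi                   # illustrative size
        deleteClaim: false
```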
Events
Troubleshooting Steps Taken
Verified that the broker pods are working fine; they have no restart issues
Checked the controller logs; no obvious errors explain the restarts
Tried a single controller, which works perfectly
Questions
Is this a known issue with multi-controller KRaft setups in Strimzi 0.45.0?
Could there be a problem with quorum election or leader selection in the 3-controller setup?
Are there specific probe settings that need to be adjusted for a multi-controller setup?
Should I increase terminationGracePeriodSeconds or probe delays in the controller pods (see the sketch after this list)?
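If probe timing turns out to be the culprit, both knobs live on the Kafka custom resource and apply to the pods the operator manages. A minimal sketch, assuming the rest of the spec stays unchanged (the concrete values are illustrative, not tuned recommendations):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster                  # illustrative name
spec:
  kafka:
    # Longer initial delays give a controller time to join the quorum
    # before the first probe can fail the pod.
    livenessProbe:
      initialDelaySeconds: 60
      timeoutSeconds: 10
    readinessProbe:
      initialDelaySeconds: 60
      timeoutSeconds: 10
    template:
      pod:
        # More shutdown headroom for a clean leadership handover.
        terminationGracePeriodSeconds: 120
    # listeners, config, and the rest of the spec unchanged
```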
Any help or guidance would be greatly appreciated!
Replies: 1 comment
It was just a memory leak.