[BUG] svclb-traefik* won't start after host crash and restart. #1021
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 58m (x391 over 138m) kubelet Back-off restarting failed container
Normal SandboxChanged 47m kubelet Pod sandbox changed, it will be killed and re-created.
Normal Started 46m (x2 over 47m) kubelet Started container lb-port-80
Normal Pulled 46m (x2 over 47m) kubelet Container image "rancher/klipper-lb:v0.3.4" already present on machine
Normal Created 46m (x2 over 47m) kubelet Created container lb-port-443
Normal Started 46m (x2 over 47m) kubelet Started container lb-port-443
Warning BackOff 46m (x5 over 47m) kubelet Back-off restarting failed container
Normal Pulled 46m (x3 over 47m) kubelet Container image "rancher/klipper-lb:v0.3.4" already present on machine
Normal Created 46m (x3 over 47m) kubelet Created container lb-port-80
Warning BackOff 22m (x125 over 47m) kubelet Back-off restarting failed container
Normal SandboxChanged 17m kubelet Pod sandbox changed, it will be killed and re-created.
Normal Started 16m (x2 over 17m) kubelet Started container lb-port-80
Normal Pulled 16m (x2 over 17m) kubelet Container image "rancher/klipper-lb:v0.3.4" already present on machine
Normal Created 16m (x2 over 17m) kubelet Created container lb-port-443
Normal Started 16m (x2 over 17m) kubelet Started container lb-port-443
Warning BackOff 16m (x5 over 17m) kubelet Back-off restarting failed container
Normal Pulled 16m (x3 over 17m) kubelet Container image "rancher/klipper-lb:v0.3.4" already present on machine
Normal Created 16m (x3 over 17m) kubelet Created container lb-port-80
Warning BackOff 119s (x78 over 17m) kubelet Back-off restarting failed container
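The event stream above is the classic CrashLoopBackOff pattern. A quick way to quantify how long a pod has been crash-looping is to pull the retry count out of the `kubectl describe pod` event lines. A hedged sketch (the pod name in the comment is illustrative, not taken from this thread; the parsing helper is mine):

```shell
# Raw events come from something like (example pod name):
#   kubectl -n kube-system describe pod svclb-traefik-abc123
# Extract the retry count from a BackOff event line such as:
#   Warning BackOff 58m (x391 over 138m) kubelet Back-off restarting failed container
backoff_count() {
  echo "$1" | sed -n 's/.*(x\([0-9]*\) over.*/\1/p'
}

backoff_count "Warning BackOff 58m (x391 over 138m) kubelet Back-off restarting failed container"
# prints: 391
```

Counts climbing into the hundreds, as above, mean the container has been failing continuously rather than flapping once after the host restart.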
There is definitely something wrong, but the problem went away. I deleted the cluster and recreated it, and the error was still there. Then I restarted the VM and the error came back for about 10 minutes; after that, everything was fine again. What the heck?! :-)
Hi @bayeslearner, thanks for opening this issue and providing all the information. Sorry, I cannot help you there. Maybe jump on the rancher-users Slack and drop a question in e.g. the #k3s channel. If you think there's anything we can do on k3d's side, feel free to reopen this issue 👍
It did come back. This seems to be caused by an application trying to start another svclb-yyy in the same Docker container. The issue is that I have no clue how to recover from this error, even after removing the other application.
@bayeslearner so you have multiple svclb pods trying to map the same port, and then when you delete one of them, the other still doesn't work?
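If the theory is two svclb pods fighting over one host port, listing every hostPort the svclb pods request makes the collision visible. A hedged sketch (the label selector is an assumption about how k3s tags svclb pods; the duplicate-finding helper is illustrative):

```shell
# List every hostPort requested by svclb pods (label selector is an
# assumption about k3s's svclb pod labeling):
#   kubectl -n kube-system get pods -l svccontroller.k3s.cattle.io/svcname \
#     -o jsonpath='{range .items[*].spec.containers[*].ports[*]}{.hostPort}{"\n"}{end}'
# Given that list of ports, any duplicate is the conflict:
find_duplicate_ports() {
  printf '%s\n' "$@" | sort | uniq -d
}

find_duplicate_ports 80 443 80
# prints: 80
```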
I had the same issue. I am using Rocky Linux.
The following patch works for me.
Getting the same issue, running:
A deeper dive into the logs for one of the pods shows:
My host's iptables version is
Any idea as to what's going on? The hack above doesn't seem to work when using rancher/klipper:v0.4.3. Completely stumped.
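Since the suspicion here is an iptables backend mismatch between host and container, the first thing worth checking is which backend the host's iptables binary wraps. On iptables 1.8 and newer the binary reports this itself; the extraction helper below is just an illustrative sketch:

```shell
# "iptables --version" prints e.g. "iptables v1.8.4 (nf_tables)" on nft
# hosts and "iptables v1.8.4 (legacy)" on legacy hosts.
# Extracting the backend name from that output:
iptables_backend() {
  echo "$1" | sed -n 's/.*(\(.*\))/\1/p'
}

iptables_backend "iptables v1.8.4 (nf_tables)"
# prints: nf_tables
```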
After a bit of digging, I've found the problem lies in the entry script for the rancher/klipper-lb image. Here's a fix for iptables_nft host operating systems:
FILE: entry
FILE: Dockerfile
Run
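The patched entry and Dockerfile contents aren't reproduced above, so here is only a hedged sketch of the general fix pattern for mixed iptables backends: have the entry script pick whichever iptables variant actually holds the host's rules. All names and values below are illustrative, not the thread's actual patch:

```shell
# Pick the iptables flavor whose ruleset is populated; the backend with
# more rules installed is almost certainly the one the host is using.
pick_backend() {
  # $1 = rule count seen by iptables-legacy, $2 = rule count seen by iptables-nft
  if [ "$2" -gt "$1" ]; then
    echo iptables-nft
  else
    echo iptables-legacy
  fi
}

# In a real entry script the counts would come from something like:
#   iptables-legacy-save 2>/dev/null | grep -c '^-'
#   iptables-nft-save    2>/dev/null | grep -c '^-'
pick_backend 0 42
# prints: iptables-nft
```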
What did you do
How was the cluster created?
What did you do afterwards?
My host crashed and after restarting it and restarting k3d, I am no longer able to connect to any app service through ingress.
What did you expect to happen
Ingress should work
Screenshots or terminal output
Which OS & Architecture
Linux rockylinux8.linuxvmimages.local 4.18.0-348.20.1.el8_5.x86_64 #1 SMP Thu Mar 10 20:59:28 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Which version of k3d?
k3d version
Which version of docker?
docker version and docker info
[rockylinux@rockylinux8 infra_k3d]$ docker info