Include logs from previous Pod incarnations in diagnostics #42
Comments
I did not realise this at the time and only found out about it now: we almost never have the case where a single container inside a Pod is restarting (unless something is wrong, in which case the previous container logs might be interesting). What we constantly have is Pods being deleted in order to roll out upgrades in a cluster. These logs would be interesting because they show, for example, what the Elasticsearch cluster was doing before an upgrade. However, we have no means of retrieving the logs of already deleted Pods (even if a new Pod with the exact same name was created right after).
Thanks. But in either case, this is not something ECK diagnostics can do here, so please let me close this ticket. Thanks!
👋🏽 @kunisen @pebrc, may I confirm whether this FR would address a common high-severity situation? Users may experience ES boot-looping errors where Elasticsearch is cyclically exiting and the operator logs only report … Secondly, see the Support internal KB.
@stefnestor I think you are right that for a repeatedly restarting container the previous logs might be helpful, because the diagnostic tool might run at a time when the container is not running due to backoff.
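For illustration, here is a minimal sketch of what collecting previous-incarnation logs could look like; this is not the actual eck-diagnostics implementation, and the `elastic-system` namespace is just an assumed example:

```sh
# Sketch only: collect --previous logs for every container in a namespace,
# ignoring containers that have never restarted (kubectl errors out for those).
NAMESPACE=elastic-system   # illustrative namespace; adjust as needed
for pod in $(kubectl get pods -n "$NAMESPACE" -o name); do
  for c in $(kubectl get "$pod" -n "$NAMESPACE" -o jsonpath='{.spec.containers[*].name}'); do
    kubectl logs "$pod" -n "$NAMESPACE" -c "$c" --previous \
      > "${pod#pod/}-${c}-previous.log" 2>/dev/null || true
  done
done
```

Note that this only works for Pods that still exist; as discussed above, logs of already deleted Pods are not retrievable this way.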
Sorry for too many requests, but do we have a way to implement things like previous logs? Sometimes ES nodes have just left the cluster and we can't get their logs with plain `kubectl logs`; we have to add `--previous` separately.

Originally posted by @kunisen in #28 (comment)
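For reference, the two invocations differ only in the flag; the Pod and container names below are illustrative assumptions:

```sh
# Logs from the currently running container
kubectl logs quickstart-es-default-0 -c elasticsearch

# Logs from the previous incarnation of the container (only works while the
# Pod still exists and the container has restarted at least once)
kubectl logs quickstart-es-default-0 -c elasticsearch --previous
```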