
Include logs from previous Pod incarnations in diagnostics #42

Open
pebrc opened this issue Sep 8, 2021 · 4 comments

@pebrc
Collaborator

pebrc commented Sep 8, 2021

Sorry for so many requests, but do we have a way to implement things like previous logs?

kubectl logs <pod name> --previous

Sometimes ES nodes have just left the cluster and we can't get their logs with kubectl logs; we have to add --previous separately.

Originally posted by @kunisen in #28 (comment)

@pebrc pebrc self-assigned this Nov 5, 2021
@pebrc
Collaborator Author

pebrc commented Nov 5, 2021

I did not realise this at the time and only found out about it now: the --previous flag in kubectl logs does not do what I thought it would. It does not return the logs of previous incarnations of a given Pod; it gives you the logs of the previous instance of a container within a Pod.

We almost never have the case where a single container inside a Pod is restarting (unless something is wrong, in which case the previous container logs might be interesting). What we constantly have is Pods being deleted in order to roll out upgrades in a cluster. Those logs would be interesting because they show, for example, what the Elasticsearch cluster was doing before an upgrade. However, we have no means of retrieving the logs of already deleted Pods (even if a new Pod with the exact same name was created right after).
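
For reference, a minimal client-go sketch of what kubectl logs --previous corresponds to programmatically (this is not the eck-diagnostics implementation; the namespace, Pod and container names are placeholders). It only works while the Pod object still exists, which is exactly the limitation described above:

```go
package main

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder names for illustration only.
	namespace, podName, containerName := "default", "quickstart-es-default-0", "elasticsearch"

	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Previous: true is the programmatic equivalent of `kubectl logs --previous`:
	// it returns the logs of the last terminated instance of this container in
	// this Pod, not the logs of a deleted Pod that happened to have the same name.
	req := clientset.CoreV1().Pods(namespace).GetLogs(podName, &corev1.PodLogOptions{
		Container: containerName,
		Previous:  true,
	})
	stream, err := req.Stream(context.Background())
	if err != nil {
		// Fails with "previous terminated container ... not found" if the
		// container never restarted.
		panic(err)
	}
	defer stream.Close()
	if _, err := io.Copy(os.Stdout, stream); err != nil {
		panic(err)
	}
}
```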

@kunisen

kunisen commented Nov 10, 2021

However, we have no means of retrieving the logs of already deleted Pods (even if a new Pod with the exact same name was created right after).

Thanks.
I think we may need to log in to the VM and grab the logs of those previously run Pods (not sure whether those logs get deleted or not).
Maybe the only way is to enable sidecar logging and use Filebeat to send the logs to a central place for troubleshooting purposes.

But in either case, this is not something the ECK diagnostics can do here, so let me close this ticket.

Thanks!

@kunisen kunisen closed this as completed Nov 10, 2021
@stefnestor

stefnestor commented Feb 12, 2025

👋🏽 @kunisen @pebrc, may I confirm whether this FR would address a common high-severity situation:

Users may experience ES boot-looping errors where Elasticsearch is cyclically exiting and the operator logs only report Elasticsearch cannot be reached yet, re-queuing. In this situation, Support asks the user to send the Pod's previous logs (e.g. the online doc which uses --previous) in the hope of gathering the failed start-up logs. I may be misunderstanding, but this appears to be helpful when Pods are listed as Pending with READY like 0/1.

Secondly, Support internal KB id 05f99124 lists this as broadly helpful when troubleshooting OOM, FWIW.

@pebrc
Collaborator Author

pebrc commented Feb 13, 2025

@stefnestor I think you are right: for a repeatedly restarting container, the previous logs might be helpful, because the diagnostic tool may be running at a time when the container is not running due to back-off.
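
A minimal sketch of that heuristic, assuming access to the Pod objects the diagnostics tool already retrieves (shouldCollectPrevious is an illustrative helper, not part of eck-diagnostics):

```go
package diag

import corev1 "k8s.io/api/core/v1"

// shouldCollectPrevious reports whether the previous logs of a container are
// likely to be useful: the container has restarted at least once, or it is
// currently not running because the kubelet is backing off on restarting it.
func shouldCollectPrevious(pod corev1.Pod, container string) bool {
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.Name != container {
			continue
		}
		inBackoff := cs.State.Waiting != nil && cs.State.Waiting.Reason == "CrashLoopBackOff"
		return cs.RestartCount > 0 || inBackoff
	}
	return false
}
```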

@stefnestor stefnestor reopened this Feb 13, 2025