This repository was archived by the owner on Mar 28, 2023. It is now read-only.
I know many of you already know this, but I thought I’d share a bit of info about CloudWatch log groups and log streams that surprised me when I learned it (though it seems obvious in hindsight).
A log group is essentially an index over many streams, and it is reflected as such in our Kibana instance.
A log stream is a sequence of log messages from *one* writer. The way it works is that a client gets a sequence token, posts a batch of messages, and receives a new token to use for the next batch. If the token used to submit a batch is not the one CloudWatch expects, the call fails. What this means is that streams need to be unique per thread/process that is logging.
The mistake I made was to assume that a log stream was shareable across many pods (say, in a deployment with replicas), but this totally doesn’t work, since you can’t synchronize the flushing of batches across pods. (Well, you can, but that would be an incredible amount of work for no good reason.)
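To make the failure mode concrete, here is a minimal sketch of the sequence-token handshake. `FakeLogStream` is a made-up stand-in (not the boto3 API): it accepts a batch only when the caller presents the token returned by the previous call, which is what makes two pods writing to one stream step on each other.

```python
class FakeLogStream:
    """Toy model of a CloudWatch log stream's sequence-token handshake."""

    def __init__(self):
        self._expected = "token-0"
        self._counter = 0
        self.events = []

    def put_log_events(self, batch, token):
        """Accept a batch only if `token` is the one we handed out last."""
        if token != self._expected:
            # Real CloudWatch raises InvalidSequenceTokenException here.
            raise ValueError(f"expected {self._expected}, got {token}")
        self.events.extend(batch)
        self._counter += 1
        self._expected = f"token-{self._counter}"
        return self._expected  # the token the writer must use next

stream = FakeLogStream()

# One writer works fine: each call chains the token from the previous call.
t = "token-0"
t = stream.put_log_events(["pod-a line 1"], t)
t = stream.put_log_events(["pod-a line 2"], t)

# A second writer (another pod) holding a stale token fails immediately,
# because it never saw the tokens handed out to the first writer.
try:
    stream.put_log_events(["pod-b line 1"], "token-0")
except ValueError as e:
    conflict = str(e)
```

With replicas sharing one stream name, every pod is that "second writer" some of the time, which is exactly the error pattern described above.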
The table below lists most of the streams, groups, and counts for the past 24 hours or so. It looks like many applications are probably misconfigured with respect to log_stream.
Describe the bug
Background email thread:
https://github.com/quipucords/yupana/blob/master/yupana/config/settings/base.py#L141
Expected behavior
The log stream should be different for each pod (e.g., use the pod name).
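A minimal sketch of one way to do this (the function and `app_name` default are illustrative, not taken from the linked settings file): in Kubernetes the pod name is normally exposed as the `HOSTNAME` environment variable, so a per-pod stream name can be derived from it and passed to whatever CloudWatch logging handler the app configures.

```python
import os
import socket

def pod_log_stream_name(app_name="yupana"):
    """Build a log stream name unique to this pod.

    Kubernetes sets HOSTNAME to the pod name by default; fall back to
    the machine hostname when running outside a cluster.
    """
    pod = os.environ.get("HOSTNAME") or socket.gethostname()
    return f"{app_name}-{pod}"
```

Because each replica gets a distinct pod name, each one gets its own stream and its own sequence-token chain, so writers never conflict.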