observability into refreshes #187

Closed
nicks opened this issue Jul 31, 2024 · 3 comments · Fixed by #190

nicks commented Jul 31, 2024

Describe the feature request

Is there a way to get observability into how the proxy is refreshing?

Background

We had an incident a month ago that we thought might be related to refreshing, and there wasn't an easy way to tell from the outside when the proxy had last refreshed its data.

Solution suggestions

Logs would be fine (there are very few logs right now).

A Prometheus-style metrics endpoint would also work.

nicks added the enhancement label Jul 31, 2024
chriswk moved this from New to In Progress in Issues and PRs Aug 1, 2024

chriswk commented Aug 1, 2024

Hi @nicks. Thank you for your report.

By default, the proxy is set up to log errors if any occur, so if your log is quiet, it should mean you're not having a refresh issue.

From our client constructor:

        this.metrics.on('error', (msg) => this.logger.error(`metrics: ${msg}`));
        this.unleash.on('error', (msg) => this.logger.error(msg));

We could add two Prometheus gauges, last_features_check_epoch_seconds and last_features_update_epoch_seconds, set from the events emitted by the UnleashClient during its poll cycle.
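
A minimal sketch of that idea with prom-client, assuming the underlying unleash-client instance emits 'changed' when a poll returns new toggles and 'unchanged' when upstream answers 304; the event names and wiring are illustrative, not the proxy's actual implementation:

    import { Gauge } from 'prom-client';
    import type { Unleash } from 'unleash-client';

    // Proposed gauges: epoch seconds of the last upstream check and of the last real update.
    const lastCheck = new Gauge({
      name: 'last_features_check_epoch_seconds',
      help: 'Epoch seconds when the proxy last polled upstream Unleash',
    });
    const lastUpdate = new Gauge({
      name: 'last_features_update_epoch_seconds',
      help: 'Epoch seconds when the proxy last received changed feature toggles',
    });

    export function instrumentRefresh(unleash: Unleash): void {
      const now = () => Date.now() / 1000;
      // Assumed events: 'changed' fires when a poll returns new toggles,
      // 'unchanged' when upstream answers 304 Not Modified.
      unleash.on('changed', () => {
        lastCheck.set(now());
        lastUpdate.set(now());
      });
      unleash.on('unchanged', () => lastCheck.set(now()));
    }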

Unless you're using custom strategies or context enrichers, we do recommend swapping to Unleash Edge (https://github.com/unleash/unleash-edge), where we have an endpoint that lets you see the status of all client keys in use.

chriswk added a commit that referenced this issue Aug 1, 2024
Reported in #187, this PR adds prometheus and creates two gauges: one for keeping track of the last update (an actual refresh of feature toggles) and one for the last call to upstream Unleash.

Sidenote: moved to biome and copied the Unleash/unleash config to fall more in line. This should've been a separate PR, but once I got stuck into adding prometheus, I couldn't help myself; I had to boyscout.
chriswk added a commit that referenced this issue Aug 1, 2024
Adds prometheus gauges for when we last fetched (no matter if 200 or
304) and when we last updated (only updated when status==200)

Fixes: #187
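
Exposing those gauges is then mostly a matter of serving prom-client's registry on an internal route. A hedged sketch, assuming an Express app and the default registry; the path matches the endpoint documented in the README excerpt further down, but the handler itself is illustrative:

    import express from 'express';
    import { register } from 'prom-client';

    const app = express();

    // Serve the default prom-client registry in the Prometheus text exposition format.
    app.get('/proxy/internal-backstage/prometheus', async (_req, res) => {
      res.set('Content-Type', register.contentType);
      res.end(await register.metrics());
    });
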
chriswk closed this as completed in 2615e72 Aug 1, 2024
github-project-automation bot moved this from In Progress to Done in Issues and PRs Aug 1, 2024

chriswk commented Aug 1, 2024

Added and released as v1.4.5 of the proxy, which should be available in the coming 20 minutes or so.


chriswk commented Aug 1, 2024

Copying what we added from the README:

Prometheus endpoint

The proxy has a Prometheus metrics endpoint available at http://localhost:3000/proxy/internal-backstage/prometheus, with the following metrics:

- unleash_proxy_up: a [counter](https://prometheus.io/docs/concepts/metric_types/#counter) set to 1 when the proxy is running
- last_metrics_update_epoch_timestamp_ms: a [gauge](https://prometheus.io/docs/concepts/metric_types/#gauge) set to the epoch timestamp in ms when the proxy last received a feature update
- last_metrics_fetch_epoch_timestamp_ms: a [gauge](https://prometheus.io/docs/concepts/metric_types/#gauge) set to the epoch timestamp in ms when the proxy last checked upstream for updates
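
As a usage example, the fetch gauge gives an outside freshness check: if it stops advancing, the proxy has stopped polling upstream. A rough sketch using Node's global fetch and a simple regex over the Prometheus text output (the endpoint URL and metric name come from the README above; the threshold and parsing are illustrative):

    // Warn if the proxy hasn't checked upstream for more than five minutes.
    const ENDPOINT = 'http://localhost:3000/proxy/internal-backstage/prometheus';
    const MAX_AGE_MS = 5 * 60 * 1000;

    async function checkFreshness(): Promise<void> {
      const body = await (await fetch(ENDPOINT)).text();
      const match = body.match(/^last_metrics_fetch_epoch_timestamp_ms\s+(\d+(\.\d+)?)/m);
      if (!match) throw new Error('last_metrics_fetch_epoch_timestamp_ms not found');
      const ageMs = Date.now() - Number(match[1]);
      if (ageMs > MAX_AGE_MS) {
        console.warn(`proxy has not polled upstream for ${Math.round(ageMs / 1000)}s`);
      }
    }

    checkFreshness().catch((err) => console.error(err));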
