fix(event cache): wait for the initial previous-batch token, when storage's enabled #4724
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

@@            Coverage Diff            @@
##             main    #4724     +/-  ##
=========================================
- Coverage   86.13%   86.13%   -0.01%
=========================================
  Files         291      291
  Lines       34300    34304       +4
=========================================
+ Hits        29546    29549       +3
- Misses       4754     4755       +1

View full report in Codecov by Sentry.
Thanks for the test. There's one thing I don't understand, but it's documented and tested, so I approve this PR.
self.propagate_changes().await?;

// If we've never waited for an initial previous-batch token, and we now have at
Why is it here? I feel like this is not the correct place, but it's a gut feeling.
This is a central place to do it, since we'd need to do it every time the linked chunk has been modified / after a gap has potentially been added. That happens during sync and when resolving a gap, so here is likely the right place.
I'm tempted to rethink the prev-batch-token waiting mechanism, because 1. it causes a lot of code everywhere, and 2. it's likely not used very often.
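For illustration, here's a minimal sketch of the waiting mechanism being discussed, built on a `tokio::sync::watch` channel. All names (`PrevBatchWaiter`, `on_changes_propagated`, `wait_for_initial_token`) are hypothetical stand-ins, not the actual event-cache API:

```rust
use tokio::sync::watch;

struct PrevBatchWaiter {
    /// Holds the previous-batch token once sync has delivered one.
    tx: watch::Sender<Option<String>>,
}

impl PrevBatchWaiter {
    fn new() -> Self {
        let (tx, _rx) = watch::channel(None);
        Self { tx }
    }

    /// Called from the central location discussed above, i.e. every time
    /// the linked chunk has been modified and a gap may have been added.
    fn on_changes_propagated(&self, prev_batch: Option<String>) {
        if prev_batch.is_some() {
            // Publish the token and wake every waiter.
            self.tx.send_replace(prev_batch);
        }
    }

    /// Called by back-pagination before concluding that the start of the
    /// timeline was reached; returns immediately if a token is already known.
    async fn wait_for_initial_token(&self) -> Option<String> {
        let mut rx = self.tx.subscribe();
        match rx.wait_for(|token| token.is_some()).await {
            Ok(token) => (*token).clone(),
            Err(_) => None, // sender side dropped; stop waiting
        }
    }
}
```

In practice a timeout around `wait_for_initial_token` would be needed so back-pagination can't hang forever if sync never delivers a token.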
Thanks for the explanation. Makes sense to me.
Force-pushed from 14dc0ed to aa2ed02.
Force-pushed from aa2ed02 to 9c1e119.
If, during back-pagination, we lazily load a chunk from storage and realize there's no previous chunk, we might think we've reached the start of the timeline and that we're done. That's not true if this was the first default chunk and we never waited long enough to receive the initial gap from sync. This patch fixes that, and includes a regression test exhibiting the error.
Part of #3280.
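To make the fix concrete, here's a hedged sketch of the back-pagination decision described above. All names (`Cache`, `Outcome`, `load_previous_chunk`, `wait_for_initial_token`) are illustrative stand-ins, not the real event-cache internals:

```rust
enum Outcome {
    /// A previous chunk was lazily loaded from storage.
    Loaded(Vec<String>),
    /// The start of the timeline was genuinely reached.
    ReachedStart,
    /// The initial previous-batch token arrived; retry pagination with it.
    RetryWith(Option<String>),
}

struct Cache {
    waited_for_initial_prev_batch: bool,
}

impl Cache {
    /// Stub: pretend storage has no chunk before the current one.
    async fn load_previous_chunk(&mut self) -> Option<Vec<String>> {
        None
    }

    /// Stub: block until sync stores the first previous-batch token.
    async fn wait_for_initial_token(&self) -> Option<String> {
        Some("prev_batch_token".to_owned())
    }

    async fn paginate_backwards(&mut self) -> Outcome {
        match self.load_previous_chunk().await {
            Some(events) => Outcome::Loaded(events),
            // Only conclude we're done if we already waited once for the
            // initial token: a missing previous chunk then really means
            // the start of the timeline.
            None if self.waited_for_initial_prev_batch => Outcome::ReachedStart,
            None => {
                // The bug: this case used to fall through to `ReachedStart`,
                // even though this could be the first default chunk and sync
                // hadn't delivered the initial gap yet. Wait, then retry.
                let token = self.wait_for_initial_token().await;
                self.waited_for_initial_prev_batch = true;
                Outcome::RetryWith(token)
            }
        }
    }
}
```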