Enhancement: persist commit index in LogStore to accelerate recovery #613
Conversation
Proposal for next PR: Lines 1292 to 1359 in 42d3446
func (r *Raft) processLogs(index uint64, futures map[uint64]*logFuture) {
	// Reject logs we've applied already
	lastApplied := r.getLastApplied()
	if index <= lastApplied {
		r.logger.Warn("skipping application of old log", "index", index)
		return
	}
+	if r.fastRecovery && isCommitTrackingLogStore(r.logs) {
+		store := r.logs.(CommitTrackingLogStore)
+		if err := store.SetCommitIndex(index); err != nil {
+			r.logger.Error("failed to persist commit index", "index", index, "error", err)
+		}
+	}
	// ...
	// Update the lastApplied index and term
	r.setLastApplied(index)
}
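For context, a rough sketch of what the `CommitTrackingLogStore` extension and the checker used in the snippet above might look like. This is only an illustration of the shape discussed in this thread (it assumes the raft package's existing `LogStore` interface); the exact method set and doc comments in the PR may differ.

```go
// Sketch only: the optional LogStore extension discussed in this PR.
type CommitTrackingLogStore interface {
	LogStore

	// SetCommitIndex stages the latest commit index so the store can persist
	// it alongside log data (see the StoreLogs discussion later in this thread).
	SetCommitIndex(index uint64) error

	// GetCommitIndex returns the commit index that was last persisted,
	// or (0, nil) if none has been stored yet.
	GetCommitIndex() (uint64, error)
}

// isCommitTrackingLogStore reports whether the configured LogStore also
// implements the optional CommitTrackingLogStore extension.
func isCommitTrackingLogStore(s LogStore) bool {
	_, ok := s.(CommitTrackingLogStore)
	return ok
}
```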
As a long-term user of this library, this could be useful. However, I would strongly recommend that this functionality be wrapped in a flag, settable in the Raft Config object (similar to …).
@otoolep Thanks for the suggestion, I would add …
Thanks for starting on this @lalalalatt.
The interface looks good. I have a comment inline, and given that I'm suggesting removing half the lines in this PR, it might make sense to also add the change you proposed for your "next PR" into this one?
I also think it might be worth rebasing this PR onto a feature branch like f/fast-recovery, so we can keep PRs small to review but not merge code into main until it's a working feature that can be released. (I realise this would be a no-op, but if we don't complete the feature it will be unused code that will need later cleanup or completion.)
What do you think of that approach?
@banks Sure, that sounds like a good plan. Could you help me create the `f/fast-recovery` branch?
Drive-by comment.
That's not the right way to think about the development flow (not unless this repo is managed in some unusual way). You create the branch in your own fork of this repo, and then generate a PR from that branch in your fork to the main branch in this repo.
@otoolep in general that is usually how GH works. In this case I wonder if we should have a long-running branch here so that we can keep PRs small and review the changes in small parts, rather than wait until the entire feature is built and have to review it all in one huge PR from the fork to here. One way to do that would be for me to create a long-lived feature branch here; another would be for us to review PRs in a fork, but that leaves all the interim review in a separate place 🤔.

On reflection, I'm not sure what is gained by trying to make this many small PRs yet. Let's continue work here and see how large this PR gets as more of the code is built before we worry too much about making feature branches. Sorry for the suggestion that wasn't very well thought through!
Ok, looks good!
- Introduced a `fastRecovery` flag in the Raft structure and configuration to enable fast recovery mode.
- Updated `NewRaft` to initialize `fastRecovery` from the configuration.
- Added `persistCommitIndex` function to store the commit index when fast recovery is enabled.
- Modified `processLogs` to persist the commit index before updating `lastApplied`.
- Documented the `FastRecovery` option in the config.
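A minimal sketch of what the `persistCommitIndex` helper described in this commit could look like, assuming the flag and interface from earlier in the thread. The names follow the commit message, but the body is a guess for illustration, not the PR's exact code.

```go
// persistCommitIndex sketch: stage the commit index in the LogStore when fast
// recovery is enabled and the store supports commit tracking. The error
// handling shown here is an assumption.
func (r *Raft) persistCommitIndex(index uint64) {
	if !r.fastRecovery {
		return
	}
	store, ok := r.logs.(CommitTrackingLogStore)
	if !ok {
		return
	}
	if err := store.SetCommitIndex(index); err != nil {
		r.logger.Error("failed to persist commit index", "index", index, "error", err)
	}
}
```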
- Implemented `recoverFromCommitedLogs` function to recover the Raft node from committed logs.
- If `fastRecovery` is enabled and the log store implements `CommitTrackingLogStore`, the commit index is read from the store, avoiding the need to replay logs.
- Logs between the last applied and commit index are fed into the FSM for faster recovery.
…ency
- Refactor `ReadCommitIndex` to `GetCommitIndex` across `LogStore` and `InmemCommitTrackingStore`.
- Introduce `InmemCommitTrackingStore` to track commit index in memory for testing purposes.
- Add locking mechanism to safely read/write commit index in `InmemCommitTrackingStore`.
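For illustration, an in-memory commit-tracking store along the lines this commit describes might look roughly like the following. It assumes it lives in the raft package next to the existing `InmemStore` (with `sync` imported); the real `InmemCommitTrackingStore` may differ in detail.

```go
// InmemCommitTrackingStore sketch: wraps the existing in-memory store and
// tracks the commit index under a lock, for use in tests.
type InmemCommitTrackingStore struct {
	*InmemStore

	mu          sync.RWMutex
	commitIndex uint64
}

// NewInmemCommitTrackingStore returns a commit-tracking store backed by InmemStore.
func NewInmemCommitTrackingStore() *InmemCommitTrackingStore {
	return &InmemCommitTrackingStore{InmemStore: NewInmemStore()}
}

// SetCommitIndex records the commit index under a write lock.
func (s *InmemCommitTrackingStore) SetCommitIndex(index uint64) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.commitIndex = index
	return nil
}

// GetCommitIndex reads the recorded commit index under a read lock, returning
// (0, nil) when nothing has been stored yet.
func (s *InmemCommitTrackingStore) GetCommitIndex() (uint64, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.commitIndex, nil
}
```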
CommitTrackingLogStore and its checker in LogStore
Hey @lalalalatt, thanks for working on this.
I spotted a couple of things that we should tweak here. I've added comments inline - let me know if anything is confusing. I didn't do a "full" final review yet, but these should get it more in line with my original design. Feel free to explain if I'm misunderstanding anything though!
@banks Thanks for all these comments 😄 But with the current change, it is possible to apply the same configurations twice; although that wouldn't violate correctness, it would still be wasteful.
I don't think storing the commit index every time before … is necessary.

For followers, when they are appending logs received from the leader, the commit index may not increase, because the leader may not have received majority agreement on the corresponding logs.

For the leader, as I mentioned previously, it persists logs by calling …

My idea is: the commit index is only updated in two places:
They both trigger …

Maybe that's my misunderstanding 😅; if there is anything wrong, please correct me~
Hi @peterxcli. The idea behind persisting the commit index during StoreLogs is to make sure it doesn't cause worse performance. Writing to disk is the slowest part of raft in most implementations. If we add a new write to disk anywhere, performance will decrease, and that's not acceptable for our products at least, and probably not for other users of this library. So the whole design was intended to persist the current known commit index along with the logs every time we write new logs: yes, it's a few extra bytes, but it's the …
This is true, but the …

If we only call this during …
You're correct that all the logs we store when we call …

Again, this is so that we can "re-use" the fsync on the …

Most of the same arguments as above apply here: if we wait until …

As an aside, if we were OK with adding more disk writes, I'd not have proposed this as an extension of …

Does that make sense?
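To make the point above concrete, here is a rough sketch (not code from this PR) of how a disk-backed store could piggyback the staged commit index on the fsync it already performs in StoreLogs. `diskLogStore` and `writeBatch` are hypothetical names used only for illustration, and the snippet assumes the raft package's `Log` type plus an imported `sync`.

```go
// diskLogStore sketch: SetCommitIndex only stages the value in memory, and the
// next StoreLogs call writes it out together with the log batch.
type diskLogStore struct {
	mu          sync.Mutex
	commitIndex uint64 // staged commit index, flushed with the next batch
}

// SetCommitIndex stages the commit index; no extra disk write happens here.
func (s *diskLogStore) SetCommitIndex(index uint64) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.commitIndex = index
	return nil
}

// StoreLogs writes the log batch plus the staged commit index in one
// transaction, so the commit index rides on the fsync we already pay for.
func (s *diskLogStore) StoreLogs(logs []*Log) error {
	s.mu.Lock()
	commit := s.commitIndex
	s.mu.Unlock()
	return s.writeBatch(logs, commit)
}

// writeBatch is a stand-in for the store's real batch write: append the encoded
// logs and the commit index inside a single transaction, then fsync once.
func (s *diskLogStore) writeBatch(logs []*Log, commitIndex uint64) error {
	return nil // placeholder for the actual disk write
}
```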
@banks oh~~ It seems like I completely misunderstood your design.
Got it, thanks~ 😄
That's true 😅 @banks Thanks very much for your clarification~
… until the next StoreLogs call and then write it out on disk along with the log data.
- Update makeCluster to return (*cluster, error)
- Modify MakeCluster, MakeClusterNoBootstrap, and MakeClusterCustom to return error
- Update all test cases to handle potential errors from cluster creation
- Replace t.Fatalf() calls with t.Logf() and error returns in makeCluster

BREAKING CHANGE: MakeCluster, MakeClusterNoBootstrap, and MakeClusterCustom now return an additional error value, which needs to be handled in existing tests.
To support error assertion for …
Document fix commits: …
Error mode testing: …
🎉
// makeCluster will return a cluster with the given config and number of peers.
// If bootstrap is true, the servers will know about each other before starting,
// otherwise their transports will be wired up but they won't yet have configured
// each other.
-func makeCluster(t *testing.T, opts *MakeClusterOpts) *cluster {
+func makeCluster(t *testing.T, opts *MakeClusterOpts) (*cluster, error) {
AFAICT this was added to support one test that asserts a specific error is returned. Since this causes tons of other test churn, I have a slight preference for reverting the error return here and special-casing the one test that needs to check the error return of NewRaft. Not a blocker though.
This reverts commit cfffcb5.
- Add PropagateError option to MakeClusterOpts
- Update makeCluster to return (*cluster, error)
- Modify MakeCluster, MakeClusterNo…
Thank you again for the prompt update @lalalalatt!
I left a minor suggestion for a comment; other than that, from a testing perspective I think we can add more tests that cover the following cases (a rough test sketch follows this list):
- the log store returns (0, nil) all the time (no commit index is in the store)
- GetCommitIndex returns a value that is bigger than the last index
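Here is a hedged sketch of how those two cases might be exercised, reusing the in-memory commit-tracking store idea from earlier in the thread. The test names are assumptions, and a real test would boot a node via the cluster helpers in this PR rather than only poking the store directly.

```go
// Case 1 sketch: a store with no stored commit index must report (0, nil), so
// recovery falls back to the normal replay path.
func TestFastRecovery_NoStoredCommitIndex(t *testing.T) {
	store := NewInmemCommitTrackingStore()
	idx, err := store.GetCommitIndex()
	if err != nil || idx != 0 {
		t.Fatalf("expected (0, nil), got (%d, %v)", idx, err)
	}
}

// Case 2 sketch: if GetCommitIndex reports a value beyond the store's last log
// index, recovery should clamp to the last index per the documented contract.
func TestFastRecovery_CommitIndexBeyondLastIndex(t *testing.T) {
	store := NewInmemCommitTrackingStore()
	if err := store.SetCommitIndex(100); err != nil {
		t.Fatalf("SetCommitIndex: %v", err)
	}
	idx, err := store.GetCommitIndex()
	if err != nil || idx != 100 {
		t.Fatalf("expected (100, nil), got (%d, %v)", idx, err)
	}
	// A full test would start a Raft node with this (still empty) log store and
	// assert that lastApplied never exceeds the store's actual last index.
}
```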
log.go
// GetCommitIndex returns the latest persisted commit index from the latest log entry
// in the store at startup.
// It is ok to return a value higher than the last index in the log (But it should never happen).
-// It is ok to return a value higher than the last index in the log (But it should never happen).
+// GetCommitIndex should not return a value higher than the last index in the log. If that happens, the last index in the log will be used.
I think we should also document here that GetCommitIndex needs to return (0, nil) when no commit index is found in the log.
…face
- Specify that GetCommitIndex should not return a value higher than the last index in the log
- Clarify that if a higher value is returned, the last index in the log will be used instead
- Add instruction to return (0, nil) when no commit index is found in the log store
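Putting the suggested wording together, the documented contract on the method might read roughly as follows. This is paraphrased from the review comments above, not the PR's exact text.

```go
// Sketch of the documented contract discussed in this thread.
type CommitTrackingLogStore interface {
	LogStore

	// GetCommitIndex returns the latest persisted commit index from the
	// latest log entry in the store at startup.
	//
	// It should not return a value higher than the last index in the log;
	// if that happens, the last index in the log will be used instead.
	//
	// When no commit index is found in the log store, it must return (0, nil).
	GetCommitIndex() (uint64, error)
}
```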
Hi @banks, any new progress on this?
Thanks @lalalalatt, and sorry it's been a while on this one; time passes! I think @dhiaayachi, @schmichael, and @tgross did a great job following up after my initial review, so I've dismissed my concerns, which seem to be outdated now. I've not had a chance to do another thorough review yet, so I'll not approve right now, but I've unblocked the merge if one of the others was already at a point ready to approve.
@peterxcli thanks for the links. I think their approach is essentially the same as this one; note that this is why I pushed this design towards …

Their specific approach of having an actual log record type for it instead of having …
I've had a chance to re-review and I'm 👍 on merging, once the merge conflicts have been resolved.
@tgross Resolved, PTAL. Thanks!
LGTM
This is just a test of hashicorp/raft#613 and not intended for merging as-is. Since there is no corresponding raft-boltdb implementation, it's not possible to functionally test this change yet.
Thanks for sticking with this @lalalalatt and everyone else. I'm +1 on merging.
A note to others: there's currently no corresponding implementation in https://github.com/hashicorp/raft-boltdb (AFAICT), so it's not possible to functionally test this in products like HashiCorp Nomad.
That being said, I'm fine with merging first so we can treat the API contract as finalized.
#549