[9.0] New threadpool-based merge scheduler which is disk space aware #129134
Merged: albertzaharovits merged 14 commits into elastic:9.0 from albertzaharovits:backport-90-disk-space-aware-threadpool-merge-scheduler on Jun 9, 2025
Conversation
This adds a new merge scheduler implementation that uses a (new) dedicated thread pool to run the merges. This way the number of concurrent merges is limited to the number of threads in the pool (i.e. the number of processors allocated to the ES JVM). It implements dynamic IO throttling (roughly the same target IO rate for all merges, with caveats) that is adjusted based on the number of currently active (queued + running) merges. Smaller merges are always preferred to larger ones, irrespective of the index shard they're coming from. The implementation also supports the per-shard "max thread count" and "max merge count" settings, the latter being the one used today for indexing throttling. Note that IO throttling, max merge count, and max thread count work similarly, but not identically, to their siblings in the ConcurrentMergeScheduler. The per-shard merge statistics are not affected, and the thread-pool statistics should reflect the merge ones (i.e. the completed thread-pool stats reflect the total number of merges, across shards, per node).
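As a rough illustration of this scheduling model (a minimal sketch with invented names, not the actual Elasticsearch classes), a node-level executor can bound concurrency with a fixed-size pool and always pick the smallest pending merge from a shared priority queue:

```java
import java.util.Comparator;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.PriorityBlockingQueue;

// Minimal sketch: all names here are illustrative, not the real implementation.
class ThreadPoolMergeExecutorSketch {
    record MergeTask(long estimatedBytes, Runnable doMerge) {}

    // One worker per available processor bounds merge concurrency node-wide.
    private final ExecutorService pool =
        Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    // Smallest merge (by estimated size) first, irrespective of which shard it belongs to.
    private final PriorityBlockingQueue<MergeTask> queue =
        new PriorityBlockingQueue<>(64, Comparator.comparingLong(MergeTask::estimatedBytes));

    void submit(MergeTask task) {
        queue.add(task);
        // Each submission wakes one worker; the worker runs the smallest queued
        // merge, which is not necessarily the task that was just submitted.
        pool.execute(() -> {
            MergeTask next = queue.poll();
            if (next != null) {
                next.doMerge().run();
            }
        });
    }
}
```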
…ep up with the merge load (elastic#125654) Fixes an issue where indexing throttling kicks in while disk IO is still being throttled. Instead, disk IO should first unthrottle, and only then, if merging still can't keep up with the load, should indexing be throttled. Fixes elastic/elasticsearch-benchmarks#2437 Relates elastic#120869
The intent here is to aim for fewer pending merges enqueued for execution, and to unthrottle disk IO at a faster rate when the queue grows longer. Overall this results in less merge disk throttling. Relates elastic/elasticsearch-benchmarks#2437 elastic#120869
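A hedged sketch of that adjustment policy (the names, thresholds, and increase/decrease factors below are assumptions for illustration, not the actual values): raise the shared target IO rate while the merge backlog grows, and back off only when the queue is nearly drained:

```java
// Illustrative sketch only; not the real implementation.
class MergeIoRateAdjuster {
    static final long MIN_IO_RATE = 5L * 1024 * 1024;          // 5 MB/s floor (assumed)
    static final long MAX_IO_RATE = 10L * 1024 * 1024 * 1024;  // 10 GB/s cap (assumed)
    private long targetIoRateBytesPerSec = 50L * 1024 * 1024;  // starting rate (assumed)

    // Called whenever a merge is enqueued or completes.
    synchronized long adjust(int activeAndQueuedMerges) {
        if (activeAndQueuedMerges > 2) {
            // Backlog is growing: unthrottle quickly (multiplicative increase).
            targetIoRateBytesPerSec = Math.min(MAX_IO_RATE, targetIoRateBytesPerSec * 2);
        } else if (activeAndQueuedMerges <= 1) {
            // Queue is nearly empty: back off gently.
            targetIoRateBytesPerSec = Math.max(MIN_IO_RATE, (long) (targetIoRateBytesPerSec * 0.9));
        }
        return targetIoRateBytesPerSec;
    }
}
```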
…nCatchesUp (elastic#125956) We don't know how many semaphore merge permits we need to release, or how many are already released. Fixes elastic#125744
Ensures proper cleanup in the testThrottleStats test. Fixes elastic#125910 elastic#125907 elastic#125912
…27613) This PR introduces 3 new settings: indices.merge.disk.check_interval, indices.merge.disk.watermark.high, and indices.merge.disk.watermark.high.max_headroom, which control whether the threadpool merge executor starts executing new merges when disk space is getting low. The intent of this change is to avoid the situation where in-progress merges exhaust the available disk space on the node's local filesystem. To this end, the thread pool merge executor periodically monitors the available disk space, as well as the disk space currently estimated to be required by all in-progress (currently running) merges on the node, and will NOT schedule any new merges if disk space is getting low (by default, when the available disk space drops below 5% of the total disk space or below 100 GB, whichever is smaller; this is the same as the disk allocation flood stage level).
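A minimal sketch of that check (class and method names are invented for illustration; the 5% / 100 GB default comes from the description above):

```java
// Illustrative sketch only; not the actual Elasticsearch implementation.
class MergeDiskSpaceCheck {
    private final long totalBytes; // total capacity of the node's data path

    MergeDiskSpaceCheck(long totalBytes) {
        this.totalBytes = totalBytes;
    }

    // Default headroom: min(5% of total disk space, 100 GB), matching the
    // disk allocation flood stage level described above.
    long requiredHeadroomBytes() {
        return Math.min(totalBytes / 20, 100L * 1024 * 1024 * 1024);
    }

    // Run periodically (per indices.merge.disk.check_interval): new merges are
    // scheduled only while the free space that would remain after all
    // in-progress merges complete stays above the headroom.
    boolean canScheduleNewMerges(long availableBytes, long inProgressMergesEstimatedBytes) {
        return availableBytes - inProgressMergesEstimatedBytes > requiredHeadroomBytes();
    }
}
```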
Relates to an effort to consolidate the stateless merge scheduler with the current (stateful) merge scheduler from main ES. This PR brings over features required to maintain parity with the stateless scheduler. Specifically, a few methods are added for the stateless scheduler to override:
- an overridable method shouldSkipMerge to test for skipping merges
- 2 additional lifecycle callbacks on the scheduler for when a merge is enqueued and when a merge is executed or aborted; this is used by stateless to track active + queued merges per shard
- overridable methods for enabling/disabling IO/thread/merge count throttling
Other functionality required by the stateless merge scheduler can use the existing callbacks from the stateful scheduler: beforeMerge can be overridden to prewarm, and afterMerge can be overridden to refresh after big merges (a sketch of these extension points follows below). Relates ES-10264 Co-authored-by: elasticsearchmachine <infra-root+elasticsearchmachine@elastic.co>
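A hedged sketch of what such extension points could look like (all class and method names below are assumptions for illustration, not the actual Elasticsearch API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative base scheduler exposing the hooks described above.
abstract class MergeSchedulerSketch {
    // Return true to skip executing a merge entirely.
    protected boolean shouldSkipMerge(String shardId) { return false; }
    // Lifecycle callbacks: a merge was enqueued / executed or aborted.
    protected void mergeQueued(String shardId) {}
    protected void mergeExecutedOrAborted(String shardId) {}
    // Subclasses may turn IO throttling on or off.
    protected boolean ioThrottlingEnabled() { return true; }
}

// A stateless-style subclass tracking active + queued merges per shard.
class StatelessStyleScheduler extends MergeSchedulerSketch {
    private final Map<String, AtomicInteger> activePlusQueued = new ConcurrentHashMap<>();

    @Override
    protected void mergeQueued(String shardId) {
        activePlusQueued.computeIfAbsent(shardId, k -> new AtomicInteger()).incrementAndGet();
    }

    @Override
    protected void mergeExecutedOrAborted(String shardId) {
        activePlusQueued.computeIfPresent(shardId, (k, v) -> { v.decrementAndGet(); return v; });
    }

    @Override
    protected boolean ioThrottlingEnabled() { return false; }
}
```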
Relates ES-10961
albertzaharovits added a commit to albertzaharovits/elasticsearch that referenced this pull request on Jun 9, 2025
…lastic#129134) This is the backport of a few PRs related to the implementation of the new threadpool-based merge scheduler. The new merge scheduler uses a node-level threadpool (sized to the number of CPU cores) to execute all the merges across all the shards on the node, limiting the number of concurrently executing merges, irrespective of the number of shards that the node hosts. Smaller merges continue to have priority over larger ones. In addition, the new merge scheduler implementation also monitors the available disk space on the node, so that it won't start executing any new merges when the available disk space becomes scarce (when the used disk space gets above the indices.merge.disk.watermark.high (95%) limit, the same as the allocation flood stage, the limit that flips shards on the node to read-only). The new merge scheduler is now enabled by default (indices.merge.scheduler.use_thread_pool is true).
Labels
backport
:Distributed Indexing/Distributed
>feature
v9.0.3
This is the backport of a few PRs related to the implementation of the new threadpool-based merge scheduler.
The new merge scheduler uses a node-level threadpool (sized to the number of CPU cores) to execute all the merges across all the shards on the node, limiting the number of concurrently executing merges, irrespective of the number of shards that the node hosts. Smaller merges continue to have priority over larger ones.
In addition, the new merge scheduler implementation also monitors the available disk space on the node, so that it won't start executing any new merges when the available disk space becomes scarce (when the used disk space gets above the indices.merge.disk.watermark.high (95%) limit, the same as the allocation flood stage, the limit that flips shards on the node to read-only). The new merge scheduler is now enabled by default (indices.merge.scheduler.use_thread_pool is true).
Here is the complete list of backported PRs:
See also: #129152
Relates: ES-11701 ES-10046
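For reference, a hedged sketch of building the settings named in this PR with Elasticsearch's Settings builder (the setting keys come from the PR description; the headroom and interval values are assumed example values, and applying them to a real node would go through elasticsearch.yml or the settings APIs rather than this snippet):

```java
import org.elasticsearch.common.settings.Settings;

public class MergeSchedulerSettingsExample {
    public static void main(String[] args) {
        Settings settings = Settings.builder()
            // Enabled by default in this backport; shown explicitly here.
            .put("indices.merge.scheduler.use_thread_pool", true)
            // Stop scheduling new merges above 95% used disk space
            // (the default, per the description above).
            .put("indices.merge.disk.watermark.high", "95%")
            // Cap on the absolute headroom kept free for merges (assumed example value).
            .put("indices.merge.disk.watermark.high.max_headroom", "100gb")
            // How often the executor re-checks available disk space (assumed example value).
            .put("indices.merge.disk.check_interval", "5s")
            .build();
        System.out.println(settings);
    }
}
```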