diff options
| author | Coly Li <colyli@fnnas.com> | 2025-11-13 13:36:26 +0800 |
|---|---|---|
| committer | Jens Axboe <axboe@kernel.dk> | 2025-11-13 09:18:06 -0700 |
| commit | 70bc173ce06be90b026bb00ea175567c91f006e4 (patch) | |
| tree | d350e8e65c30c7780742f4ba84167292ca82a3d9 /drivers/md/bcache/alloc.c | |
| parent | 7bf90cd740bf87dd1692cf74d49bb1dc849dcd11 (diff) | |
bcache: reduce gc latency by processing fewer nodes and sleeping less
When a bcache device is busy under high I/O load, there are two ways to
reduce garbage collection latency:
- Process fewer nodes in each loop of incremental garbage collection in
btree_gc_recurse().
- Sleep less time between two full garbage collections in
bch_btree_gc().
This patch introduces two helper routines that provide different garbage
collection node counts and sleep interval times.
- btree_gc_min_nodes()
If there is no front-end I/O, return 128 nodes to process in each
incremental loop; otherwise only 10 nodes are returned. Then front-end
I/O is able to access the btree earlier.
- btree_gc_sleep_ms()
If there is no synchronized wait for bucket allocation, sleep 100 ms
between two incremental GC loops. Otherwise only sleep 10 ms before the
next incremental GC loop. Then a faster GC may provide available buckets
earlier, to avoid most bcache worker threads being starved by bucket
allocation.
The idea is inspired by work from Mingzhe Zou and Robert Pang, but is
much simpler, and its expected behavior is more predictable.
Signed-off-by: Coly Li <colyli@fnnas.com>
Signed-off-by: Robert Pang <robertpang@google.com>
Signed-off-by: Mingzhe Zou <mingzhe.zou@easystack.cn>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'drivers/md/bcache/alloc.c')
| -rw-r--r-- | drivers/md/bcache/alloc.c | 4 |
1 files changed, 4 insertions, 0 deletions
diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c
index db3684819e38..7708d92df23e 100644
--- a/drivers/md/bcache/alloc.c
+++ b/drivers/md/bcache/alloc.c
@@ -399,7 +399,11 @@ long bch_bucket_alloc(struct cache *ca, unsigned int reserve, bool wait)
 				TASK_UNINTERRUPTIBLE);
 
 		mutex_unlock(&ca->set->bucket_lock);
+
+		atomic_inc(&ca->set->bucket_wait_cnt);
 		schedule();
+		atomic_dec(&ca->set->bucket_wait_cnt);
+
 		mutex_lock(&ca->set->bucket_lock);
 	} while (!fifo_pop(&ca->free[RESERVE_NONE], r) &&
 		 !fifo_pop(&ca->free[reserve], r));