| author | Eric Biggers <ebiggers@kernel.org> | 2025-11-15 15:08:16 -0800 |
|---|---|---|
| committer | Herbert Xu <herbert@gondor.apana.org.au> | 2025-11-22 10:04:50 +0800 |
| commit | 4dffc9bbffb9ccfcda730d899c97c553599e7ca8 (patch) | |
| tree | 7bb0c52032be9bd5475ad11b1e0d3f9e9da820e5 /include | |
| parent | 5727a844a3f79b835470aff44a39f6d970ec45dd (diff) | |
crypto: scatterwalk - Fix memcpy_sglist() to always succeed
The original implementation of memcpy_sglist() was broken because it
didn't handle scatterlists that describe exactly the same memory, a
case that many callers rely on. The current implementation is broken
too because it calls the skcipher_walk functions, which can fail, and
it ignores any errors from those functions.
Fix it by replacing it with a new implementation, written from scratch,
that always succeeds. It's also a bit faster, since it avoids the
overhead of skcipher_walk, which includes a lot of functionality (such
as alignmask handling) that's irrelevant here.
Reported-by: Colin Ian King <coking@nvidia.com>
Closes: https://lore.kernel.org/r/20251114122620.111623-1-coking@nvidia.com
Fixes: 131bdceca1f0 ("crypto: scatterwalk - Add memcpy_sglist")
Fixes: 0f8d42bf128d ("crypto: scatterwalk - Move skcipher walk and use it for memcpy_sglist")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
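As a concrete illustration of the aliasing case mentioned above (a hypothetical caller, not code from this patch): an in-place operation may pass the very same scatterlist as both source and destination, and memcpy_sglist() must treat that as a successful, if redundant, copy:

```c
#include <linux/scatterlist.h>
#include <crypto/scatterwalk.h>

/* Hypothetical caller, for illustration only: src and dst describe
 * exactly the same memory.  The original memcpy_sglist() mishandled
 * this; the fixed version must simply succeed. */
static void inplace_copy_example(void *buf, unsigned int len)
{
	struct scatterlist sg;

	sg_init_one(&sg, buf, len);
	memcpy_sglist(&sg, &sg, len);	/* src == dst: must still work */
}
```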
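The replacement implementation itself lives in crypto/scatterwalk.c, which this page's diffstat (limited to 'include') does not show. What follows is only a sketch of the approach the message describes, assuming the scatterwalk_start()/scatterwalk_next() helpers declared in this header; the real code may differ in detail:

```c
#include <crypto/scatterwalk.h>
#include <linux/minmax.h>
#include <linux/string.h>

/*
 * Sketch only, not the actual crypto/scatterwalk.c code: an open-coded
 * two-walk copy loop.  It performs no allocations and calls nothing
 * that can fail, so there are no error paths to ignore.
 */
void memcpy_sglist(struct scatterlist *dst, struct scatterlist *src,
		   unsigned int nbytes)
{
	struct scatter_walk swalk = {};
	struct scatter_walk dwalk = {};

	if (unlikely(nbytes == 0)) /* in case sg == NULL */
		return;

	scatterwalk_start(&swalk, src);
	scatterwalk_start(&dwalk, dst);

	do {
		unsigned int slen, dlen, len;

		/* Map the next contiguous piece of each scatterlist. */
		slen = scatterwalk_next(&swalk, nbytes);
		dlen = scatterwalk_next(&dwalk, nbytes);
		len = min(slen, dlen);

		/* If both lists alias the same memory, this copies each
		 * chunk onto itself, which is harmless. */
		memcpy(dwalk.addr, swalk.addr, len);

		/* Unmap in reverse order of mapping; done_dst() also
		 * flushes the dcache where the architecture needs it. */
		scatterwalk_done_dst(&dwalk, len);
		scatterwalk_done_src(&swalk, len);
		nbytes -= len;
	} while (nbytes);
}
```

Note the absence of any return value: unlike the skcipher_walk-based version, there is nothing here that can fail partway through.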
Diffstat (limited to 'include')
| -rw-r--r-- | include/crypto/scatterwalk.h | 52 |
1 file changed, 31 insertions(+), 21 deletions(-)
diff --git a/include/crypto/scatterwalk.h b/include/crypto/scatterwalk.h
index 83d14376ff2b..f485454e3955 100644
--- a/include/crypto/scatterwalk.h
+++ b/include/crypto/scatterwalk.h
@@ -227,6 +227,34 @@ static inline void scatterwalk_done_src(struct scatter_walk *walk,
 	scatterwalk_advance(walk, nbytes);
 }
 
+/*
+ * Flush the dcache of any pages that overlap the region
+ * [offset, offset + nbytes) relative to base_page.
+ *
+ * This should be called only when ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, to ensure
+ * that all relevant code (including the call to sg_page() in the caller, if
+ * applicable) gets fully optimized out when !ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE.
+ */
+static inline void __scatterwalk_flush_dcache_pages(struct page *base_page,
+						    unsigned int offset,
+						    unsigned int nbytes)
+{
+	unsigned int num_pages;
+
+	base_page += offset / PAGE_SIZE;
+	offset %= PAGE_SIZE;
+
+	/*
+	 * This is an overflow-safe version of
+	 * num_pages = DIV_ROUND_UP(offset + nbytes, PAGE_SIZE).
+	 */
+	num_pages = nbytes / PAGE_SIZE;
+	num_pages += DIV_ROUND_UP(offset + (nbytes % PAGE_SIZE), PAGE_SIZE);
+
+	for (unsigned int i = 0; i < num_pages; i++)
+		flush_dcache_page(base_page + i);
+}
+
 /**
  * scatterwalk_done_dst() - Finish one step of a walk of destination scatterlist
  * @walk: the scatter_walk
@@ -240,27 +268,9 @@ static inline void scatterwalk_done_dst(struct scatter_walk *walk,
 				 unsigned int nbytes)
 {
 	scatterwalk_unmap(walk);
-	/*
-	 * Explicitly check ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE instead of just
-	 * relying on flush_dcache_page() being a no-op when not implemented,
-	 * since otherwise the BUG_ON in sg_page() does not get optimized out.
-	 * This also avoids having to consider whether the loop would get
-	 * reliably optimized out or not.
-	 */
-	if (ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE) {
-		struct page *base_page;
-		unsigned int offset;
-		int start, end, i;
-
-		base_page = sg_page(walk->sg);
-		offset = walk->offset;
-		start = offset >> PAGE_SHIFT;
-		end = start + (nbytes >> PAGE_SHIFT);
-		end += (offset_in_page(offset) + offset_in_page(nbytes) +
-			PAGE_SIZE - 1) >> PAGE_SHIFT;
-		for (i = start; i < end; i++)
-			flush_dcache_page(base_page + i);
-	}
+	if (ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE)
+		__scatterwalk_flush_dcache_pages(sg_page(walk->sg),
+						 walk->offset, nbytes);
 	scatterwalk_advance(walk, nbytes);
 }
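The "overflow-safe" comment in the new helper is worth unpacking. After `offset %= PAGE_SIZE`, both `offset` and `nbytes % PAGE_SIZE` are below PAGE_SIZE, so `offset + (nbytes % PAGE_SIZE)` is at most 2 * PAGE_SIZE - 2 and no intermediate sum can wrap, whereas the naive `DIV_ROUND_UP(offset + nbytes, PAGE_SIZE)` wraps around for `nbytes` near UINT_MAX. A standalone userspace demonstration (a hypothetical harness, not kernel code; assumes 4 KiB pages and 32-bit unsigned int):

```c
#include <stdio.h>
#include <limits.h>

#define PAGE_SIZE 4096u
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	unsigned int offset = PAGE_SIZE - 1;	/* already reduced mod PAGE_SIZE */
	unsigned int nbytes = UINT_MAX;

	/* Naive form: offset + nbytes wraps around (and DIV_ROUND_UP adds
	 * another PAGE_SIZE - 1 on top), yielding a bogus page count. */
	unsigned int naive = DIV_ROUND_UP(offset + nbytes, PAGE_SIZE);

	/* Two-step form from the patch: whole pages first, then the
	 * remainder, where offset + (nbytes % PAGE_SIZE) <= 2 * PAGE_SIZE - 2
	 * and therefore cannot wrap. */
	unsigned int num_pages = nbytes / PAGE_SIZE;
	num_pages += DIV_ROUND_UP(offset + (nbytes % PAGE_SIZE), PAGE_SIZE);

	printf("naive: %u pages, safe: %u pages\n", naive, num_pages);
	return 0;
}
```

With these inputs, `offset + nbytes` wraps to 4094 and the naive form reports 1 page, while the two-step form reports the correct 1048577.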