| author | Kefeng Wang <wangkefeng.wang@huawei.com> | 2025-09-10 21:39:58 +0800 |
|---|---|---|
| committer | Andrew Morton <akpm@linux-foundation.org> | 2025-09-21 14:22:12 -0700 |
| commit | 4fe2a8107f332a46ed284fb961a4ddb39a105509 | |
| tree | 8c64c9e645bd2767b3b406b684766e9be6e41460 | /mm/hugetlb.c |
| parent | dd4d324bc02c7b14ae5dd864d185d1403648c74d | |
mm: hugetlb: check NUMA_NO_NODE in only_alloc_fresh_hugetlb_folio()
Move the NUMA_NO_NODE check out of the buddy and gigantic folio allocation
paths and into only_alloc_fresh_hugetlb_folio() to clean up the code a bit;
this also avoids NUMA_NO_NODE being passed as 'nid' to node_isset() in
alloc_buddy_hugetlb_folio(). (A standalone sketch of this pattern follows
the diff below.)
Link: https://lkml.kernel.org/r/20250910133958.301467-6-wangkefeng.wang@huawei.com
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Acked-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Reviewed-by: Jane Chu <jane.chu@oracle.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Cc: Brendan Jackman <jackmanb@google.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm/hugetlb.c')
| -rw-r--r-- | mm/hugetlb.c | 7 |
1 file changed, 3 insertions(+), 4 deletions(-)
```diff
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 1783b9e7c338..d2471a0b6002 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1479,8 +1479,6 @@ static struct folio *alloc_gigantic_folio(int order, gfp_t gfp_mask,
 	struct folio *folio;
 	bool retried = false;
 
-	if (nid == NUMA_NO_NODE)
-		nid = numa_mem_id();
 retry:
 	folio = hugetlb_cma_alloc_folio(order, gfp_mask, nid, nodemask);
 	if (!folio) {
@@ -1942,8 +1940,6 @@ static struct folio *alloc_buddy_hugetlb_folio(int order, gfp_t gfp_mask,
 		alloc_try_hard = false;
 	if (alloc_try_hard)
 		gfp_mask |= __GFP_RETRY_MAYFAIL;
-	if (nid == NUMA_NO_NODE)
-		nid = numa_mem_id();
 
 	folio = (struct folio *)__alloc_frozen_pages(gfp_mask, order, nid, nmask);
 
@@ -1979,6 +1975,9 @@ static struct folio *only_alloc_fresh_hugetlb_folio(struct hstate *h,
 	struct folio *folio;
 	int order = huge_page_order(h);
 
+	if (nid == NUMA_NO_NODE)
+		nid = numa_mem_id();
+
 	if (order_is_gigantic(order))
 		folio = alloc_gigantic_folio(order, gfp_mask, nid, nmask);
 	else
```
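The hunks above boil down to hoisting the default-node resolution into the
common caller: only_alloc_fresh_hugetlb_folio() resolves NUMA_NO_NODE once,
so the gigantic and buddy helpers only ever see a concrete node id. As a
rough, self-contained illustration of that pattern (not kernel code: the
fake_* helpers, the simplified signatures, and the printf output are all
stand-ins invented for this sketch), a userspace model might look like
this:

```c
#include <stdbool.h>
#include <stdio.h>

#define NUMA_NO_NODE (-1)

/* Stand-in for numa_mem_id(): pretend the current CPU sits on node 0. */
static int fake_numa_mem_id(void)
{
	return 0;
}

/*
 * Stand-ins for alloc_gigantic_folio()/alloc_buddy_hugetlb_folio():
 * after the patch they can assume 'nid' is already a concrete node id.
 */
static bool fake_alloc_gigantic(int nid)
{
	printf("gigantic folio allocated on node %d\n", nid);
	return true;
}

static bool fake_alloc_buddy(int nid)
{
	printf("buddy folio allocated on node %d\n", nid);
	return true;
}

/* Models only_alloc_fresh_hugetlb_folio(): resolve the sentinel once here. */
static bool only_alloc_fresh(bool gigantic, int nid)
{
	if (nid == NUMA_NO_NODE)
		nid = fake_numa_mem_id();

	return gigantic ? fake_alloc_gigantic(nid) : fake_alloc_buddy(nid);
}

int main(void)
{
	only_alloc_fresh(false, NUMA_NO_NODE);	/* defaults to node 0 */
	only_alloc_fresh(true, 1);		/* explicit node passes through */
	return 0;
}
```

The design point mirrors the commit message: callers of the common entry
point may still pass the NUMA_NO_NODE sentinel, but the per-allocator
helpers never do, so a check like node_isset() in
alloc_buddy_hugetlb_folio() is never handed -1 as a node id.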