From cb6c33d4dc09a8fddda1867708956c27615775f4 Mon Sep 17 00:00:00 2001
From: Wenchao Hao
Date: Thu, 8 Dec 2022 22:21:30 +0800
Subject: cma: tracing: print alloc result in trace_cma_alloc_finish

The result of the allocation attempt is not printed in
trace_cma_alloc_finish, but it is important to print it so we can set
filters to catch specific errors on allocation, or to trigger some
operations on specific errors.

The result is already printed in the kernel log, but that message is
conditional and cannot be filtered by tracing events. Printing it in
the trace event introduces little overhead.

The result of the allocation is named `errorno' in the trace.

Link: https://lkml.kernel.org/r/20221208142130.1501195-1-haowenchao@huawei.com
Signed-off-by: Wenchao Hao
Cc: Masami Hiramatsu (Google)
Cc: Steven Rostedt (Google)
Signed-off-by: Andrew Morton
---
 mm/cma.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

(limited to 'mm/cma.c')

diff --git a/mm/cma.c b/mm/cma.c
index 4a978e09547a88..a75b17b03b66ad 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -491,7 +491,7 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
 		start = bitmap_no + mask + 1;
 	}

-	trace_cma_alloc_finish(cma->name, pfn, page, count, align);
+	trace_cma_alloc_finish(cma->name, pfn, page, count, align, ret);

	/*
	 * CMA can allocate multiple page blocks, which results in different
--
cgit 1.2.3-korg

From 148aa87e4f631e98d926d006604116fd2b2f3a93 Mon Sep 17 00:00:00 2001
From: Levi Yun
Date: Wed, 18 Jan 2023 17:05:23 +0900
Subject: mm/cma: fix potential memory loss on cma_declare_contiguous_nid

Suppose memblock_alloc_range_nid() with highmem_start succeeds when
cma_declare_contiguous_nid() is called with !fixed on a 32-bit system
with PHYS_ADDR_T_64BIT enabled and memblock.bottom_up == false. The
next call to memblock_alloc_range_nid(), which tries to allocate in
[SIZE_4G, limit), overwrites the formerly allocated addr and, on
failure, retries memblock_alloc_range_nid(). In this situation, the
first successfully allocated address range is lost.
Change the order of the allocation attempts (SIZE_4G, high_memory and
base) and check whether each allocation succeeded to prevent this
potential memory loss.

Link: https://lkml.kernel.org/r/20230118080523.44522-1-ppbuk5246@gmail.com
Signed-off-by: Levi Yun
Cc: Laurent Pinchart
Cc: Marek Szyprowski
Cc: Joonsoo Kim
Cc: Minchan Kim
Signed-off-by: Andrew Morton
---
 mm/cma.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

(limited to 'mm/cma.c')

diff --git a/mm/cma.c b/mm/cma.c
index a75b17b03b66ad..a7263aa02c92d6 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -321,18 +321,6 @@ int __init cma_declare_contiguous_nid(phys_addr_t base,
 	} else {
 		phys_addr_t addr = 0;

-		/*
-		 * All pages in the reserved area must come from the same zone.
-		 * If the requested region crosses the low/high memory boundary,
-		 * try allocating from high memory first and fall back to low
-		 * memory in case of failure.
-		 */
-		if (base < highmem_start && limit > highmem_start) {
-			addr = memblock_alloc_range_nid(size, alignment,
-					highmem_start, limit, nid, true);
-			limit = highmem_start;
-		}
-
		/*
		 * If there is enough memory, try a bottom-up allocation first.
		 * It will place the new cma area close to the start of the node
@@ -350,6 +338,18 @@ int __init cma_declare_contiguous_nid(phys_addr_t base,
 		}
 #endif

+		/*
+		 * All pages in the reserved area must come from the same zone.
+		 * If the requested region crosses the low/high memory boundary,
+		 * try allocating from high memory first and fall back to low
+		 * memory in case of failure.
+		 */
+		if (!addr && base < highmem_start && limit > highmem_start) {
+			addr = memblock_alloc_range_nid(size, alignment,
+					highmem_start, limit, nid, true);
+			limit = highmem_start;
+		}
+
 		if (!addr) {
 			addr = memblock_alloc_range_nid(size, alignment,
 					base, limit, nid, true);
--
cgit 1.2.3-korg