author	Dave Hansen <dave@linux.vnet.ibm.com>	2008-12-09 08:21:32 +0000
committer	Benjamin Herrenschmidt <benh@kernel.crashing.org>	2009-01-08 16:25:08 +1100
commit	c555e520ef794a94dc36a8ded93ece6369ff7ca0 (patch)
tree	b1be29d74a73ff31af677b8cd68019edd57ae24e /arch/powerpc/mm/numa.c
parent	afcb065450913745027169d906b9afc8294f7007 (diff)
powerpc/mm: Add better comment on careful_allocation()
The behavior in careful_allocation() really confused me
at first. Add a comment to hopefully make it easier
on the next doofus that looks at it.
Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Diffstat (limited to 'arch/powerpc/mm/numa.c')
-rw-r--r--	arch/powerpc/mm/numa.c	12
1 file changed, 10 insertions, 2 deletions
diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
index cf81049e1e5..213664c9cdc 100644
--- a/arch/powerpc/mm/numa.c
+++ b/arch/powerpc/mm/numa.c
@@ -840,8 +840,16 @@ static void __init *careful_allocation(int nid, unsigned long size,
 			size, nid);
 
 	/*
-	 * If the memory came from a previously allocated node, we must
-	 * retry with the bootmem allocator.
+	 * We initialize the nodes in numeric order: 0, 1, 2...
+	 * and hand over control from the LMB allocator to the
+	 * bootmem allocator.  If this function is called for
+	 * node 5, then we know that all nodes <5 are using the
+	 * bootmem allocator instead of the LMB allocator.
+	 *
+	 * So, check the nid from which this allocation came
+	 * and double check to see if we need to use bootmem
+	 * instead of the LMB.  We don't free the LMB memory
+	 * since it would be useless.
 	 */
 	new_nid = early_pfn_to_nid(ret >> PAGE_SHIFT);
 	if (new_nid < nid) {
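For context, the invariant that the new comment documents is: nodes are brought up in numeric order, and once a node has been set up its memory is managed by the bootmem allocator rather than LMB. If an LMB allocation requested for node `nid` actually lands on a lower-numbered node (`new_nid < nid`), that node has already switched to bootmem, so the allocation is redone there. The following stand-alone C sketch is only a toy model of that handover order, not kernel code; `NR_NODES`, `lmb_alloc_on()`, `bootmem_alloc_on()` and the `node_uses_bootmem[]` flag are made up for illustration.

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_NODES 4

/* Toy model: nodes are initialized in numeric order; once a node is
 * initialized, its memory is managed by "bootmem" instead of "LMB". */
static bool node_uses_bootmem[NR_NODES];

/* Pretend allocator: returns the node the memory actually landed on.
 * Like LMB, it may satisfy the request from a different (here always
 * lower-numbered) node when the requested node has nothing free. */
static int lmb_alloc_on(int nid)
{
	return nid > 0 ? nid - 1 : nid;
}

static int bootmem_alloc_on(int nid)
{
	return nid;
}

/* Mirrors the check the patch comments on: if the LMB allocation came
 * from a node below nid, that node is already using bootmem, so redo
 * the allocation with bootmem on that node. */
static int careful_allocation(int nid)
{
	int new_nid = lmb_alloc_on(nid);

	if (new_nid < nid) {
		/* Numeric bring-up order guarantees this. */
		assert(node_uses_bootmem[new_nid]);
		printf("node %d: LMB memory is on node %d, retrying with bootmem\n",
		       nid, new_nid);
		return bootmem_alloc_on(new_nid);
	}
	return new_nid;
}

int main(void)
{
	for (int nid = 0; nid < NR_NODES; nid++) {
		careful_allocation(nid);
		/* Node is now set up; from here on it uses bootmem. */
		node_uses_bootmem[nid] = true;
	}
	return 0;
}

In the kernel itself the fallback path allocates from the bootmem allocator for `new_nid`, and, as the new comment notes, the memory already reserved through LMB is simply left in place rather than freed, since freeing it there would be useless.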