| | | |
|---|---|---|
| author | Adam Litke <agl@us.ibm.com> | 2006-08-18 11:22:21 -0700 |
| committer | Paul Mackerras <paulus@samba.org> | 2006-08-24 10:07:23 +1000 |
| commit | c9169f8747bb282cbe518132bf7d49755a00b6c1 (patch) | |
| tree | 1357eb203b7e3c80d6ea2036df664e1a3a401555 /arch/powerpc/mm | |
| parent | d55c4a76f26160482158cd43788dcfc96a320a4f (diff) | |
[POWERPC] hugepage BUG fix
On Tue, 2006-08-15 at 08:22 -0700, Dave Hansen wrote:
> kernel BUG in cache_free_debugcheck at mm/slab.c:2748!
Alright, this one is only triggered when slab debugging is enabled. The
slabs are assumed to be aligned on a HUGEPTE_TABLE_SIZE boundary. The free
path makes use of this assumption and uses the lowest nibble to pass around
an index into an array of kmem_cache pointers. With slab debugging turned
on, the slab is still aligned, but the "working" object pointer is not.
This breaks the assumption above that a full nibble of the pointer is available for
PGF_CACHENUM_MASK.
The following patch reduces PGF_CACHENUM_MASK to cover only the two least
significant bits, which is enough to index the current four pgtable cache
types. The patch then uses this constant, rather than HUGEPTE_TABLE_SIZE-1,
to mask out the cache-number bits of the huge pte pointer.
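For readers not familiar with the trick being patched: the free path smuggles a
pgtable-cache index in the low bits of the table pointer, and the mask passed to
pgtable_free_cache() decides how many of those bits get clobbered. Below is a
minimal, self-contained userspace sketch of the idea; the helper names, addresses,
and mask sizes are illustrative assumptions, not the kernel's actual pgalloc code.

```c
#include <stdio.h>
#include <stdint.h>

/* Illustrative sketch only -- not the real powerpc pgalloc helpers.
 * Two low bits are reserved for the cache index, matching the patched
 * PGF_CACHENUM_MASK value of 0x3 (enough for 4 pgtable cache types). */
#define PGF_CACHENUM_MASK 0x3UL

/* Pack: clear only the reserved bits of the pointer, then OR the index in. */
static uintptr_t pack(void *p, unsigned int cachenum, uintptr_t mask)
{
	return ((uintptr_t)p & ~mask) | cachenum;
}

/* Unpack: recover the cache index and the original object pointer. */
static void unpack(uintptr_t val, void **p, unsigned int *cachenum)
{
	*cachenum = (unsigned int)(val & PGF_CACHENUM_MASK);
	*p = (void *)(val & ~PGF_CACHENUM_MASK);
}

int main(void)
{
	/* With slab debugging, the object pointer is offset inside an aligned
	 * slab (hypothetical address), so bits above bit 1 may be nonzero. */
	void *obj = (void *)0x10008UL;
	void *back;
	unsigned int idx;

	/* Masking only the two cache-number bits round-trips the pointer. */
	unpack(pack(obj, 2, PGF_CACHENUM_MASK), &back, &idx);
	printf("mask=0x3:   ptr=%p idx=%u\n", back, idx);   /* 0x10008, 2 */

	/* Masking with HUGEPTE_TABLE_SIZE-1 (say 0xfff) strips real address
	 * bits from the debug-offset pointer -- the freed pointer no longer
	 * matches the allocated object, which is the BUG this patch fixes. */
	unpack(pack(obj, 2, 0xfffUL), &back, &idx);
	printf("mask=0xfff: ptr=%p idx=%u\n", back, idx);   /* 0x10000, 2 */

	return 0;
}
```

Run as-is, the second line shows the pointer snapped back to the (hypothetical) slab
boundary, which is exactly the mismatch cache_free_debugcheck() trips over.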
Signed-off-by: Adam Litke <agl@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Diffstat (limited to 'arch/powerpc/mm')
-rw-r--r-- | arch/powerpc/mm/hugetlbpage.c | 2 |
1 file changed, 1 insertion, 1 deletion
```diff
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 266b8b2ceac..5615acc2952 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -153,7 +153,7 @@ static void free_hugepte_range(struct mmu_gather *tlb, hugepd_t *hpdp)
 	hpdp->pd = 0;
 	tlb->need_flush = 1;
 	pgtable_free_tlb(tlb, pgtable_free_cache(hugepte, HUGEPTE_CACHE_NUM,
-						 HUGEPTE_TABLE_SIZE-1));
+						 PGF_CACHENUM_MASK));
 }
 
 #ifdef CONFIG_PPC_64K_PAGES
```