| author | Benjamin Herrenschmidt <benh@kernel.crashing.org> | 2005-09-23 13:24:07 -0700 |
|---|---|---|
| committer | Linus Torvalds <torvalds@g5.osdl.org> | 2005-09-23 13:35:36 -0700 |
| commit | 67b108131df1230bad20a7279a8897de123d690b (patch) | |
| tree | d9551fb8d2d610840e5538760ac5ba61c8c32c7e /arch/ppc64/mm/hugetlbpage.c | |
| parent | 2601c2e278863cd48c01bce1377b4c9747893025 (diff) | |
[PATCH] ppc64: Fix huge pages MMU mapping bug
The current kernel has a couple of sneaky bugs in the ppc64 hugetlb code that
can leave huge pages stale in the hash table and TLBs (improperly
invalidated), with all the nasty consequences that entails.
One is that we forgot to set the "secondary" bit in the hash PTEs when
hashing a huge page in the secondary bucket (fortunately very rare).
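For context, both PTE groups are derived from the same hash value; here is a
minimal sketch of that computation (not part of this patch; `htab_hash_mask`
and `HPTES_PER_GROUP` are real kernel symbols redeclared so the sketch stands
alone, while the two helper functions are hypothetical):

```c
/*
 * Sketch only, not from this patch.  htab_hash_mask and
 * HPTES_PER_GROUP are real kernel symbols (redeclared here so the
 * sketch is self-contained); the helper functions are hypothetical.
 */
extern unsigned long htab_hash_mask;	/* set up at hash-table init */
#define HPTES_PER_GROUP 8		/* HPTEs per PTE group (PTEG) */

/* Primary PTE group: derived directly from the hash value. */
static unsigned long primary_group(unsigned long hash)
{
	return ((hash & htab_hash_mask) * HPTES_PER_GROUP) & ~0x7UL;
}

/* Secondary PTE group: the same formula on the complemented hash. */
static unsigned long secondary_group(unsigned long hash)
{
	return ((~hash & htab_hash_mask) * HPTES_PER_GROUP) & ~0x7UL;
}
```

An HPTE records which of the two hash functions placed it (HPTE_V_SECONDARY
in the kernel's encoding); roughly speaking, if that bit is not set on an
entry sitting in the secondary group, later lookups and invalidations search
for it with the wrong hash, and the stale translation survives. That is the
first bug.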
The other is that on non-LPAR machines (such as Apple G5s), flush_hash_range(),
which is used to flush a batch of hash PTEs, simply did not work for huge pages.
Historically, our huge page code didn't batch flushes, but that was changed
without fixing this routine. This patch fixes both bugs.
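The flush_hash_range() change itself is not in the file shown in the diffstat
below. The failure mode it addresses can be sketched as follows (a hedged
sketch of the bug class only; every identifier here is hypothetical, standing
in for the real batch-flush path):

```c
/*
 * Hedged sketch of the bug class, not the real flush_hash_range().
 * All identifiers below are hypothetical; the externs are declared
 * only so the sketch is self-contained.
 */
extern unsigned long hpt_hash_4k(unsigned long va);
extern unsigned long hpt_hash_huge(unsigned long va);
extern void invalidate_hpte(unsigned long hash, unsigned long va, int huge);

struct hpte_flush_entry {
	unsigned long va;	/* virtual address being unmapped */
	int huge;		/* non-zero for a huge-page mapping */
};

static void flush_batch(struct hpte_flush_entry *batch, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		/*
		 * The hash must be computed with the mapping's real
		 * page size.  Treating every entry as a normal page
		 * (the pre-fix behaviour) picks the wrong PTE group
		 * for huge pages, so their HPTEs are never found and
		 * stay stale in the hash table and TLBs.
		 */
		unsigned long hash = batch[i].huge ?
			hpt_hash_huge(batch[i].va) :
			hpt_hash_4k(batch[i].va);

		invalidate_hpte(hash, batch[i].va, batch[i].huge);
	}
}
```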
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'arch/ppc64/mm/hugetlbpage.c')
| -rw-r--r-- | arch/ppc64/mm/hugetlbpage.c | 7 |
|---|---|---|

1 file changed, 5 insertions, 2 deletions
diff --git a/arch/ppc64/mm/hugetlbpage.c b/arch/ppc64/mm/hugetlbpage.c
index 338771ec70d..0ea0994ed97 100644
--- a/arch/ppc64/mm/hugetlbpage.c
+++ b/arch/ppc64/mm/hugetlbpage.c
@@ -710,10 +710,13 @@ repeat:
 		hpte_group = ((~hash & htab_hash_mask) *
 			      HPTES_PER_GROUP) & ~0x7UL;
 		slot = ppc_md.hpte_insert(hpte_group, va, prpn,
-					  HPTE_V_LARGE, rflags);
+					  HPTE_V_LARGE |
+					  HPTE_V_SECONDARY,
+					  rflags);
 		if (slot == -1) {
 			if (mftb() & 0x1)
-				hpte_group = ((hash & htab_hash_mask) * HPTES_PER_GROUP) & ~0x7UL;
+				hpte_group = ((hash & htab_hash_mask) *
+					      HPTES_PER_GROUP)&~0x7UL;
 
 			ppc_md.hpte_remove(hpte_group);
 			goto repeat;
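In the hunk above, the functional change is the first one: HPTE_V_SECONDARY
is now ORed into the flags passed to ppc_md.hpte_insert() when the entry goes
into the secondary group (the one derived from ~hash), marking it as
secondary-hashed. The second change only re-wraps the eviction path, in which
the low bit of the timebase (mftb() & 0x1) pseudo-randomly selects which of
the two full groups to evict an entry from via ppc_md.hpte_remove() before
retrying the insert with goto repeat.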