path: root/include/asm-x86/pgtable_64.h
author     Hugh Dickins <hugh@veritas.com>          2008-09-09 16:42:45 +0100
committer  Ingo Molnar <mingo@elte.hu>              2008-09-10 10:00:42 +0200
commit     91030ca1e739696812242c807b112ee3981a14be (patch)
tree       e4aeed87ee5909f51de37b924da1b593fc599c28 /include/asm-x86/pgtable_64.h
parent     adee14b2e1557d0a8559f29681732d05a89dfc35 (diff)
x86: unsigned long pte_pfn
pte_pfn() has always been of type unsigned long, even on 32-bit PAE; but in
the current tip/next/mm tree it works out to be unsigned long long on 64-bit,
which gives an irritating warning if you try to printk a pfn with the usual %lx.

Now use the same pte_pfn() function, moved from pgtable-3level.h to pgtable.h,
for all models: as suggested by Jeremy Fitzhardinge. And pte_page() can well
move along with it (remaining a macro to avoid dependence on mm_types.h).

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Acked-by: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
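For reference, a minimal sketch of what the unified helpers in include/asm-x86/pgtable.h
could look like after this move. The mask mirrors the 64-bit line removed below; the exact
mask macro used in the merged patch is an assumption, not taken from this page.

    /*
     * Sketch only: pte_pfn() returns unsigned long on all page table
     * models, so a pfn can be printed with the usual %lx, and
     * pte_page() remains a macro to avoid a dependence on mm_types.h.
     */
    static inline unsigned long pte_pfn(pte_t pte)
    {
            return (pte_val(pte) & __PHYSICAL_MASK) >> PAGE_SHIFT;
    }

    #define pte_page(pte)   pfn_to_page(pte_pfn(pte))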
Diffstat (limited to 'include/asm-x86/pgtable_64.h')
-rw-r--r--   include/asm-x86/pgtable_64.h   2
1 file changed, 0 insertions, 2 deletions
diff --git a/include/asm-x86/pgtable_64.h b/include/asm-x86/pgtable_64.h
index 549144d03d9..e454e4ec016 100644
--- a/include/asm-x86/pgtable_64.h
+++ b/include/asm-x86/pgtable_64.h
@@ -175,8 +175,6 @@ static inline int pmd_bad(pmd_t pmd)
 #define pte_present(x)	(pte_val((x)) & (_PAGE_PRESENT | _PAGE_PROTNONE))
 
 #define pages_to_mb(x)	((x) >> (20 - PAGE_SHIFT))	/* FIXME: is this right? */
-#define pte_page(x)	pfn_to_page(pte_pfn((x)))
-#define pte_pfn(x)	((pte_val((x)) & __PHYSICAL_MASK) >> PAGE_SHIFT)
 
 /*
  * Macro to mark a page protection value as "uncacheable".