author		Chris Metcalf <cmetcalf@tilera.com>	2012-04-01 14:04:21 -0400
committer	Chris Metcalf <cmetcalf@tilera.com>	2012-05-25 12:48:27 -0400
commit		621b19551507c8fd9d721f4038509c5bb155a983 (patch)
tree		62d8d5e7a783364940153b4523fcfba821cee241 /arch/tile/include/asm/page.h
parent		d9ed9faac283a3be73f0e11a2ef49ee55aece4db (diff)
arch/tile: support multiple huge page sizes dynamically
This change adds support for a new "super" bit in the PTE, using the new
arch_make_huge_pte() method. The Tilera hypervisor sees the bit set at a
given level of the page table and gangs together 4, 16, or 64 consecutive
pages from that level of the hierarchy to create a larger TLB entry.
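As a rough illustration, an arch_make_huge_pte() override for this scheme might look like the sketch below. This is not the exact code from the patch: pte_mksuper() stands in for whatever helper sets the new "super" bit, and the size test simply assumes that sizes matching a whole page-table level (PMD_SIZE/PUD_SIZE) need no ganging.

/* Sketch of an arch_make_huge_pte() override (hypothetical placement in
 * arch/tile/include/asm/hugetlb.h).  Sizes that already correspond to a
 * full page-table level need no ganging; any other huge page size is
 * marked "super" so the hypervisor gangs consecutive entries together.
 */
#define arch_make_huge_pte arch_make_huge_pte
static inline pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,
				       struct page *page, int writable)
{
	size_t pagesize = huge_page_size(hstate_vma(vma));

	if (pagesize != PUD_SIZE && pagesize != PMD_SIZE)
		entry = pte_mksuper(entry);	/* assumed helper for the new bit */
	return entry;
}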
One extra "super" page size can be specified at each of the three levels
of the page table hierarchy on tilegx, using the "hugepagesz" argument
on the boot command line. A new hypervisor API is added to allow Linux
to tell the hypervisor how many PTEs to gang together at each level of
the page table.
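For reference, the generic way an architecture feeds extra sizes into the hugetlb core is a "hugepagesz=" boot-parameter handler that registers an hstate. A minimal sketch is shown below; it is not the actual tile handler and omits the per-level bookkeeping this patch adds so the hypervisor can be told how many PTEs to gang.

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/hugetlb.h>
#include <linux/log2.h>

/* Minimal sketch: parse "hugepagesz=<size>" and register the size with
 * the generic hugetlb code via hugetlb_add_hstate().  The real handler
 * also works out which page-table level the size maps to.
 */
static __init int setup_hugepagesz(char *opt)
{
	unsigned long size = memparse(opt, &opt);

	if (size > PAGE_SIZE && is_power_of_2(size))
		hugetlb_add_hstate(ilog2(size) - PAGE_SHIFT);
	else
		pr_err("hugepagesz: unsupported page size %lu\n", size);
	return 1;
}
__setup("hugepagesz=", setup_hugepagesz);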
To allow pre-allocating huge pages larger than the buddy allocator can
handle, this change modifies the Tilera bootmem support to put all of
memory on tilegx platforms into bootmem.
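In practice this means very large pages can be reserved from the boot command line before the buddy allocator takes over, along the lines of the following (the sizes are purely illustrative, not a statement of which sizes a given tilegx configuration supports):

hugepagesz=1G hugepages=2 hugepagesz=64M hugepages=16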
As part of this change I eliminate the vestigial CONFIG_HIGHPTE support, which never worked anyway, and replace the hv_page_size() API with the standard vma_kernel_pagesize() API.
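A call site that previously asked the hypervisor layer for a VMA's page size can use the generic helper directly, roughly as below (illustrative only; the wrapper name is made up for the example):

#include <linux/mm.h>
#include <linux/hugetlb.h>

/* Illustrative call-site conversion: vma_kernel_pagesize() returns the
 * huge page size backing a hugetlb VMA and PAGE_SIZE otherwise, so it
 * can stand in for the old hypervisor-specific hv_page_size().
 */
static unsigned long flush_size_for_vma(struct vm_area_struct *vma)
{
	return vma_kernel_pagesize(vma);	/* was: hv_page_size(vma) */
}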
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Diffstat (limited to 'arch/tile/include/asm/page.h')
-rw-r--r--	arch/tile/include/asm/page.h | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/arch/tile/include/asm/page.h b/arch/tile/include/asm/page.h
index c750943f961..9d9131e5c55 100644
--- a/arch/tile/include/asm/page.h
+++ b/arch/tile/include/asm/page.h
@@ -87,8 +87,7 @@ typedef HV_PTE pgprot_t;
 /*
  * User L2 page tables are managed as one L2 page table per page,
  * because we use the page allocator for them. This keeps the allocation
- * simple and makes it potentially useful to implement HIGHPTE at some point.
- * However, it's also inefficient, since L2 page tables are much smaller
+ * simple, but it's also inefficient, since L2 page tables are much smaller
  * than pages (currently 2KB vs 64KB). So we should revisit this.
  */
 typedef struct page *pgtable_t;
@@ -137,7 +136,7 @@ static inline __attribute_const__ int get_order(unsigned long size)
 
 #define HUGETLB_PAGE_ORDER	(HPAGE_SHIFT - PAGE_SHIFT)
 
-#define HUGE_MAX_HSTATE		2
+#define HUGE_MAX_HSTATE		6
 
 #ifdef CONFIG_HUGETLB_PAGE
 #define HAVE_ARCH_HUGETLB_UNMAPPED_AREA