Diffstat (limited to 'Documentation/vm')
-rw-r--r--  Documentation/vm/cleancache.txt       | 2
-rw-r--r--  Documentation/vm/slub.txt             | 7
-rw-r--r--  Documentation/vm/unevictable-lru.txt  | 8
3 files changed, 10 insertions, 7 deletions
diff --git a/Documentation/vm/cleancache.txt b/Documentation/vm/cleancache.txt
index 36c367c7308..d5c615af10b 100644
--- a/Documentation/vm/cleancache.txt
+++ b/Documentation/vm/cleancache.txt
@@ -92,7 +92,7 @@ failed_gets - number of gets that failed
puts - number of puts attempted (all "succeed")
flushes - number of flushes attempted
-A backend implementatation may provide additional metrics.
+A backend implementation may provide additional metrics.
FAQ
diff --git a/Documentation/vm/slub.txt b/Documentation/vm/slub.txt
index f464f47bc60..6752870c497 100644
--- a/Documentation/vm/slub.txt
+++ b/Documentation/vm/slub.txt
@@ -117,7 +117,7 @@ can be influenced by kernel parameters:
slub_min_objects=x (default 4)
slub_min_order=x (default 0)
-slub_max_order=x (default 1)
+slub_max_order=x (default 3 (PAGE_ALLOC_COSTLY_ORDER))
slub_min_objects allows to specify how many objects must at least fit
into one slab in order for the allocation order to be acceptable.
@@ -131,7 +131,10 @@ slub_min_objects.
slub_max_order specified the order at which slub_min_objects should no
longer be checked. This is useful to avoid SLUB trying to generate
super large order pages to fit slub_min_objects of a slab cache with
-large object sizes into one high order page.
+large object sizes into one high order page. Setting the command line
+parameter debug_guardpage_minorder=N (N > 0) forces slub_max_order
+to 0, which results in slabs being allocated at the minimum possible
+order.
SLUB Debug output
-----------------
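
The interaction between slub_min_objects and slub_max_order changed in the
hunk above can be illustrated with a small sketch. This is not the real
mm/slub.c code; the function name, the fixed PAGE_SIZE value and the exact
rounding are assumptions made only for illustration.

/*
 * Illustrative sketch of the order-selection heuristic described in the
 * slub.txt hunk above.  Not the real mm/slub.c code; the function name,
 * PAGE_SIZE value and rounding details are assumptions.
 */
#include <stdio.h>

#define PAGE_SIZE 4096UL

/*
 * Return the smallest page order whose slab holds at least min_objects
 * objects of obj_size bytes, capped at max_order.  With the boot
 * parameter debug_guardpage_minorder=N (N > 0), max_order would be 0,
 * so every slab falls back to a single page.
 */
static unsigned int calc_slab_order(unsigned long obj_size,
                                    unsigned int min_objects,
                                    unsigned int max_order)
{
        unsigned int order;

        for (order = 0; order <= max_order; order++) {
                unsigned long slab_bytes = PAGE_SIZE << order;

                if (slab_bytes / obj_size >= min_objects)
                        return order;
        }
        /* min_objects cannot be honoured within the cap: use the cap. */
        return max_order;
}

int main(void)
{
        /* 2 KiB objects, slub_min_objects=4, slub_max_order=3 -> order 1 */
        printf("order = %u\n", calc_slab_order(2048, 4, 3));
        /* same cache with debug_guardpage_minorder set -> order 0 */
        printf("order = %u\n", calc_slab_order(2048, 4, 0));
        return 0;
}

Under these assumptions the first call selects order 1 (an 8 KiB slab
holding four 2 KiB objects), while the capped call stays at order 0,
matching the behaviour the paragraph describes.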
diff --git a/Documentation/vm/unevictable-lru.txt b/Documentation/vm/unevictable-lru.txt
index 97bae3c576c..fa206cccf89 100644
--- a/Documentation/vm/unevictable-lru.txt
+++ b/Documentation/vm/unevictable-lru.txt
@@ -538,7 +538,7 @@ different reverse map mechanisms.
process because mlocked pages are migratable. However, for reclaim, if
the page is mapped into a VM_LOCKED VMA, the scan stops.
- try_to_unmap_anon() attempts to acquire in read mode the mmap semphore of
+ try_to_unmap_anon() attempts to acquire in read mode the mmap semaphore of
the mm_struct to which the VMA belongs. If this is successful, it will
mlock the page via mlock_vma_page() - we wouldn't have gotten to
try_to_unmap_anon() if the page were already mlocked - and will return
@@ -619,11 +619,11 @@ all PTEs from the page. For this purpose, the unevictable/mlock infrastructure
introduced a variant of try_to_unmap() called try_to_munlock().
try_to_munlock() calls the same functions as try_to_unmap() for anonymous and
-mapped file pages with an additional argument specifing unlock versus unmap
+mapped file pages with an additional argument specifying unlock versus unmap
processing. Again, these functions walk the respective reverse maps looking
for VM_LOCKED VMAs. When such a VMA is found for anonymous pages and file
pages mapped in linear VMAs, as in the try_to_unmap() case, the functions
-attempt to acquire the associated mmap semphore, mlock the page via
+attempt to acquire the associated mmap semaphore, mlock the page via
mlock_vma_page() and return SWAP_MLOCK. This effectively undoes the
pre-clearing of the page's PG_mlocked done by munlock_vma_page.
@@ -641,7 +641,7 @@ with it - the usual fallback position.
Note that try_to_munlock()'s reverse map walk must visit every VMA in a page's
reverse map to determine that a page is NOT mapped into any VM_LOCKED VMA.
However, the scan can terminate when it encounters a VM_LOCKED VMA and can
-successfully acquire the VMA's mmap semphore for read and mlock the page.
+successfully acquire the VMA's mmap semaphore for read and mlock the page.
Although try_to_munlock() might be called a great many times when munlocking a
large region or tearing down a large address space that has been mlocked via
mlockall(), overall this is a fairly rare event.
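
The unevictable-lru.txt hunks above describe try_to_unmap()/try_to_munlock()
walking a page's reverse map, taking the mmap semaphore for read and mlocking
the page when a VM_LOCKED VMA is found. The sketch below mirrors that control
flow only; every type and helper in it (struct vma, struct page, the
trylock/mlock stubs) is a simplified stand-in, not the kernel's real
mm/rmap.c code.

/*
 * Simplified model of the try_to_munlock()-style reverse map walk
 * described above.  The types and helpers are stand-ins for the kernel's
 * vm_area_struct, page, down_read_trylock() and mlock_vma_page(); the
 * SWAP_* values mirror the return codes mentioned in the text.
 */
#include <stdbool.h>
#include <stddef.h>

enum scan_result { SWAP_SUCCESS, SWAP_AGAIN, SWAP_MLOCK };

struct vma {
        bool vm_locked;           /* stands in for VM_LOCKED in vm_flags  */
        struct vma *next_mapping; /* next VMA in the page's reverse map   */
};

struct page {
        struct vma *rmap;         /* head of the page's reverse map chain */
};

/* Stub stand-ins for down_read_trylock(&mm->mmap_sem), mlock_vma_page(). */
static bool mmap_sem_read_trylock(struct vma *vma) { (void)vma; return true; }
static void mmap_sem_read_unlock(struct vma *vma)  { (void)vma; }
static void mlock_page(struct page *page)          { (void)page; }

/*
 * Visit each VMA mapping the page.  On the first VM_LOCKED VMA whose mmap
 * semaphore can be taken for read, mlock the page and stop the walk with
 * SWAP_MLOCK - the early termination the text describes.  If no VM_LOCKED
 * VMA maps the page, the whole reverse map has been scanned.
 */
static enum scan_result try_to_munlock_sketch(struct page *page)
{
        struct vma *vma;

        for (vma = page->rmap; vma; vma = vma->next_mapping) {
                if (!vma->vm_locked)
                        continue;
                if (!mmap_sem_read_trylock(vma))
                        return SWAP_AGAIN;  /* could not check; retry later */
                mlock_page(page);
                mmap_sem_read_unlock(vma);
                return SWAP_MLOCK;          /* page stays unevictable */
        }
        return SWAP_SUCCESS;                /* no VM_LOCKED VMA maps the page */
}

int main(void)
{
        struct vma locked = { .vm_locked = true,  .next_mapping = NULL };
        struct vma plain  = { .vm_locked = false, .next_mapping = &locked };
        struct page page  = { .rmap = &plain };

        /* The walk skips the unlocked VMA, then mlocks via the locked one. */
        return try_to_munlock_sketch(&page) == SWAP_MLOCK ? 0 : 1;
}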