2024-10-12 a5969cabbb4660eab42b6ef0412cbbd1200cf14d
kernel/Documentation/vm/transhuge.rst
@@ -4,8 +4,9 @@
 Transparent Hugepage Support
 ============================
 
-This document describes design principles Transparent Hugepage (THP)
-Support and its interaction with other parts of the memory management.
+This document describes design principles for Transparent Hugepage (THP)
+support and its interaction with other parts of the memory management
+system.
 
 Design principles
 =================
@@ -37,31 +38,25 @@
 
 get_user_pages and follow_page if run on a hugepage, will return the
 head or tail pages as usual (exactly as they would do on
-hugetlbfs). Most gup users will only care about the actual physical
+hugetlbfs). Most GUP users will only care about the actual physical
 address of the page and its temporary pinning to release after the I/O
 is complete, so they won't ever notice the fact the page is huge. But
 if any driver is going to mangle over the page structure of the tail
 page (like for checking page->mapping or other bits that are relevant
 for the head page and not the tail page), it should be updated to jump
-to check head page instead. Taking reference on any head/tail page would
-prevent page from being split by anyone.
+to check head page instead. Taking a reference on any head/tail page would
+prevent the page from being split by anyone.
 
 .. note::
    these aren't new constraints to the GUP API, and they match the
-   same constrains that applies to hugetlbfs too, so any driver capable
+   same constraints that apply to hugetlbfs too, so any driver capable
    of handling GUP on hugetlbfs will also work fine on transparent
    hugepage backed mappings.
 
 In case you can't handle compound pages if they're returned by
-follow_page, the FOLL_SPLIT bit can be specified as parameter to
+follow_page, the FOLL_SPLIT bit can be specified as a parameter to
 follow_page, so that it will split the hugepages before returning
-them. Migration for example passes FOLL_SPLIT as parameter to
-follow_page because it's not hugepage aware and in fact it can't work
-at all on hugetlbfs (but it instead works fine on transparent
-hugepages thanks to FOLL_SPLIT). migration simply can't deal with
-hugepages being returned (as it's not only checking the pfn of the
-page and pinning it during the copy but it pretends to migrate the
-memory in regular page sizes and with regular pte/pmd mappings).
+them.
 
 Graceful fallback
 =================
@@ -72,11 +67,11 @@
 by just grepping for "pmd_offset" and adding split_huge_pmd where
 missing after pmd_offset returns the pmd. Thanks to the graceful
 fallback design, with a one liner change, you can avoid to write
-hundred if not thousand of lines of complex code to make your code
+hundreds if not thousands of lines of complex code to make your code
 hugepage aware.
 
 If you're not walking pagetables but you run into a physical hugepage
-but you can't handle it natively in your code, you can split it by
+that you can't handle natively in your code, you can split it by
 calling split_huge_page(page). This is what the Linux VM does before
 it tries to swapout the hugepage for example. split_huge_page() can fail
 if the page is pinned and you must handle this correctly.
@@ -103,18 +98,18 @@
 
 To make pagetable walks huge pmd aware, all you need to do is to call
 pmd_trans_huge() on the pmd returned by pmd_offset. You must hold the
-mmap_sem in read (or write) mode to be sure an huge pmd cannot be
+mmap_lock in read (or write) mode to be sure a huge pmd cannot be
 created from under you by khugepaged (khugepaged collapse_huge_page
-takes the mmap_sem in write mode in addition to the anon_vma lock). If
+takes the mmap_lock in write mode in addition to the anon_vma lock). If
 pmd_trans_huge returns false, you just fallback in the old code
 paths. If instead pmd_trans_huge returns true, you have to take the
 page table lock (pmd_lock()) and re-run pmd_trans_huge. Taking the
-page table lock will prevent the huge pmd to be converted into a
+page table lock will prevent the huge pmd being converted into a
 regular pmd from under you (split_huge_pmd can run in parallel to the
 pagetable walk). If the second pmd_trans_huge returns false, you
 should just drop the page table lock and fallback to the old code as
-before. Otherwise you can proceed to process the huge pmd and the
-hugepage natively. Once finished you can drop the page table lock.
+before. Otherwise, you can proceed to process the huge pmd and the
+hugepage natively. Once finished, you can drop the page table lock.
 
 Refcounts and transparent huge pages
 ====================================
@@ -122,61 +117,61 @@
 Refcounting on THP is mostly consistent with refcounting on other compound
 pages:
 
- - get_page()/put_page() and GUP operate in head page's ->_refcount.
+ - get_page()/put_page() and GUP operate on head page's ->_refcount.
 
  - ->_refcount in tail pages is always zero: get_page_unless_zero() never
-   succeed on tail pages.
+   succeeds on tail pages.
 
  - map/unmap of the pages with PTE entry increment/decrement ->_mapcount
    on relevant sub-page of the compound page.
 
- - map/unmap of the whole compound page accounted in compound_mapcount
+ - map/unmap of the whole compound page is accounted for in compound_mapcount
    (stored in first tail page). For file huge pages, we also increment
    ->_mapcount of all sub-pages in order to have race-free detection of
    last unmap of subpages.
 
 PageDoubleMap() indicates that the page is *possibly* mapped with PTEs.
 
-For anonymous pages PageDoubleMap() also indicates ->_mapcount in all
+For anonymous pages, PageDoubleMap() also indicates ->_mapcount in all
 subpages is offset up by one. This additional reference is required to
 get race-free detection of unmap of subpages when we have them mapped with
 both PMDs and PTEs.
 
-This is optimization required to lower overhead of per-subpage mapcount
-tracking. The alternative is alter ->_mapcount in all subpages on each
+This optimization is required to lower the overhead of per-subpage mapcount
+tracking. The alternative is to alter ->_mapcount in all subpages on each
 map/unmap of the whole compound page.
 
-For anonymous pages, we set PG_double_map when a PMD of the page got split
-for the first time, but still have PMD mapping. The additional references
-go away with last compound_mapcount.
+For anonymous pages, we set PG_double_map when a PMD of the page is split
+for the first time, but still have a PMD mapping. The additional references
+go away with the last compound_mapcount.
 
-File pages get PG_double_map set on first map of the page with PTE and
-goes away when the page gets evicted from page cache.
+File pages get PG_double_map set on the first map of the page with PTE and
+goes away when the page gets evicted from the page cache.
 
 split_huge_page internally has to distribute the refcounts in the head
 page to the tail pages before clearing all PG_head/tail bits from the page
 structures. It can be done easily for refcounts taken by page table
-entries. But we don't have enough information on how to distribute any
+entries, but we don't have enough information on how to distribute any
 additional pins (i.e. from get_user_pages). split_huge_page() fails any
-requests to split pinned huge page: it expects page count to be equal to
-sum of mapcount of all sub-pages plus one (split_huge_page caller must
-have reference for head page).
+requests to split pinned huge pages: it expects page count to be equal to
+the sum of mapcount of all sub-pages plus one (split_huge_page caller must
+have a reference to the head page).
 
 split_huge_page uses migration entries to stabilize page->_refcount and
-page->_mapcount of anonymous pages. File pages just got unmapped.
+page->_mapcount of anonymous pages. File pages just get unmapped.
 
-We safe against physical memory scanners too: the only legitimate way
-scanner can get reference to a page is get_page_unless_zero().
+We are safe against physical memory scanners too: the only legitimate way
+a scanner can get a reference to a page is get_page_unless_zero().
 
 All tail pages have zero ->_refcount until atomic_add(). This prevents the
 scanner from getting a reference to the tail page up to that point. After the
-atomic_add() we don't care about the ->_refcount value. We already known how
+atomic_add() we don't care about the ->_refcount value. We already know how
 many references should be uncharged from the head page.
 
 For head page get_page_unless_zero() will succeed and we don't mind. It's
-clear where reference should go after split: it will stay on head page.
+clear where references should go after split: it will stay on the head page.
 
-Note that split_huge_pmd() doesn't have any limitation on refcounting:
+Note that split_huge_pmd() doesn't have any limitations on refcounting:
 pmd can be split at any point and never fails.
 
 Partial unmap and deferred_split_huge_page()
@@ -188,10 +183,10 @@
 comes. Splitting will free up unused subpages.
 
 Splitting the page right away is not an option due to locking context in
-the place where we can detect partial unmap. It's also might be
+the place where we can detect partial unmap. It also might be
 counterproductive since in many cases partial unmap happens during exit(2) if
 a THP crosses a VMA boundary.
 
-Function deferred_split_huge_page() is used to queue page for splitting.
+The function deferred_split_huge_page() is used to queue a page for splitting.
 The splitting itself will happen when we get memory pressure via shrinker
 interface.
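
The two-step pmd_trans_huge() check described in the pagetable-walk hunk above can be sketched as kernel-style pseudocode. This is illustrative only and not buildable outside a kernel tree; my_walk(), handle_huge_pmd() and handle_pte_range() are hypothetical names standing in for a caller's own walk and handlers:

```c
/* Sketch of the locking pattern from the text above (hypothetical names).
 * Caller must hold mmap_lock in read (or write) mode, so khugepaged
 * cannot collapse a huge pmd from under us. */
static void my_walk(struct mm_struct *mm, pmd_t *pmd, unsigned long addr)
{
	spinlock_t *ptl;

	if (pmd_trans_huge(*pmd)) {
		/* Re-check under the page table lock: split_huge_pmd()
		 * may have turned it into a regular pmd meanwhile. */
		ptl = pmd_lock(mm, pmd);
		if (pmd_trans_huge(*pmd)) {
			handle_huge_pmd(pmd, addr);	/* hypothetical */
			spin_unlock(ptl);		/* once finished */
			return;
		}
		spin_unlock(ptl);
	}
	/* First or second check failed: fall back to the old,
	 * pte-based code path. */
	handle_pte_range(mm, pmd, addr);		/* hypothetical */
}
```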