@@ -4,8 +4,9 @@
 Transparent Hugepage Support
 ============================
 
-This document describes design principles Transparent Hugepage (THP)
-Support and its interaction with other parts of the memory management.
+This document describes design principles for Transparent Hugepage (THP)
+support and its interaction with other parts of the memory management
+system.
 
 Design principles
 =================
@@ -37,31 +38,25 @@
 
-get_user_pages and follow_page if run on a hugepage, will return the
+get_user_pages and follow_page, if run on a hugepage, will return the
 head or tail pages as usual (exactly as they would do on
-hugetlbfs). Most gup users will only care about the actual physical
+hugetlbfs). Most GUP users will only care about the actual physical
 address of the page and its temporary pinning to release after the I/O
 is complete, so they won't ever notice the fact the page is huge. But
 if any driver is going to mangle over the page structure of the tail
 page (like for checking page->mapping or other bits that are relevant
 for the head page and not the tail page), it should be updated to jump
-to check head page instead. Taking reference on any head/tail page would
-prevent page from being split by anyone.
+to check head page instead. Taking a reference on any head/tail page would
+prevent the page from being split by anyone.
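
A small illustration of the rule just stated: a GUP user that wants to
inspect page metadata must look at the head page, since fields like
->mapping are only meaningful there. A minimal sketch, not part of the
patch; the helper name is hypothetical::

    /* After GUP, "page" may be a THP tail page. Fields like ->mapping
     * are only meaningful in the head page, so any driver-side check
     * must look there instead of at the tail.
     */
    static bool page_has_mapping(struct page *page)
    {
            struct page *head = compound_head(page);

            return head->mapping != NULL;
    }
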
 
 .. note::
    these aren't new constraints to the GUP API, and they match the
-   same constrains that applies to hugetlbfs too, so any driver capable
+   same constraints that apply to hugetlbfs too, so any driver capable
    of handling GUP on hugetlbfs will also work fine on transparent
    hugepage backed mappings.
 
 In case you can't handle compound pages if they're returned by
-follow_page, the FOLL_SPLIT bit can be specified as parameter to
+follow_page, the FOLL_SPLIT bit can be specified as a parameter to
 follow_page, so that it will split the hugepages before returning
-them. Migration for example passes FOLL_SPLIT as parameter to
-follow_page because it's not hugepage aware and in fact it can't work
-at all on hugetlbfs (but it instead works fine on transparent
-hugepages thanks to FOLL_SPLIT). migration simply can't deal with
-hugepages being returned (as it's not only checking the pfn of the
-page and pinning it during the copy but it pretends to migrate the
-memory in regular page sizes and with regular pte/pmd mappings).
+them.
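
For a caller taking the FOLL_SPLIT route, the usage is a single flag; a
minimal sketch, assuming the caller holds mmap_lock (the wrapper below is
hypothetical)::

    /* Ask follow_page() to split any THP before returning it, so the
     * caller only ever sees base pages. FOLL_GET also takes a reference
     * that the caller must drop with put_page().
     */
    static struct page *get_base_page(struct vm_area_struct *vma,
                                      unsigned long addr)
    {
            return follow_page(vma, addr, FOLL_GET | FOLL_SPLIT);
    }
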
 
 Graceful fallback
 =================
@@ -72,11 +67,11 @@
 by just grepping for "pmd_offset" and adding split_huge_pmd where
 missing after pmd_offset returns the pmd. Thanks to the graceful
-fallback design, with a one liner change, you can avoid to write
-hundred if not thousand of lines of complex code to make your code
+fallback design, with a one-liner change, you can avoid writing
+hundreds if not thousands of lines of complex code to make your code
 hugepage aware.
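
Concretely, the one-liner fallback looks like this; a minimal sketch of a
pte-only walker (the function and its body are hypothetical)::

    /* Make an existing pte-only walk safe against huge pmds by
     * splitting first; split_huge_pmd() is a no-op on a regular pmd.
     */
    static void walk_one_addr(struct vm_area_struct *vma, pmd_t *pmd,
                              unsigned long addr)
    {
            pte_t *pte;

            split_huge_pmd(vma, pmd, addr);

            pte = pte_offset_map(pmd, addr);
            /* ... unchanged pte-only logic ... */
            pte_unmap(pte);
    }
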
 
 If you're not walking pagetables but you run into a physical hugepage
-but you can't handle it natively in your code, you can split it by
+that you can't handle natively in your code, you can split it by
 calling split_huge_page(page). This is what the Linux VM does before
-it tries to swapout the hugepage for example. split_huge_page() can fail
+it tries to swap out the hugepage, for example. split_huge_page() can fail
 if the page is pinned and you must handle this correctly.
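
A minimal sketch of that call and its failure handling (the wrapper is
hypothetical; split_huge_page() expects the page to be locked and the
caller to hold a reference)::

    /* Try to turn a THP into base pages; fails if it carries extra pins. */
    static int make_page_small(struct page *page)
    {
            int ret;

            lock_page(page);
            ret = split_huge_page(page);    /* 0 on success */
            unlock_page(page);

            return ret;     /* non-zero: still huge, caller must cope */
    }
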
@@ -103,18 +98,18 @@
 
 To make pagetable walks huge pmd aware, all you need to do is to call
 pmd_trans_huge() on the pmd returned by pmd_offset. You must hold the
-mmap_sem in read (or write) mode to be sure an huge pmd cannot be
+mmap_lock in read (or write) mode to be sure a huge pmd cannot be
 created from under you by khugepaged (khugepaged collapse_huge_page
-takes the mmap_sem in write mode in addition to the anon_vma lock). If
-pmd_trans_huge returns false, you just fallback in the old code
+takes the mmap_lock in write mode in addition to the anon_vma lock). If
+pmd_trans_huge returns false, you just fall back to the old code
 paths. If instead pmd_trans_huge returns true, you have to take the
 page table lock (pmd_lock()) and re-run pmd_trans_huge. Taking the
-page table lock will prevent the huge pmd to be converted into a
+page table lock will prevent the huge pmd being converted into a
 regular pmd from under you (split_huge_pmd can run in parallel to the
 pagetable walk). If the second pmd_trans_huge returns false, you
-should just drop the page table lock and fallback to the old code as
-before. Otherwise you can proceed to process the huge pmd and the
-hugepage natively. Once finished you can drop the page table lock.
+should just drop the page table lock and fall back to the old code as
+before. Otherwise, you can proceed to process the huge pmd and the
+hugepage natively. Once finished, you can drop the page table lock.
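
That recipe reads as follows in code; a minimal sketch, assuming mmap_lock
is held in read mode (the walker itself is hypothetical)::

    static void walk_pmd(struct mm_struct *mm, pmd_t *pmd,
                         unsigned long addr)
    {
            spinlock_t *ptl;

            if (pmd_trans_huge(*pmd)) {
                    ptl = pmd_lock(mm, pmd);
                    if (pmd_trans_huge(*pmd)) {
                            /* stable huge pmd: process the hugepage
                             * natively, then drop the page table lock */
                            spin_unlock(ptl);
                            return;
                    }
                    /* split from under us: use the old code paths */
                    spin_unlock(ptl);
            }
            /* ... old pte-based code paths ... */
    }
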
 
 Refcounts and transparent huge pages
 ====================================
@@ -122,61 +117,61 @@
 Refcounting on THP is mostly consistent with refcounting on other compound
 pages:
 
- - get_page()/put_page() and GUP operate in head page's ->_refcount.
+ - get_page()/put_page() and GUP operate on head page's ->_refcount.
 
  - ->_refcount in tail pages is always zero: get_page_unless_zero() never
-   succeed on tail pages.
+   succeeds on tail pages.
 
  - map/unmap of the pages with PTE entry increment/decrement ->_mapcount
    on relevant sub-page of the compound page.
 
- - map/unmap of the whole compound page accounted in compound_mapcount
+ - map/unmap of the whole compound page is accounted for in compound_mapcount
    (stored in first tail page). For file huge pages, we also increment
    ->_mapcount of all sub-pages in order to have race-free detection of
    last unmap of subpages.
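
A short illustration of the first two rules (illustrative only; the helper
is hypothetical)::

    /* Pinning via any subpage operates on the head page's ->_refcount;
     * tail pages keep ->_refcount at zero.
     */
    static void pin_and_release(struct page *subpage)
    {
            struct page *head = compound_head(subpage);

            get_page(subpage);              /* elevates head->_refcount */
            WARN_ON(page_ref_count(head) < 1);
            put_page(subpage);              /* drops head->_refcount */
    }
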
 
 PageDoubleMap() indicates that the page is *possibly* mapped with PTEs.
 
-For anonymous pages PageDoubleMap() also indicates ->_mapcount in all
+For anonymous pages, PageDoubleMap() also indicates ->_mapcount in all
 subpages is offset up by one. This additional reference is required to
 get race-free detection of unmap of subpages when we have them mapped with
 both PMDs and PTEs.
 
-This is optimization required to lower overhead of per-subpage mapcount
-tracking. The alternative is alter ->_mapcount in all subpages on each
+This optimization is required to lower the overhead of per-subpage mapcount
+tracking. The alternative is to alter ->_mapcount in all subpages on each
 map/unmap of the whole compound page.
 
-For anonymous pages, we set PG_double_map when a PMD of the page got split
-for the first time, but still have PMD mapping. The additional references
-go away with last compound_mapcount.
+For anonymous pages, we set PG_double_map when a PMD of the page is split
+for the first time, but still have a PMD mapping. The additional references
+go away with the last compound_mapcount.
 
-File pages get PG_double_map set on first map of the page with PTE and
-goes away when the page gets evicted from page cache.
+File pages get PG_double_map set on the first map of the page with PTE;
+it goes away when the page gets evicted from the page cache.
 
 split_huge_page internally has to distribute the refcounts in the head
 page to the tail pages before clearing all PG_head/tail bits from the page
 structures. It can be done easily for refcounts taken by page table
-entries. But we don't have enough information on how to distribute any
+entries, but we don't have enough information on how to distribute any
 additional pins (i.e. from get_user_pages). split_huge_page() fails any
-requests to split pinned huge page: it expects page count to be equal to
-sum of mapcount of all sub-pages plus one (split_huge_page caller must
-have reference for head page).
+requests to split pinned huge pages: it expects page count to be equal to
+the sum of mapcount of all sub-pages plus one (split_huge_page caller must
+have a reference to the head page).
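
Stated as arithmetic, the expectation is (a simplified reading of the
in-kernel check)::

    /*
     * Simplified: the split may proceed only if every reference is
     * explained by page table mappings plus the caller's own pin, i.e.
     *
     *      page_count(head) == total_mapcount(head) + 1
     *
     * The in-kernel test (can_split_huge_page()) also allows for a few
     * known extra pins, e.g. the swap cache's.
     */
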
 
 split_huge_page uses migration entries to stabilize page->_refcount and
-page->_mapcount of anonymous pages. File pages just got unmapped.
+page->_mapcount of anonymous pages. File pages just get unmapped.
 
-We safe against physical memory scanners too: the only legitimate way
-scanner can get reference to a page is get_page_unless_zero().
+We are safe against physical memory scanners too: the only legitimate way
+a scanner can get a reference to a page is get_page_unless_zero().
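
That pattern is worth spelling out; a minimal sketch of a scanner that
found a struct page by pfn (the function is hypothetical)::

    static bool try_inspect_page(struct page *page)
    {
            /* Fails on e.g. free pages and THP tail pages. */
            if (!get_page_unless_zero(page))
                    return false;

            /* the page is pinned here and can be inspected safely */

            put_page(page);
            return true;
    }
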
 
 All tail pages have zero ->_refcount until atomic_add(). This prevents the
 scanner from getting a reference to the tail page up to that point. After the
-atomic_add() we don't care about the ->_refcount value. We already known how
+atomic_add() we don't care about the ->_refcount value. We already know how
 many references should be uncharged from the head page.
 
-For head page get_page_unless_zero() will succeed and we don't mind. It's
-clear where reference should go after split: it will stay on head page.
+For the head page, get_page_unless_zero() will succeed and we don't mind. It's
+clear where the reference should go after the split: it stays on the head page.
 
-Note that split_huge_pmd() doesn't have any limitation on refcounting:
+Note that split_huge_pmd() doesn't have any limitations on refcounting:
 pmd can be split at any point and never fails.
 
 Partial unmap and deferred_split_huge_page()
@@ -188,10 +183,10 @@
 comes. Splitting will free up unused subpages.
 
 Splitting the page right away is not an option due to locking context in
-the place where we can detect partial unmap. It's also might be
+the place where we can detect partial unmap. It also might be
 counterproductive since in many cases partial unmap happens during exit(2) if
 a THP crosses a VMA boundary.
 
-Function deferred_split_huge_page() is used to queue page for splitting.
-The splitting itself will happen when we get memory pressure via shrinker
+The function deferred_split_huge_page() is used to queue a page for splitting.
+The splitting itself will happen when we get memory pressure via the shrinker
 interface.
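
The queueing side is a single call; a hedged sketch of a partial-unmap path
(simplified; the real call site lives in the rmap code)::

    /* A partially unmapped THP is queued rather than split on the spot;
     * the shrinker calls back into split_huge_page() later, under
     * memory pressure.
     */
    if (PageTransCompound(page))
            deferred_split_huge_page(compound_head(page));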