If you have a data structure which is only ever accessed from user
context, then you can use a simple mutex (``include/linux/mutex.h``) to
protect it. This is the most trivial case: you initialize the mutex.
Then you can call mutex_lock_interruptible() to grab the mutex, and
mutex_unlock() to release it. There is also a mutex_lock(), which
should be avoided, because it will not return if a signal is received.

Example: ``net/netfilter/nf_sockopt.c`` allows registration of new
setsockopt() and getsockopt() calls, with nf_register_sockopt().
Registration and de-registration are only done on module load and
unload (and boot time, where there is no concurrency), and the list of
registrations is only consulted for an unknown setsockopt() or
getsockopt() system call. The ``nf_sockopt_mutex`` is perfect to
protect this, especially since the setsockopt and getsockopt calls may
well sleep.
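
A minimal sketch of this pattern (the names below are illustrative,
not taken from ``nf_sockopt.c``)::

    static DEFINE_MUTEX(reg_mutex);
    static LIST_HEAD(reg_list);

    int register_entry(struct list_head *new)
    {
            int ret;

            /* May sleep; returns -EINTR if interrupted by a signal. */
            ret = mutex_lock_interruptible(&reg_mutex);
            if (ret)
                    return ret;

            list_add(new, &reg_list);
            mutex_unlock(&reg_mutex);
            return 0;
    }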

[...]

If a softirq shares data with user context, you have two problems.
Firstly, the current user context can be interrupted by a softirq, and
secondly, the critical region could be entered from another CPU. This
is where spin_lock_bh() (``include/linux/spinlock.h``) is used. It
disables softirqs on that CPU, then grabs the lock. spin_unlock_bh()
does the reverse. (The '_bh' suffix is a historical reference to
"Bottom Halves", the old name for software interrupts. It should
really be called spin_lock_softirq() in a perfect world).

Note that you can also use spin_lock_irq() or spin_lock_irqsave()
here, which stop hardware interrupts as well: see
`Hard IRQ Context <#hard-irq-context>`__.

This works perfectly for UP as well: the spin lock vanishes, and this
macro simply becomes local_bh_disable()
(``include/linux/interrupt.h``), which protects you from the softirq
being run.
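
A hedged sketch of the user-context side of this, with illustrative
names; the softirq handler would take the same lock with plain
spin_lock()/spin_unlock()::

    static DEFINE_SPINLOCK(counter_lock);
    static unsigned long counter;

    void bump_counter(void)          /* called from user context */
    {
            /* Disables softirqs on this CPU, then takes the lock. */
            spin_lock_bh(&counter_lock);
            counter++;
            /* Releases the lock, then re-enables softirqs. */
            spin_unlock_bh(&counter_lock);
    }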

[...]

Different Tasklets/Timers
~~~~~~~~~~~~~~~~~~~~~~~~~

If another tasklet/timer wants to share data with your tasklet or
timer, you will both need to use spin_lock() and spin_unlock() calls.
spin_lock_bh() is unnecessary here, as you are already in a tasklet,
and none will be run on the same CPU.

[...]

going so far as to use a softirq, you probably care about scalable
performance enough to justify the extra complexity.

You'll need to use spin_lock() and spin_unlock() for shared data.

Different Softirqs
~~~~~~~~~~~~~~~~~~

You'll need to use spin_lock() and spin_unlock() for shared data,
whether it be a timer, tasklet, different softirq or the same or
another softirq: any of them could be running on a different CPU.
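
A hedged sketch with illustrative names: two handlers, each possibly
running on a different CPU, take the same plain spinlock; neither
needs the _bh variant, since both already run in softirq context::

    static DEFINE_SPINLOCK(shared_lock);
    static unsigned long shared_events;

    static void first_tasklet_fn(unsigned long data)
    {
            spin_lock(&shared_lock);
            shared_events++;
            spin_unlock(&shared_lock);
    }

    static void second_tasklet_fn(unsigned long data)
    {
            spin_lock(&shared_lock);
            shared_events--;
            spin_unlock(&shared_lock);
    }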

[...]

concerns. Firstly, the softirq processing can be interrupted by a
hardware interrupt, and secondly, the critical region could be entered
by a hardware interrupt on another CPU. This is where spin_lock_irq()
is used. It is defined to disable interrupts on that CPU, then grab
the lock. spin_unlock_irq() does the reverse.

The irq handler does not need to use spin_lock_irq(), because the
softirq cannot run while the irq handler is running: it can use
spin_lock(), which is slightly faster. The only exception would be if
a different hardware irq handler uses the same lock: spin_lock_irq()
will stop that from interrupting us.

This works perfectly for UP as well: the spin lock vanishes, and this
macro simply becomes local_irq_disable()
(``include/asm/smp.h``), which protects you from the softirq/tasklet/BH
being run.

spin_lock_irqsave() (``include/linux/spinlock.h``) is a variant which
saves whether interrupts were on or off in a flags word, which is
passed to spin_unlock_irqrestore(). This means that the same code can
be used inside a hard irq handler (where interrupts are already off)
and in softirqs (where the irq disabling is required).

Note that softirqs (and hence tasklets and timers) are run on return
from hardware interrupts, so spin_lock_irq() also stops these. In that
sense, spin_lock_irqsave() is the most general and powerful locking
function.
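
A hedged sketch of that flags idiom, with illustrative names; the same
helper is then correct whether the caller has interrupts enabled or
already disabled::

    static DEFINE_SPINLOCK(state_lock);
    static int device_state;

    void set_device_state(int state)
    {
            unsigned long flags;

            /* Saves the current irq state, disables irqs, locks. */
            spin_lock_irqsave(&state_lock, flags);
            device_state = state;
            /* Unlocks, then restores the saved irq state. */
            spin_unlock_irqrestore(&state_lock, flags);
    }
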
Locking Between Two Hard IRQ Handlers
-------------------------------------

It is rare to have to share data between two IRQ handlers, but if you
do, spin_lock_irqsave() should be used: it is architecture-specific
whether all interrupts are disabled inside irq handlers themselves.

[...]

(``copy_from_user()`` or ``kmalloc(x,GFP_KERNEL)``).

- Otherwise (== data can be touched in an interrupt), use
  spin_lock_irqsave() and spin_unlock_irqrestore().

- Avoid holding spinlock for more than 5 lines of code and across any
  function call (except accessors like readb()).

Table of Minimum Requirements
-----------------------------

[...]

shares data with another thread, locking is required).

Remember the advice above: you can always use spin_lock_irqsave(),
which is a superset of all other spinlock primitives.

============== ============= ============= ========= ========= ========= ========= ======= ======= ============== ==============

[...]

lock when some other thread is holding the lock. You should acquire
the lock later if you then need access to the data protected with the
lock.

spin_trylock() does not spin but returns non-zero if it acquires the
spinlock on the first try or 0 if not. This function can be used in
all contexts like spin_lock(): you must have disabled the contexts
that might interrupt you and acquire the spin lock.

mutex_trylock() does not suspend your task but returns non-zero if it
could lock the mutex on the first try or 0 if not. This function
cannot be safely used in hardware or software interrupt contexts
despite not sleeping.
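
A hedged sketch of spin_trylock() for an optional update, with
illustrative names::

    static DEFINE_SPINLOCK(stats_lock);
    static unsigned long stats_hits;

    static void try_bump_stats(void)
    {
            /* Non-zero return means we got the lock on the first
             * try; otherwise skip the update rather than spin. */
            if (spin_trylock(&stats_lock)) {
                    stats_hits++;
                    spin_unlock(&stats_lock);
            }
    }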

[...]

            if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL)
                    return -ENOMEM;

            strscpy(obj->name, name, sizeof(obj->name));
            obj->id = id;
            obj->popularity = 0;

[...]

objects directly.

There is a slight (and common) optimization here: in cache_add() we
set up the fields of the object before grabbing the lock. This is
safe, as no-one else can access it until we put it in cache.

Accessing From Interrupt Context
--------------------------------

Now consider the case where cache_find() can be called from interrupt
context: either a hardware interrupt or a softirq. An example would be
a timer which deletes objects from the cache.

[...]

            return ret;
    }

Note that the spin_lock_irqsave() will turn off interrupts if they are
on, and otherwise does nothing (if we are already in an interrupt
handler); hence these functions are safe to call from any context.

Unfortunately, cache_add() calls kmalloc() with the ``GFP_KERNEL``
flag, which is only legal in user context. I have assumed that
cache_add() is still only called in user context, otherwise this
should become a parameter to cache_add().
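
One hedged way to do that, as a sketch (this signature is mine, not
the document's: a gfp_t parameter lets interrupt-context callers pass
``GFP_ATOMIC``)::

    int cache_add(int id, const char *name, gfp_t gfp)
    {
            struct object *obj;
            unsigned long flags;

            obj = kmalloc(sizeof(*obj), gfp);
            if (!obj)
                    return -ENOMEM;

            strscpy(obj->name, name, sizeof(obj->name));
            obj->id = id;
            obj->popularity = 0;

            spin_lock_irqsave(&cache_lock, flags);
            __cache_add(obj);
            spin_unlock_irqrestore(&cache_lock, flags);
            return 0;
    }
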
Exposing Objects Outside This File
----------------------------------

[...]

The second problem is the lifetime problem: if another structure keeps
a pointer to an object, it presumably expects that pointer to remain
valid. Unfortunately, this is only guaranteed while you hold the lock,
otherwise someone might call cache_delete() and even worse, add
another object, re-using the same address.

As there is only one lock, you can't hold it forever: no-one else would

[...]

     }

    @@ -63,6 +94,7 @@
             strscpy(obj->name, name, sizeof(obj->name));
             obj->id = id;
             obj->popularity = 0;
    +        obj->refcnt = 1; /* The cache holds a reference */

[...]

We encapsulate the reference counting in the standard 'get' and 'put'
functions. Now we can return the object itself from cache_find(),
which has the advantage that the user can now sleep holding the object
(e.g. to copy_to_user() the name to userspace).

The other point to note is that I said a reference should be held for

[...]

are guaranteed to be seen atomically from all CPUs in the system, so
no lock is required. In this case, it is simpler than using spinlocks,
although for anything non-trivial using spinlocks is clearer. The
atomic_inc() and atomic_dec_and_test() are used instead of the
standard increment and decrement operators, and the lock is no longer
used to protect the reference count itself.
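
A hedged sketch of those helpers, assuming ``refcnt`` has become an
``atomic_t`` member of ``struct object``::

    static void object_get(struct object *obj)
    {
            atomic_inc(&obj->refcnt);
    }

    static void object_put(struct object *obj)
    {
            /* True only for whoever drops the count to zero; that
             * caller is the one who frees the object. */
            if (atomic_dec_and_test(&obj->refcnt))
                    kfree(obj);
    }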

[...]

     }

    @@ -94,7 +76,7 @@
             strscpy(obj->name, name, sizeof(obj->name));
             obj->id = id;
             obj->popularity = 0;
    -        obj->refcnt = 1; /* The cache holds a reference */

[...]

- You can make ``cache_lock`` non-static, and tell people to grab that
  lock before changing the name in any object.

- You can provide a cache_obj_rename() which grabs this lock and
  changes the name for the caller, and tell everyone to use that
  function (sketched below).
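
A hedged sketch of that second option (the body here is mine; the
document only names the helper)::

    void cache_obj_rename(struct object *obj, const char *name)
    {
            unsigned long flags;

            spin_lock_irqsave(&cache_lock, flags);
            strscpy(obj->name, name, sizeof(obj->name));
            spin_unlock_irqrestore(&cache_lock, flags);
    }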

[...]

``cache_lock`` rather than the per-object lock: this is because it
(like the :c:type:`struct list_head <list_head>` inside the object) is
logically part of the infrastructure. This way, I don't need to grab
the lock of every object in __cache_add() when seeking the least
popular.

I also decided that the id member is unchangeable, so I don't need to
grab each object lock in __cache_find() to examine the id: the object
lock is only used by a caller who wants to read or write the name
field.

[...]

stay-up-five-nights-talk-to-fluffy-code-bunnies kind of problem.

For a slightly more complex case, imagine you have a region shared by
a softirq and user context. If you use a spin_lock() call to protect
it, it is possible that the user context will be interrupted by the
softirq while it holds the lock, and the softirq will then spin
forever trying to get the same lock.

[...]

Sooner or later, this will crash on SMP, because a timer can have just
gone off before the spin_lock_bh(), and it will only get the lock
after we spin_unlock_bh(), and then try to free the element (which has
already been freed!).

This can be avoided by checking the result of del_timer(): if it
returns 1, the timer has been deleted. If 0, it means (in this case)
that it is currently running, so we can do::

[...]

Another common problem is deleting timers which restart themselves (by
calling add_timer() at the end of their timer function). Because this
is a fairly common case which is prone to races, you should use
del_timer_sync() (``include/linux/timer.h``) to handle this case. It
returns the number of times the timer had to be deleted before we
finally stopped it from adding itself back in.
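
A hedged sketch of such a teardown, with an illustrative ``struct
foo`` that carries its own timer::

    static void stop_foo(struct foo *f)
    {
            /* Waits for a running handler and stops it re-arming
             * itself, unlike a bare del_timer(). */
            del_timer_sync(&f->timer);
            kfree(f);
    }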

[...]

        list->next = new;

The wmb() is a write memory barrier. It ensures that the first
operation (setting the new element's ``next`` pointer) is complete and
will be seen by all CPUs before the second operation (putting the new
element into the list) begins. This is important, since modern

[...]

Fortunately, there is a function to do this for standard
:c:type:`struct list_head <list_head>` lists: list_add_rcu()
(``include/linux/list.h``).

Removing an element from the list is even simpler: we replace the
pointer to the old element with a pointer to its successor, and readers

[...]

        list->next = old->next;

There is list_del_rcu() (``include/linux/list.h``) which does this
(the normal version poisons the old object, which we don't want).

[...]

pointer to start reading the contents of the next element early, but
don't realize that the pre-fetched contents is wrong when the ``next``
pointer changes underneath them. Once again, there is a
list_for_each_entry_rcu() (``include/linux/list.h``) to help you. Of
course, writers can just use list_for_each_entry(), since there cannot
be two simultaneous writers.

Our final dilemma is this: when can we actually destroy the removed

[...]

changes, the reader will jump off into garbage and crash. We need to
wait until we know that all the readers who were traversing the list
when we deleted the element are finished. We use call_rcu() to
register a callback which will actually destroy the object once all
pre-existing readers are finished. Alternatively, synchronize_rcu()
may be used to block until all pre-existing readers are finished.

But how does Read Copy Update know when the readers are finished? The
method is this: firstly, the readers always traverse the list inside
rcu_read_lock()/rcu_read_unlock() pairs: these simply disable
preemption so the reader won't go to sleep while reading the list.
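
A hedged sketch of that read side, reusing the cache example's names
(this is illustrative, not the document's exact code)::

    struct object *cache_find_rcu(int id)
    {
            struct object *obj;

            rcu_read_lock();
            list_for_each_entry_rcu(obj, &cache, list) {
                    if (obj->id == id) {
                            /* Pin the object before leaving the
                             * read-side critical section. */
                            object_get(obj);
                            rcu_read_unlock();
                            return obj;
                    }
            }
            rcu_read_unlock();
            return NULL;
    }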

[...]

    }

Note that the reader will alter the popularity member in
__cache_find(), and now it doesn't hold a lock. One solution would be
to make it an ``atomic_t``, but for this usage, we don't really care
about races: an approximate result is good enough, so I didn't change
it.

The result is that cache_find() requires no synchronization with any
other functions, so is almost as fast on SMP as it would be on UP.

[...]

Now, because the 'read lock' in RCU is simply disabling preemption, a
caller which always has preemption disabled between calling
cache_find() and object_put() does not need to actually get and put
the reference count: we could expose __cache_find() by making it
non-static, and such callers could simply call that.

The benefit here is that the reference count is not written to: the

[...]

If that was too slow (it's usually not, but if you've got a really big
machine to test on and can show that it is), you could instead use a
counter for each CPU, then none of them need an exclusive lock. See
DEFINE_PER_CPU(), get_cpu_var() and put_cpu_var()
(``include/linux/percpu.h``).

Of particular use for simple per-cpu counters is the ``local_t`` type,
and the cpu_local_inc() and related functions, which are more
efficient than simple code on some architectures
(``include/asm/local.h``).
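
A hedged sketch of a per-CPU counter built from those primitives, with
illustrative names::

    static DEFINE_PER_CPU(unsigned long, hit_count);

    static void count_hit(void)
    {
            /* get_cpu_var() disables preemption, so we can't migrate
             * to another CPU while touching this CPU's counter. */
            get_cpu_var(hit_count)++;
            put_cpu_var(hit_count);
    }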

[...]

        enable_irq(irq);
        spin_unlock(&lock);

The disable_irq() prevents the irq handler from running (and waits for
it to finish if it's currently running on other CPUs). The spinlock
prevents any other accesses happening at the same time. Naturally,
this is slower than just a spin_lock_irq() call, so it only makes
sense if this type of access happens extremely rarely.

[...]

- Accesses to userspace:

  - copy_from_user()

  - copy_to_user()

  - get_user()

  - put_user()

- kmalloc(GFP_KERNEL)

- mutex_lock_interruptible() and mutex_lock()

  There is a mutex_trylock() which does not sleep. Still, it must not
  be used inside interrupt context since its implementation is not
  safe for that. mutex_unlock() will also never sleep. It cannot be
  used in interrupt context either since a mutex must be released by
  the same task that acquired it.

[...]

Some functions are safe to call from any context, or holding almost any
lock.

- printk()

- kfree()

- add_timer() and del_timer()

Mutex API reference
===================

[...]

Further reading
===============

- ``Documentation/locking/spinlocks.rst``: Linus Torvalds' spinlocking
  tutorial in the kernel sources.

- Unix Systems for Modern Architectures: Symmetric Multiprocessing and

[...]

bh
  Bottom Half: for historical reasons, functions with '_bh' in them
  often now refer to any software interrupt, e.g. spin_lock_bh()
  blocks any software interrupt on the current CPU. Bottom halves are
  deprecated, and will eventually be replaced by tasklets. Only one
  bottom half will be running at any time.

Hardware Interrupt / Hardware IRQ
  Hardware interrupt request. in_irq() returns true in a hardware
  interrupt handler.

Interrupt Context
  Not user context: processing a hardware irq or software irq.
  Indicated by the in_interrupt() macro returning true.

SMP
  Symmetric Multi-Processor: kernels compiled for multiple-CPU
  machines. (``CONFIG_SMP=y``).

Software Interrupt / softirq
  Software interrupt handler. in_irq() returns false; in_softirq()
  returns true. Tasklets and softirqs both fall into the category of
  'software interrupts'.

  Strictly speaking a softirq is one of up to 32 enumerated software
---|