2023-12-11 1f93a7dfd1f8d5ff7a5c53246c7534fe2332d6f4
kernel/Documentation/kernel-hacking/locking.rst
....@@ -150,17 +150,17 @@
150150 If you have a data structure which is only ever accessed from user
151151 context, then you can use a simple mutex (``include/linux/mutex.h``) to
152152 protect it. This is the most trivial case: you initialize the mutex.
153
-Then you can call :c:func:`mutex_lock_interruptible()` to grab the
154
-mutex, and :c:func:`mutex_unlock()` to release it. There is also a
155
-:c:func:`mutex_lock()`, which should be avoided, because it will
153
+Then you can call mutex_lock_interruptible() to grab the
154
+mutex, and mutex_unlock() to release it. There is also a
155
+mutex_lock(), which should be avoided, because it will
156156 not return if a signal is received.
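
A minimal sketch of the pattern (the ``my_entry`` structure, ``my_list``
and ``my_add()`` are hypothetical, assuming only the standard mutex and
list APIs)::

    #include <linux/mutex.h>
    #include <linux/list.h>

    struct my_entry {
            struct list_head list;
            int data;
    };

    static DEFINE_MUTEX(my_mutex);      /* protects my_list */
    static LIST_HEAD(my_list);

    /* User context only: may sleep waiting for the mutex. */
    static int my_add(struct my_entry *entry)
    {
            if (mutex_lock_interruptible(&my_mutex))
                    return -EINTR;      /* a signal arrived first */
            list_add(&entry->list, &my_list);
            mutex_unlock(&my_mutex);
            return 0;
    }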
157157
158158 Example: ``net/netfilter/nf_sockopt.c`` allows registration of new
159
-:c:func:`setsockopt()` and :c:func:`getsockopt()` calls, with
160
-:c:func:`nf_register_sockopt()`. Registration and de-registration
159
+setsockopt() and getsockopt() calls, with
160
+nf_register_sockopt(). Registration and de-registration
161161 are only done on module load and unload (and boot time, where there is
162162 no concurrency), and the list of registrations is only consulted for an
163
-unknown :c:func:`setsockopt()` or :c:func:`getsockopt()` system
163
+unknown setsockopt() or getsockopt() system
164164 call. The ``nf_sockopt_mutex`` is perfect to protect this, especially
165165 since the setsockopt and getsockopt calls may well sleep.
166166
....@@ -170,19 +170,19 @@
170170 If a softirq shares data with user context, you have two problems.
171171 Firstly, the current user context can be interrupted by a softirq, and
172172 secondly, the critical region could be entered from another CPU. This is
173
-where :c:func:`spin_lock_bh()` (``include/linux/spinlock.h``) is
173
+where spin_lock_bh() (``include/linux/spinlock.h``) is
174174 used. It disables softirqs on that CPU, then grabs the lock.
175
-:c:func:`spin_unlock_bh()` does the reverse. (The '_bh' suffix is
175
+spin_unlock_bh() does the reverse. (The '_bh' suffix is
176176 a historical reference to "Bottom Halves", the old name for software
177177 interrupts. It should really be called 'spin_lock_softirq()' in a
178178 perfect world).
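
A sketch of the pattern (the shared ``counter`` and its lock are
hypothetical)::

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(counter_lock);       /* protects counter */
    static unsigned long counter;

    /* User-context side: block softirqs on this CPU, then lock. */
    static void counter_bump(void)
    {
            spin_lock_bh(&counter_lock);
            counter++;
            spin_unlock_bh(&counter_lock);
    }

The softirq side of the same data takes plain spin_lock()/spin_unlock(),
since a softirq cannot be interrupted by another softirq on the same CPU.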
179179
180
-Note that you can also use :c:func:`spin_lock_irq()` or
181
-:c:func:`spin_lock_irqsave()` here, which stop hardware interrupts
180
+Note that you can also use spin_lock_irq() or
181
+spin_lock_irqsave() here, which stop hardware interrupts
182182 as well: see `Hard IRQ Context <#hard-irq-context>`__.
183183
184184 This works perfectly for UP as well: the spin lock vanishes, and this
185
-macro simply becomes :c:func:`local_bh_disable()`
185
+macro simply becomes local_bh_disable()
186186 (``include/linux/interrupt.h``), which protects you from the softirq
187187 being run.
188188
....@@ -216,8 +216,8 @@
216216 ~~~~~~~~~~~~~~~~~~~~~~~~~
217217
218218 If another tasklet/timer wants to share data with your tasklet or timer,
219
-, you will both need to use :c:func:`spin_lock()` and
220
-:c:func:`spin_unlock()` calls. :c:func:`spin_lock_bh()` is
219
+you will both need to use spin_lock() and
220
+spin_unlock() calls. spin_lock_bh() is
221221 unnecessary here, as you are already in a tasklet, and none will be run
222222 on the same CPU.
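
A sketch, assuming the modern tasklet and timer callback signatures (the
shared ``stats`` counter and both handlers are hypothetical)::

    #include <linux/interrupt.h>
    #include <linux/timer.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(stats_lock);         /* protects stats */
    static unsigned long stats;

    /* Both callbacks run in softirq context, so plain spin_lock()
     * suffices: neither can be preempted by the other on this CPU. */
    static void my_tasklet_fn(struct tasklet_struct *t)
    {
            spin_lock(&stats_lock);
            stats++;
            spin_unlock(&stats_lock);
    }

    static void my_timer_fn(struct timer_list *t)
    {
            spin_lock(&stats_lock);
            stats--;
            spin_unlock(&stats_lock);
    }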
223223
....@@ -234,14 +234,14 @@
234234 going so far as to use a softirq, you probably care about scalable
235235 performance enough to justify the extra complexity.
236236
237
-You'll need to use :c:func:`spin_lock()` and
238
-:c:func:`spin_unlock()` for shared data.
237
+You'll need to use spin_lock() and
238
+spin_unlock() for shared data.
239239
240240 Different Softirqs
241241 ~~~~~~~~~~~~~~~~~~
242242
243
-You'll need to use :c:func:`spin_lock()` and
244
-:c:func:`spin_unlock()` for shared data, whether it be a timer,
243
+You'll need to use spin_lock() and
244
+spin_unlock() for shared data, whether it be a timer,
245245 tasklet, different softirq or the same or another softirq: any of them
246246 could be running on a different CPU.
247247
....@@ -259,38 +259,38 @@
259259 concerns. Firstly, the softirq processing can be interrupted by a
260260 hardware interrupt, and secondly, the critical region could be entered
261261 by a hardware interrupt on another CPU. This is where
262
-:c:func:`spin_lock_irq()` is used. It is defined to disable
262
+spin_lock_irq() is used. It is defined to disable
263263 interrupts on that CPU, then grab the lock.
264
-:c:func:`spin_unlock_irq()` does the reverse.
264
+spin_unlock_irq() does the reverse.
265265
266
-The irq handler does not to use :c:func:`spin_lock_irq()`, because
266
+The irq handler does not need to use spin_lock_irq(), because
267267 the softirq cannot run while the irq handler is running: it can use
268
-:c:func:`spin_lock()`, which is slightly faster. The only exception
268
+spin_lock(), which is slightly faster. The only exception
269269 would be if a different hardware irq handler uses the same lock:
270
-:c:func:`spin_lock_irq()` will stop that from interrupting us.
270
+spin_lock_irq() will stop that from interrupting us.
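
For example (hypothetical lock and handlers, assuming only this one
hardware handler uses the lock)::

    #include <linux/interrupt.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(dev_lock);

    /* Softirq side: must keep the hardware interrupt out. */
    static void my_softirq_work(void)
    {
            spin_lock_irq(&dev_lock);
            /* ... touch the shared data ... */
            spin_unlock_irq(&dev_lock);
    }

    /* Hard irq side: the softirq cannot run while we do, so the
     * cheaper plain lock is enough. */
    static irqreturn_t my_irq_handler(int irq, void *dev_id)
    {
            spin_lock(&dev_lock);
            /* ... touch the shared data ... */
            spin_unlock(&dev_lock);
            return IRQ_HANDLED;
    }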
271271
272272 This works perfectly for UP as well: the spin lock vanishes, and this
273
-macro simply becomes :c:func:`local_irq_disable()`
273
+macro simply becomes local_irq_disable()
274274 (``include/asm/smp.h``), which protects you from the softirq/tasklet/BH
275275 being run.
276276
277
-:c:func:`spin_lock_irqsave()` (``include/linux/spinlock.h``) is a
277
+spin_lock_irqsave() (``include/linux/spinlock.h``) is a
278278 variant which saves whether interrupts were on or off in a flags word,
279
-which is passed to :c:func:`spin_unlock_irqrestore()`. This means
279
+which is passed to spin_unlock_irqrestore(). This means
280280 that the same code can be used inside a hard irq handler (where
281281 interrupts are already off) and in softirqs (where the irq disabling is
282282 required).
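
A sketch of the irqsave variant, callable whether or not interrupts are
already disabled (lock and helper are hypothetical)::

    static DEFINE_SPINLOCK(shared_lock);

    static void touch_shared(void)
    {
            unsigned long flags;

            spin_lock_irqsave(&shared_lock, flags);      /* saves irq state */
            /* ... touch the shared data ... */
            spin_unlock_irqrestore(&shared_lock, flags); /* restores it */
    }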
283283
284284 Note that softirqs (and hence tasklets and timers) are run on return
285
-from hardware interrupts, so :c:func:`spin_lock_irq()` also stops
286
-these. In that sense, :c:func:`spin_lock_irqsave()` is the most
285
+from hardware interrupts, so spin_lock_irq() also stops
286
+these. In that sense, spin_lock_irqsave() is the most
287287 general and powerful locking function.
288288
289289 Locking Between Two Hard IRQ Handlers
290290 -------------------------------------
291291
292292 It is rare to have to share data between two IRQ handlers, but if you
293
-do, :c:func:`spin_lock_irqsave()` should be used: it is
293
+do, spin_lock_irqsave() should be used: it is
294294 architecture-specific whether all interrupts are disabled inside irq
295295 handlers themselves.
296296
....@@ -304,11 +304,11 @@
304304 (``copy_from_user*()`` or ``kmalloc(x, GFP_KERNEL)``).
305305
306306 - Otherwise (== data can be touched in an interrupt), use
307
- :c:func:`spin_lock_irqsave()` and
308
- :c:func:`spin_unlock_irqrestore()`.
307
+ spin_lock_irqsave() and
308
+ spin_unlock_irqrestore().
309309
310310 - Avoid holding spinlock for more than 5 lines of code and across any
311
- function call (except accessors like :c:func:`readb()`).
311
+ function call (except accessors like readb()).
312312
313313 Table of Minimum Requirements
314314 -----------------------------
....@@ -320,7 +320,7 @@
320320 shares data with another thread, locking is required).
321321
322322 Remember the advice above: you can always use
323
-:c:func:`spin_lock_irqsave()`, which is a superset of all other
323
+spin_lock_irqsave(), which is a superset of all other
324324 spinlock primitives.
325325
326326 ============== ============= ============= ========= ========= ========= ========= ======= ======= ============== ==============
....@@ -363,13 +363,13 @@
363363 lock when some other thread is holding the lock. You should acquire the
364364 lock later if you then need access to the data protected with the lock.
365365
366
-:c:func:`spin_trylock()` does not spin but returns non-zero if it
366
+spin_trylock() does not spin but returns non-zero if it
367367 acquires the spinlock on the first try or 0 if not. This function can be
368
-used in all contexts like :c:func:`spin_lock()`: you must have
368
+used in all contexts like spin_lock(): you must have
369369 disabled the contexts that might interrupt you and acquire the spin
370370 lock.
371371
372
-:c:func:`mutex_trylock()` does not suspend your task but returns
372
+mutex_trylock() does not suspend your task but returns
373373 non-zero if it could lock the mutex on the first try or 0 if not. This
374374 function cannot be safely used in hardware or software interrupt
375375 contexts despite not sleeping.
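
A sketch of both, reusing the hypothetical locks from earlier sketches::

    /* Atomic context: skip the update rather than spin. */
    static void try_quick_update(void)
    {
            if (spin_trylock(&shared_lock)) {
                    /* ... quick update ... */
                    spin_unlock(&shared_lock);
            } else {
                    /* couldn't get it: defer and retry later */
            }
    }

    /* User context only, despite never sleeping. */
    static void try_update(void)
    {
            if (mutex_trylock(&my_mutex)) {
                    /* ... update ... */
                    mutex_unlock(&my_mutex);
            }
    }
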
....@@ -451,7 +451,7 @@
451451 if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL)
452452 return -ENOMEM;
453453
454
- strlcpy(obj->name, name, sizeof(obj->name));
454
+ strscpy(obj->name, name, sizeof(obj->name));
455455 obj->id = id;
456456 obj->popularity = 0;
457457
....@@ -490,14 +490,14 @@
490490 objects directly.
491491
492492 There is a slight (and common) optimization here: in
493
-:c:func:`cache_add()` we set up the fields of the object before
493
+cache_add() we set up the fields of the object before
494494 grabbing the lock. This is safe, as no-one else can access it until we
495495 put it in cache.
496496
497497 Accessing From Interrupt Context
498498 --------------------------------
499499
500
-Now consider the case where :c:func:`cache_find()` can be called
500
+Now consider the case where cache_find() can be called
501501 from interrupt context: either a hardware interrupt or a softirq. An
502502 example would be a timer which deletes objects from the cache.
503503
....@@ -566,16 +566,16 @@
566566 return ret;
567567 }
568568
569
-Note that the :c:func:`spin_lock_irqsave()` will turn off
569
+Note that the spin_lock_irqsave() will turn off
570570 interrupts if they are on, otherwise does nothing (if we are already in
571571 an interrupt handler), hence these functions are safe to call from any
572572 context.
573573
574
-Unfortunately, :c:func:`cache_add()` calls :c:func:`kmalloc()`
574
+Unfortunately, cache_add() calls kmalloc()
575575 with the ``GFP_KERNEL`` flag, which is only legal in user context. I
576
-have assumed that :c:func:`cache_add()` is still only called in
576
+have assumed that cache_add() is still only called in
577577 user context, otherwise this should become a parameter to
578
-:c:func:`cache_add()`.
578
+cache_add().
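
One way to lift that restriction, sketched here against the chapter's
running example (the ``gfp_t`` parameter is my addition, not part of the
original code)::

    int cache_add(int id, const char *name, gfp_t gfp)
    {
            struct object *obj;
            unsigned long flags;

            obj = kmalloc(sizeof(*obj), gfp);   /* GFP_ATOMIC from irq */
            if (!obj)
                    return -ENOMEM;

            strscpy(obj->name, name, sizeof(obj->name));
            obj->id = id;
            obj->popularity = 0;

            spin_lock_irqsave(&cache_lock, flags);
            __cache_add(obj);
            spin_unlock_irqrestore(&cache_lock, flags);
            return 0;
    }

User-context callers pass ``GFP_KERNEL``; interrupt-context callers pass
``GFP_ATOMIC``, which never sleeps but may fail under memory pressure.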
579579
580580 Exposing Objects Outside This File
581581 ----------------------------------
....@@ -592,7 +592,7 @@
592592 The second problem is the lifetime problem: if another structure keeps a
593593 pointer to an object, it presumably expects that pointer to remain
594594 valid. Unfortunately, this is only guaranteed while you hold the lock,
595
-otherwise someone might call :c:func:`cache_delete()` and even
595
+otherwise someone might call cache_delete() and even
596596 worse, add another object, re-using the same address.
597597
598598 As there is only one lock, you can't hold it forever: no-one else would
....@@ -660,7 +660,7 @@
660660 }
661661
662662 @@ -63,6 +94,7 @@
663
- strlcpy(obj->name, name, sizeof(obj->name));
663
+ strscpy(obj->name, name, sizeof(obj->name));
664664 obj->id = id;
665665 obj->popularity = 0;
666666 + obj->refcnt = 1; /* The cache holds a reference */
....@@ -693,8 +693,8 @@
693693
694694 We encapsulate the reference counting in the standard 'get' and 'put'
695695 functions. Now we can return the object itself from
696
-:c:func:`cache_find()` which has the advantage that the user can
697
-now sleep holding the object (eg. to :c:func:`copy_to_user()` to
696
+cache_find() which has the advantage that the user can
697
+now sleep holding the object (e.g. to copy_to_user() the
698698 name to userspace).
699699
700700 The other point to note is that I said a reference should be held for
....@@ -710,7 +710,7 @@
710710 are guaranteed to be seen atomically from all CPUs in the system, so no
711711 lock is required. In this case, it is simpler than using spinlocks,
712712 although for anything non-trivial using spinlocks is clearer. The
713
-:c:func:`atomic_inc()` and :c:func:`atomic_dec_and_test()`
713
+atomic_inc() and atomic_dec_and_test()
714714 are used instead of the standard increment and decrement operators, and
715715 the lock is no longer used to protect the reference count itself.
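
The get/put pair then reduces to the following (a sketch consistent with
the chapter's running example)::

    static void object_get(struct object *obj)
    {
            atomic_inc(&obj->refcnt);
    }

    static void object_put(struct object *obj)
    {
            /* atomic_dec_and_test() returns true only for the final put. */
            if (atomic_dec_and_test(&obj->refcnt))
                    kfree(obj);
    }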
716716
....@@ -774,7 +774,7 @@
774774 }
775775
776776 @@ -94,7 +76,7 @@
777
- strlcpy(obj->name, name, sizeof(obj->name));
777
+ strscpy(obj->name, name, sizeof(obj->name));
778778 obj->id = id;
779779 obj->popularity = 0;
780780 - obj->refcnt = 1; /* The cache holds a reference */
....@@ -802,7 +802,7 @@
802802 - You can make ``cache_lock`` non-static, and tell people to grab that
803803 lock before changing the name in any object.
804804
805
-- You can provide a :c:func:`cache_obj_rename()` which grabs this
805
+- You can provide a cache_obj_rename() which grabs this
806806 lock and changes the name for the caller, and tell everyone to use
807807 that function, as sketched below.
808808
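A sketch of that second option (hypothetical helper, reusing the
example's ``cache_lock``)::

    void cache_obj_rename(struct object *obj, const char *name)
    {
            spin_lock(&cache_lock);
            strscpy(obj->name, name, sizeof(obj->name));
            spin_unlock(&cache_lock);
    }
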
....@@ -861,11 +861,11 @@
861861 ``cache_lock`` rather than the per-object lock: this is because it (like
862862 the :c:type:`struct list_head <list_head>` inside the object)
863863 is logically part of the infrastructure. This way, I don't need to grab
864
-the lock of every object in :c:func:`__cache_add()` when seeking
864
+the lock of every object in __cache_add() when seeking
865865 the least popular.
866866
867867 I also decided that the id member is unchangeable, so I don't need to
868
-grab each object lock in :c:func:`__cache_find()` to examine the
868
+grab each object lock in __cache_find() to examine the
869869 id: the object lock is only used by a caller who wants to read or write
870870 the name field.
871871
....@@ -887,7 +887,7 @@
887887 stay-up-five-nights-talk-to-fluffy-code-bunnies kind of problem.
888888
889889 For a slightly more complex case, imagine you have a region shared by a
890
-softirq and user context. If you use a :c:func:`spin_lock()` call
890
+softirq and user context. If you use a spin_lock() call
891891 to protect it, it is possible that the user context will be interrupted
892892 by the softirq while it holds the lock, and the softirq will then spin
893893 forever trying to get the same lock.
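
The broken pattern, sketched (``shared_lock`` is hypothetical; the fix is
for the user-context side to use spin_lock_bh() instead)::

    /* WRONG: user context does not block softirqs. */
    static void user_side(void)
    {
            spin_lock(&shared_lock);    /* a softirq may fire here... */
            /* ... */
            spin_unlock(&shared_lock);
    }

    /* ...and its handler then spins on the lock this CPU already holds. */
    static void softirq_side(void)
    {
            spin_lock(&shared_lock);    /* deadlock on the same CPU */
            /* ... */
            spin_unlock(&shared_lock);
    }
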
....@@ -985,12 +985,12 @@
985985
986986
987987 Sooner or later, this will crash on SMP, because a timer can have just
988
-gone off before the :c:func:`spin_lock_bh()`, and it will only get
989
-the lock after we :c:func:`spin_unlock_bh()`, and then try to free
988
+gone off before the spin_lock_bh(), and it will only get
989
+the lock after we spin_unlock_bh(), and then try to free
990990 the element (which has already been freed!).
991991
992992 This can be avoided by checking the result of
993
-:c:func:`del_timer()`: if it returns 1, the timer has been deleted.
993
+del_timer(): if it returns 1, the timer has been deleted.
994994 If 0, it means (in this case) that it is currently running, so we can
995995 do::
996996
....@@ -1012,9 +1012,9 @@
10121012
10131013
10141014 Another common problem is deleting timers which restart themselves (by
1015
-calling :c:func:`add_timer()` at the end of their timer function).
1015
+calling add_timer() at the end of their timer function).
10161016 Because this is a fairly common case which is prone to races, you should
1017
-use :c:func:`del_timer_sync()` (``include/linux/timer.h``) to
1017
+use del_timer_sync() (``include/linux/timer.h``) to
10181018 handle this case. It returns the number of times the timer had to be
10191019 deleted before we finally stopped it from adding itself back in.
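
A teardown sketch (the object and its embedded timer are hypothetical)::

    #include <linux/timer.h>

    static void my_obj_teardown(struct my_obj *obj)
    {
            /* Deactivates the timer, waits for a running handler to
             * finish, and re-deletes if the handler re-armed it. */
            del_timer_sync(&obj->timer);
            kfree(obj);     /* now safe: the timer can no longer fire */
    }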
10201020
....@@ -1086,7 +1086,7 @@
10861086 list->next = new;
10871087
10881088
1089
-The :c:func:`wmb()` is a write memory barrier. It ensures that the
1089
+The wmb() is a write memory barrier. It ensures that the
10901090 first operation (setting the new element's ``next`` pointer) is complete
10911091 and will be seen by all CPUs, before the second operation is (putting
10921092 the new element into the list). This is important, since modern
....@@ -1097,7 +1097,7 @@
10971097
10981098 Fortunately, there is a function to do this for standard
10991099 :c:type:`struct list_head <list_head>` lists:
1100
-:c:func:`list_add_rcu()` (``include/linux/list.h``).
1100
+list_add_rcu() (``include/linux/list.h``).
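
With the helper, insertion in the running example might look like this
sketch (``new`` is the fully initialised object; writers still exclude
each other with ``cache_lock``)::

    spin_lock(&cache_lock);
    list_add_rcu(&new->list, &cache);   /* publishes with the barrier */
    spin_unlock(&cache_lock);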
11011101
11021102 Removing an element from the list is even simpler: we replace the
11031103 pointer to the old element with a pointer to its successor, and readers
....@@ -1108,7 +1108,7 @@
11081108 list->next = old->next;
11091109
11101110
1111
-There is :c:func:`list_del_rcu()` (``include/linux/list.h``) which
1111
+There is list_del_rcu() (``include/linux/list.h``) which
11121112 does this (the normal version poisons the old object, which we don't
11131113 want).
11141114
....@@ -1116,9 +1116,9 @@
11161116 pointer to start reading the contents of the next element early, but
11171117 don't realize that the pre-fetched contents are wrong when the ``next``
11181118 pointer changes underneath them. Once again, there is a
1119
-:c:func:`list_for_each_entry_rcu()` (``include/linux/list.h``)
1119
+list_for_each_entry_rcu() (``include/linux/list.h``)
11201120 to help you. Of course, writers can just use
1121
-:c:func:`list_for_each_entry()`, since there cannot be two
1121
+list_for_each_entry(), since there cannot be two
11221122 simultaneous writers.
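
A reader-side traversal sketch in the spirit of the chapter's example
(``id`` is the key being looked up)::

    struct object *obj;

    rcu_read_lock();                    /* see below: disables preemption */
    list_for_each_entry_rcu(obj, &cache, list) {
            if (obj->id == id) {
                    /* ... use obj, but don't sleep ... */
                    break;
            }
    }
    rcu_read_unlock();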
11231123
11241124 Our final dilemma is this: when can we actually destroy the removed
....@@ -1127,14 +1127,14 @@
11271127 changes, the reader will jump off into garbage and crash. We need to
11281128 wait until we know that all the readers who were traversing the list
11291129 when we deleted the element are finished. We use
1130
-:c:func:`call_rcu()` to register a callback which will actually
1130
+call_rcu() to register a callback which will actually
11311131 destroy the object once all pre-existing readers are finished.
1132
-Alternatively, :c:func:`synchronize_rcu()` may be used to block
1132
+Alternatively, synchronize_rcu() may be used to block
11331133 until all pre-existing readers are finished.
11341134
11351135 But how does Read Copy Update know when the readers are finished? The
11361136 method is this: firstly, the readers always traverse the list inside
1137
-:c:func:`rcu_read_lock()`/:c:func:`rcu_read_unlock()` pairs:
1137
+rcu_read_lock()/rcu_read_unlock() pairs:
11381138 these simply disable preemption so the reader won't go to sleep while
11391139 reading the list.
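
Putting the pieces together, deletion might be sketched like this
(assuming a ``struct rcu_head rcu`` member has been added to
``struct object``)::

    static void free_object(struct rcu_head *head)
    {
            kfree(container_of(head, struct object, rcu));
    }

    static void cache_delete(struct object *obj)
    {
            spin_lock(&cache_lock);
            list_del_rcu(&obj->list);   /* readers may still see obj */
            spin_unlock(&cache_lock);
            call_rcu(&obj->rcu, free_object); /* freed after readers drain */
    }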
11401140
....@@ -1223,12 +1223,12 @@
12231223 }
12241224
12251225 Note that the reader will alter the popularity member in
1226
-:c:func:`__cache_find()`, and now it doesn't hold a lock. One
1226
+__cache_find(), and now it doesn't hold a lock. One
12271227 solution would be to make it an ``atomic_t``, but for this usage, we
12281228 don't really care about races: an approximate result is good enough, so
12291229 I didn't change it.
12301230
1231
-The result is that :c:func:`cache_find()` requires no
1231
+The result is that cache_find() requires no
12321232 synchronization with any other functions, so is almost as fast on SMP as
12331233 it would be on UP.
12341234
....@@ -1240,9 +1240,9 @@
12401240
12411241 Now, because the 'read lock' in RCU is simply disabling preemption, a
12421242 caller which always has preemption disabled between calling
1243
-:c:func:`cache_find()` and :c:func:`object_put()` does not
1243
+cache_find() and object_put() does not
12441244 need to actually get and put the reference count: we could expose
1245
-:c:func:`__cache_find()` by making it non-static, and such
1245
+__cache_find() by making it non-static, and such
12461246 callers could simply call that.
12471247
12481248 The benefit here is that the reference count is not written to: the
....@@ -1260,11 +1260,11 @@
12601260 If that was too slow (it's usually not, but if you've got a really big
12611261 machine to test on and can show that it is), you could instead use a
12621262 counter for each CPU, then none of them need an exclusive lock. See
1263
-:c:func:`DEFINE_PER_CPU()`, :c:func:`get_cpu_var()` and
1264
-:c:func:`put_cpu_var()` (``include/linux/percpu.h``).
1263
+DEFINE_PER_CPU(), get_cpu_var() and
1264
+put_cpu_var() (``include/linux/percpu.h``).
12651265
12661266 Of particular use for simple per-cpu counters is the ``local_t`` type,
1267
-and the :c:func:`cpu_local_inc()` and related functions, which are
1267
+and the cpu_local_inc() and related functions, which are
12681268 more efficient than simple code on some architectures
12691269 (``include/asm/local.h``).
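
A per-CPU counter sketch using the interfaces named above::

    #include <linux/percpu.h>

    static DEFINE_PER_CPU(unsigned long, my_count);

    static void my_count_inc(void)
    {
            get_cpu_var(my_count)++;    /* disables preemption */
            put_cpu_var(my_count);      /* re-enables it */
    }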
12701270
....@@ -1289,10 +1289,10 @@
12891289 enable_irq(irq);
12901290 spin_unlock(&lock);
12911291
1292
-The :c:func:`disable_irq()` prevents the irq handler from running
1292
+The disable_irq() prevents the irq handler from running
12931293 (and waits for it to finish if it's currently running on other CPUs).
12941294 The spinlock prevents any other accesses happening at the same time.
1295
-Naturally, this is slower than just a :c:func:`spin_lock_irq()`
1295
+Naturally, this is slower than just a spin_lock_irq()
12961296 call, so it only makes sense if this type of access happens extremely
12971297 rarely.
12981298
....@@ -1315,22 +1315,22 @@
13151315
13161316 - Accesses to userspace:
13171317
1318
- - :c:func:`copy_from_user()`
1318
+ - copy_from_user()
13191319
1320
- - :c:func:`copy_to_user()`
1320
+ - copy_to_user()
13211321
1322
- - :c:func:`get_user()`
1322
+ - get_user()
13231323
1324
- - :c:func:`put_user()`
1324
+ - put_user()
13251325
1326
-- :c:func:`kmalloc(GFP_KERNEL) <kmalloc>`
1326
+- kmalloc(GFP_KERNEL)
13271327
1328
-- :c:func:`mutex_lock_interruptible()` and
1329
- :c:func:`mutex_lock()`
1328
+- mutex_lock_interruptible() and
1329
+ mutex_lock()
13301330
1331
- There is a :c:func:`mutex_trylock()` which does not sleep.
1331
+ There is a mutex_trylock() which does not sleep.
13321332 Still, it must not be used inside interrupt context since its
1333
- implementation is not safe for that. :c:func:`mutex_unlock()`
1333
+ implementation is not safe for that. mutex_unlock()
13341334 will also never sleep. It cannot be used in interrupt context either
13351335 since a mutex must be released by the same task that acquired it.
13361336
....@@ -1340,11 +1340,11 @@
13401340 Some functions are safe to call from any context, or holding almost any
13411341 lock.
13421342
1343
-- :c:func:`printk()`
1343
+- printk()
13441344
1345
-- :c:func:`kfree()`
1345
+- kfree()
13461346
1347
-- :c:func:`add_timer()` and :c:func:`del_timer()`
1347
+- add_timer() and del_timer()
13481348
13491349 Mutex API reference
13501350 ===================
....@@ -1364,7 +1364,7 @@
13641364 Further reading
13651365 ===============
13661366
1367
-- ``Documentation/locking/spinlocks.txt``: Linus Torvalds' spinlocking
1367
+- ``Documentation/locking/spinlocks.rst``: Linus Torvalds' spinlocking
13681368 tutorial in the kernel sources.
13691369
13701370 - Unix Systems for Modern Architectures: Symmetric Multiprocessing and
....@@ -1400,26 +1400,26 @@
14001400
14011401 bh
14021402 Bottom Half: for historical reasons, functions with '_bh' in them often
1403
- now refer to any software interrupt, e.g. :c:func:`spin_lock_bh()`
1403
+ now refer to any software interrupt, e.g. spin_lock_bh()
14041404 blocks any software interrupt on the current CPU. Bottom halves are
14051405 deprecated, and will eventually be replaced by tasklets. Only one bottom
14061406 half will be running at any time.
14071407
14081408 Hardware Interrupt / Hardware IRQ
1409
- Hardware interrupt request. :c:func:`in_irq()` returns true in a
1409
+ Hardware interrupt request. in_irq() returns true in a
14101410 hardware interrupt handler.
14111411
14121412 Interrupt Context
14131413 Not user context: processing a hardware irq or software irq. Indicated
1414
- by the :c:func:`in_interrupt()` macro returning true.
1414
+ by the in_interrupt() macro returning true.
14151415
14161416 SMP
14171417 Symmetric Multi-Processor: kernels compiled for multiple-CPU machines.
14181418 (``CONFIG_SMP=y``).
14191419
14201420 Software Interrupt / softirq
1421
- Software interrupt handler. :c:func:`in_irq()` returns false;
1422
- :c:func:`in_softirq()` returns true. Tasklets and softirqs both
1421
+ Software interrupt handler. in_irq() returns false;
1422
+ in_softirq() returns true. Tasklets and softirqs both
14231423 fall into the category of 'software interrupts'.
14241424
14251425 Strictly speaking a softirq is one of up to 32 enumerated software