2023-12-11 6778948f9de86c3cfaf36725a7c87dcff9ba247f
kernel/Documentation/RCU/Design/Requirements/Requirements.rst
@@ -78,7 +78,7 @@
 Production-quality implementations of ``rcu_read_lock()`` and
 ``rcu_read_unlock()`` are extremely lightweight, and in fact have
 exactly zero overhead in Linux kernels built for production use with
-``CONFIG_PREEMPTION=n``.
+``CONFIG_PREEMPT=n``.
 
 This guarantee allows ordering to be enforced with extremely low
 overhead to readers, for example:
@@ -1182,7 +1182,7 @@
 costs have plummeted. However, as I learned from Matt Mackall's
 `bloatwatch <http://elinux.org/Linux_Tiny-FAQ>`__ efforts, memory
 footprint is critically important on single-CPU systems with
-non-preemptible (``CONFIG_PREEMPTION=n``) kernels, and thus `tiny
+non-preemptible (``CONFIG_PREEMPT=n``) kernels, and thus `tiny
 RCU <https://lkml.kernel.org/g/20090113221724.GA15307@linux.vnet.ibm.com>`__
 was born. Josh Triplett has since taken over the small-memory banner
 with his `Linux kernel tinification <https://tiny.wiki.kernel.org/>`__
@@ -1498,7 +1498,7 @@
 
 Implementations of RCU for which ``rcu_read_lock()`` and
 ``rcu_read_unlock()`` generate no code, such as Linux-kernel RCU when
-``CONFIG_PREEMPTION=n``, can be nested arbitrarily deeply. After all, there
+``CONFIG_PREEMPT=n``, can be nested arbitrarily deeply. After all, there
 is no overhead. Except that if all these instances of
 ``rcu_read_lock()`` and ``rcu_read_unlock()`` are visible to the
 compiler, compilation will eventually fail due to exhausting memory,
@@ -1771,7 +1771,7 @@
 
 However, once the scheduler has spawned its first kthread, this early
 boot trick fails for ``synchronize_rcu()`` (as well as for
-``synchronize_rcu_expedited()``) in ``CONFIG_PREEMPTION=y`` kernels. The
+``synchronize_rcu_expedited()``) in ``CONFIG_PREEMPT=y`` kernels. The
 reason is that an RCU read-side critical section might be preempted,
 which means that a subsequent ``synchronize_rcu()`` really does have to
 wait for something, as opposed to simply returning immediately.
@@ -2010,7 +2010,7 @@
    5 rcu_read_unlock();
    6 do_something_with(v, user_v);
 
-If the compiler did make this transformation in a ``CONFIG_PREEMPTION=n`` kernel
+If the compiler did make this transformation in a ``CONFIG_PREEMPT=n`` kernel
 build, and if ``get_user()`` did page fault, the result would be a quiescent
 state in the middle of an RCU read-side critical section. This misplaced
 quiescent state could result in line 4 being a use-after-free access,
@@ -2289,10 +2289,10 @@
 
 The Linux kernel is used for real-time workloads, especially in
 conjunction with the `-rt
-patchset <https://wiki.linuxfoundation.org/realtime/>`__. The
+patchset <https://rt.wiki.kernel.org/index.php/Main_Page>`__. The
 real-time-latency response requirements are such that the traditional
 approach of disabling preemption across RCU read-side critical sections
-is inappropriate. Kernels built with ``CONFIG_PREEMPTION=y`` therefore use
+is inappropriate. Kernels built with ``CONFIG_PREEMPT=y`` therefore use
 an RCU implementation that allows RCU read-side critical sections to be
 preempted. This requirement made its presence known after users made it
 clear that an earlier `real-time
@@ -2414,7 +2414,7 @@
 ``call_rcu_bh()``, ``rcu_barrier_bh()``, and
 ``rcu_read_lock_bh_held()``. However, the update-side APIs are now
 simple wrappers for other RCU flavors, namely RCU-sched in
-CONFIG_PREEMPTION=n kernels and RCU-preempt otherwise.
+CONFIG_PREEMPT=n kernels and RCU-preempt otherwise.
 
 Sched Flavor (Historical)
 ~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -2432,11 +2432,11 @@
 RCU read-side critical section can be a quiescent state. Therefore,
 *RCU-sched* was created, which follows “classic” RCU in that an
 RCU-sched grace period waits for pre-existing interrupt and NMI
-handlers. In kernels built with ``CONFIG_PREEMPTION=n``, the RCU and
+handlers. In kernels built with ``CONFIG_PREEMPT=n``, the RCU and
 RCU-sched APIs have identical implementations, while kernels built with
-``CONFIG_PREEMPTION=y`` provide a separate implementation for each.
+``CONFIG_PREEMPT=y`` provide a separate implementation for each.
 
-Note well that in ``CONFIG_PREEMPTION=y`` kernels,
+Note well that in ``CONFIG_PREEMPT=y`` kernels,
 ``rcu_read_lock_sched()`` and ``rcu_read_unlock_sched()`` disable and
 re-enable preemption, respectively. This means that if there was a
 preemption attempt during the RCU-sched read-side critical section,
@@ -2599,10 +2599,10 @@
 
 The tasks-RCU API is quite compact, consisting only of
 ``call_rcu_tasks()``, ``synchronize_rcu_tasks()``, and
-``rcu_barrier_tasks()``. In ``CONFIG_PREEMPTION=n`` kernels, trampolines
+``rcu_barrier_tasks()``. In ``CONFIG_PREEMPT=n`` kernels, trampolines
 cannot be preempted, so these APIs map to ``call_rcu()``,
 ``synchronize_rcu()``, and ``rcu_barrier()``, respectively. In
-``CONFIG_PREEMPTION=y`` kernels, trampolines can be preempted, and these
+``CONFIG_PREEMPT=y`` kernels, trampolines can be preempted, and these
 three APIs are therefore implemented by separate functions that check
 for voluntary context switches.