From 1543e317f1da31b75942316931e8f491a8920811 Mon Sep 17 00:00:00 2001
From: hc <hc@nodka.com>
Date: Thu, 04 Jan 2024 10:08:02 +0000
Subject: [PATCH] Documentation/RCU: replace CONFIG_PREEMPTION with CONFIG_PREEMPT in Requirements.rst

---
 kernel/Documentation/RCU/Design/Requirements/Requirements.rst |   26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/kernel/Documentation/RCU/Design/Requirements/Requirements.rst b/kernel/Documentation/RCU/Design/Requirements/Requirements.rst
index 17d3848..1ae79a1 100644
--- a/kernel/Documentation/RCU/Design/Requirements/Requirements.rst
+++ b/kernel/Documentation/RCU/Design/Requirements/Requirements.rst
@@ -78,7 +78,7 @@
 Production-quality implementations of ``rcu_read_lock()`` and
 ``rcu_read_unlock()`` are extremely lightweight, and in fact have
 exactly zero overhead in Linux kernels built for production use with
-``CONFIG_PREEMPTION=n``.
+``CONFIG_PREEMPT=n``.
 
 This guarantee allows ordering to be enforced with extremely low
 overhead to readers, for example:
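
(The upstream example falls outside this hunk boundary; as a stand-in, here is
a minimal sketch of the reader-side pattern the text refers to, where ``gp``,
``struct foo``, and ``do_something()`` are illustrative placeholders rather
than upstream identifiers::

   struct foo {
           int a;
   };

   struct foo __rcu *gp;  /* Updated elsewhere via rcu_assign_pointer(). */

   static void reader(void)
   {
           struct foo *p;

           rcu_read_lock();             /* Zero cost with CONFIG_PREEMPT=n. */
           p = rcu_dereference(gp);     /* Ordered fetch of protected pointer. */
           if (p)
                   do_something(p->a);  /* p stays live until the unlock. */
           rcu_read_unlock();
   }

)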
@@ -1182,7 +1182,7 @@
 costs have plummeted. However, as I learned from Matt Mackall's
 `bloatwatch <http://elinux.org/Linux_Tiny-FAQ>`__ efforts, memory
 footprint is critically important on single-CPU systems with
-non-preemptible (``CONFIG_PREEMPTION=n``) kernels, and thus `tiny
+non-preemptible (``CONFIG_PREEMPT=n``) kernels, and thus `tiny
 RCU <https://lkml.kernel.org/g/20090113221724.GA15307@linux.vnet.ibm.com>`__
 was born. Josh Triplett has since taken over the small-memory banner
 with his `Linux kernel tinification <https://tiny.wiki.kernel.org/>`__
@@ -1498,7 +1498,7 @@
 
 Implementations of RCU for which ``rcu_read_lock()`` and
 ``rcu_read_unlock()`` generate no code, such as Linux-kernel RCU when
-``CONFIG_PREEMPTION=n``, can be nested arbitrarily deeply. After all, there
+``CONFIG_PREEMPT=n``, can be nested arbitrarily deeply. After all, there
 is no overhead. Except that if all these instances of
 ``rcu_read_lock()`` and ``rcu_read_unlock()`` are visible to the
 compiler, compilation will eventually fail due to exhausting memory,
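
(For instance, the following nesting is legal and, in ``CONFIG_PREEMPT=n``
kernels, free at run time; a minimal sketch in which ``do_reader_work()`` is
an assumed helper::

   rcu_read_lock();
   rcu_read_lock();       /* Nested readers are permitted... */
   do_reader_work();
   rcu_read_unlock();
   rcu_read_unlock();     /* ...but every lock needs a matching unlock. */

)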
@@ -1771,7 +1771,7 @@
 
 However, once the scheduler has spawned its first kthread, this early
 boot trick fails for ``synchronize_rcu()`` (as well as for
-``synchronize_rcu_expedited()``) in ``CONFIG_PREEMPTION=y`` kernels. The
+``synchronize_rcu_expedited()``) in ``CONFIG_PREEMPT=y`` kernels. The
 reason is that an RCU read-side critical section might be preempted,
 which means that a subsequent ``synchronize_rcu()`` really does have to
 wait for something, as opposed to simply returning immediately.
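
(A minimal updater-side sketch of the waiting described above; ``gp``,
``new_p``, and ``old_p`` are illustrative placeholders::

   rcu_assign_pointer(gp, new_p); /* Publish the replacement version. */
   synchronize_rcu();             /* In CONFIG_PREEMPT=y kernels, this may have
                                   * to wait out readers preempted inside their
                                   * critical sections rather than return
                                   * immediately. */
   kfree(old_p);                  /* No pre-existing reader can still see old_p. */

)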
@@ -2010,7 +2010,7 @@
        5 rcu_read_unlock();
        6 do_something_with(v, user_v);
 
-If the compiler did make this transformation in a ``CONFIG_PREEMPTION=n`` kernel
+If the compiler did make this transformation in a ``CONFIG_PREEMPT=n`` kernel
 build, and if ``get_user()`` did page fault, the result would be a quiescent
 state in the middle of an RCU read-side critical section.  This misplaced
 quiescent state could result in line 4 being a use-after-free access,
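
(For contrast, the untransformed code keeps ``get_user()``, and therefore any
page fault, outside the critical section. This version is reconstructed from
the surrounding description, so treat it as a sketch rather than a quotation::

   1 rcu_read_lock();
   2 p = rcu_dereference(gp);
   3 v = p->value;
   4 rcu_read_unlock();
   5 get_user(user_v, user_p);   /* A page fault here is outside the reader. */
   6 do_something_with(v, user_v);

)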
@@ -2289,10 +2289,10 @@
 
 The Linux kernel is used for real-time workloads, especially in
 conjunction with the `-rt
-patchset <https://wiki.linuxfoundation.org/realtime/>`__. The
+patchset <https://rt.wiki.kernel.org/index.php/Main_Page>`__. The
 real-time-latency response requirements are such that the traditional
 approach of disabling preemption across RCU read-side critical sections
-is inappropriate. Kernels built with ``CONFIG_PREEMPTION=y`` therefore use
+is inappropriate. Kernels built with ``CONFIG_PREEMPT=y`` therefore use
 an RCU implementation that allows RCU read-side critical sections to be
 preempted. This requirement made its presence known after users made it
 clear that an earlier `real-time
@@ -2414,7 +2414,7 @@
 ``call_rcu_bh()``, ``rcu_barrier_bh()``, and
 ``rcu_read_lock_bh_held()``. However, the update-side APIs are now
 simple wrappers for other RCU flavors, namely RCU-sched in
-CONFIG_PREEMPTION=n kernels and RCU-preempt otherwise.
+CONFIG_PREEMPT=n kernels and RCU-preempt otherwise.
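
(The read-side APIs remain distinct from the other flavors, since
``rcu_read_lock_bh()`` also disables softirq processing. A minimal usage
sketch, with ``gp`` and ``handle_packet()`` as assumed placeholders::

   rcu_read_lock_bh();           /* Also disables softirq processing. */
   p = rcu_dereference_bh(gp);   /* bh-flavored ordered pointer fetch. */
   if (p)
           handle_packet(p);
   rcu_read_unlock_bh();

)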
 
 Sched Flavor (Historical)
 ~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -2432,11 +2432,11 @@
 RCU read-side critical section can be a quiescent state. Therefore,
 *RCU-sched* was created, which follows “classic” RCU in that an
 RCU-sched grace period waits for pre-existing interrupt and NMI
-handlers. In kernels built with ``CONFIG_PREEMPTION=n``, the RCU and
+handlers. In kernels built with ``CONFIG_PREEMPT=n``, the RCU and
 RCU-sched APIs have identical implementations, while kernels built with
-``CONFIG_PREEMPTION=y`` provide a separate implementation for each.
+``CONFIG_PREEMPT=y`` provide a separate implementation for each.
 
-Note well that in ``CONFIG_PREEMPTION=y`` kernels,
+Note well that in ``CONFIG_PREEMPT=y`` kernels,
 ``rcu_read_lock_sched()`` and ``rcu_read_unlock_sched()`` disable and
 re-enable preemption, respectively. This means that if there was a
 preemption attempt during the RCU-sched read-side critical section,
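
(A minimal sketch of an RCU-sched reader; ``gp`` and ``do_something_quick()``
are illustrative placeholders::

   rcu_read_lock_sched();           /* Disables preemption on CONFIG_PREEMPT=y. */
   p = rcu_dereference_sched(gp);   /* sched-flavored ordered pointer fetch. */
   if (p)
           do_something_quick(p);   /* Must not block: preemption is off. */
   rcu_read_unlock_sched();         /* May enter the scheduler if preemption
                                     * was attempted while it was disabled. */

)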
@@ -2599,10 +2599,10 @@
 
 The tasks-RCU API is quite compact, consisting only of
 ``call_rcu_tasks()``, ``synchronize_rcu_tasks()``, and
-``rcu_barrier_tasks()``. In ``CONFIG_PREEMPTION=n`` kernels, trampolines
+``rcu_barrier_tasks()``. In ``CONFIG_PREEMPT=n`` kernels, trampolines
 cannot be preempted, so these APIs map to ``call_rcu()``,
 ``synchronize_rcu()``, and ``rcu_barrier()``, respectively. In
-``CONFIG_PREEMPTION=y`` kernels, trampolines can be preempted, and these
+``CONFIG_PREEMPT=y`` kernels, trampolines can be preempted, and these
 three APIs are therefore implemented by separate functions that check
 for voluntary context switches.
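
(A minimal sketch of the trampoline-freeing use case these APIs serve; every
identifier below is an illustrative assumption::

   old_tramp = active_tramp;
   WRITE_ONCE(active_tramp, new_tramp); /* Redirect future callers. */
   synchronize_rcu_tasks();             /* Wait until every task has done a
                                         * voluntary context switch, so none can
                                         * still be executing in old_tramp. */
   free_trampoline(old_tramp);

)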
 

--
Gitblit v1.6.2