From 151fecfb72a0d602dfe79790602ef64b4e241574 Mon Sep 17 00:00:00 2001
From: hc <hc@nodka.com>
Date: Mon, 19 Feb 2024 01:51:07 +0000
Subject: [PATCH] Documentation/RCU: rcubarrier: replace CONFIG_PREEMPTION with CONFIG_PREEMPT

---
 kernel/Documentation/RCU/rcubarrier.rst |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/Documentation/RCU/rcubarrier.rst b/kernel/Documentation/RCU/rcubarrier.rst
index 3b4a248..f64f441 100644
--- a/kernel/Documentation/RCU/rcubarrier.rst
+++ b/kernel/Documentation/RCU/rcubarrier.rst
@@ -9,7 +9,7 @@
 of as a replacement for reader-writer locking (among other things), but with
 very low-overhead readers that are immune to deadlock, priority inversion,
 and unbounded latency. RCU read-side critical sections are delimited
-by rcu_read_lock() and rcu_read_unlock(), which, in non-CONFIG_PREEMPTION
+by rcu_read_lock() and rcu_read_unlock(), which, in non-CONFIG_PREEMPT
 kernels, generate no code whatsoever.
 
 This means that RCU writers are unaware of the presence of concurrent
@@ -329,10 +329,10 @@
 	to smp_call_function() and further to smp_call_function_on_cpu(),
 	causing this latter to spin until the cross-CPU invocation of
 	rcu_barrier_func() has completed. This by itself would prevent
-	a grace period from completing on non-CONFIG_PREEMPTION kernels,
+	a grace period from completing on non-CONFIG_PREEMPT kernels,
 	since each CPU must undergo a context switch (or other quiescent
 	state) before the grace period can complete. However, this is
-	of no use in CONFIG_PREEMPTION kernels.
+	of no use in CONFIG_PREEMPT kernels.
 
 	Therefore, on_each_cpu() disables preemption across its call
 	to smp_call_function() and also across the local call to

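The first hunk above concerns the read-side delimiters rcu_read_lock() and
rcu_read_unlock(). As a minimal sketch of that pattern (not part of this
patch; struct foo, gp, and read_a() are hypothetical names chosen for
illustration):

	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct foo {
		int a;
		struct rcu_head rcu;
	};

	static struct foo __rcu *gp;

	static int read_a(void)
	{
		struct foo *p;
		int a = -1;

		rcu_read_lock();	/* generates no code in non-CONFIG_PREEMPT kernels */
		p = rcu_dereference(gp);
		if (p)
			a = p->a;
		rcu_read_unlock();	/* likewise generates no code */
		return a;
	}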
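The second hunk concerns rcu_barrier(), which waits until all outstanding
call_rcu() callbacks have been invoked. A minimal sketch of its usual use
case, unloading a module while callbacks may still be queued, reusing the
hypothetical struct foo and gp from the sketch above:

	#include <linux/module.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	static void free_foo(struct rcu_head *head)
	{
		kfree(container_of(head, struct foo, rcu));
	}

	/* Update side; assumes the caller serializes updates, hence the
	 * constant-true lockdep condition passed to rcu_dereference_protected(). */
	static void remove_foo(void)
	{
		struct foo *old = rcu_dereference_protected(gp, 1);

		rcu_assign_pointer(gp, NULL);
		if (old)
			call_rcu(&old->rcu, free_foo);
	}

	static int __init example_init(void)
	{
		return 0;
	}
	module_init(example_init);

	static void __exit example_exit(void)
	{
		remove_foo();
		/* Wait for all pending call_rcu() callbacks to finish, so
		 * free_foo() cannot run after the module text is gone. */
		rcu_barrier();
	}
	module_exit(example_exit);

	MODULE_LICENSE("GPL");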
--
Gitblit v1.6.2