2023-12-11 6778948f9de86c3cfaf36725a7c87dcff9ba247f
kernel/Documentation/RCU/rcubarrier.rst
@@ -9,7 +9,7 @@
 of as a replacement for read-writer locking (among other things), but with
 very low-overhead readers that are immune to deadlock, priority inversion,
 and unbounded latency. RCU read-side critical sections are delimited
-by rcu_read_lock() and rcu_read_unlock(), which, in non-CONFIG_PREEMPTION
+by rcu_read_lock() and rcu_read_unlock(), which, in non-CONFIG_PREEMPT
 kernels, generate no code whatsoever.

 This means that RCU writers are unaware of the presence of concurrent
@@ -329,10 +329,10 @@
 to smp_call_function() and further to smp_call_function_on_cpu(),
 causing this latter to spin until the cross-CPU invocation of
 rcu_barrier_func() has completed. This by itself would prevent
-a grace period from completing on non-CONFIG_PREEMPTION kernels,
+a grace period from completing on non-CONFIG_PREEMPT kernels,
 since each CPU must undergo a context switch (or other quiescent
 state) before the grace period can complete. However, this is
-of no use in CONFIG_PREEMPTION kernels.
+of no use in CONFIG_PREEMPT kernels.

 Therefore, on_each_cpu() disables preemption across its call
 to smp_call_function() and also across the local call to