@@ -9,7 +9,7 @@
 of as a replacement for reader-writer locking (among other things), but with
 very low-overhead readers that are immune to deadlock, priority inversion,
 and unbounded latency. RCU read-side critical sections are delimited
-by rcu_read_lock() and rcu_read_unlock(), which, in non-CONFIG_PREEMPTION
+by rcu_read_lock() and rcu_read_unlock(), which, in non-CONFIG_PREEMPT
 kernels, generate no code whatsoever.
 
 This means that RCU writers are unaware of the presence of concurrent
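For illustration only (not part of the patch): a minimal sketch of the
read-side critical section this hunk refers to. The struct foo and gptr
names are hypothetical, chosen just to show the delimiters in use.

	#include <linux/rcupdate.h>

	struct foo {
		int a;
	};

	static struct foo __rcu *gptr;	/* hypothetical RCU-protected pointer */

	static int reader(void)
	{
		struct foo *p;
		int a = -1;

		rcu_read_lock();		/* begin read-side critical section;
						 * emits no code in non-CONFIG_PREEMPT
						 * kernels */
		p = rcu_dereference(gptr);	/* safely fetch the protected pointer */
		if (p)
			a = p->a;
		rcu_read_unlock();		/* end read-side critical section */

		return a;
	}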
@@ -329,10 +329,10 @@
 	to smp_call_function() and further to smp_call_function_on_cpu(),
 	causing this latter to spin until the cross-CPU invocation of
 	rcu_barrier_func() has completed. This by itself would prevent
-	a grace period from completing on non-CONFIG_PREEMPTION kernels,
+	a grace period from completing on non-CONFIG_PREEMPT kernels,
 	since each CPU must undergo a context switch (or other quiescent
 	state) before the grace period can complete. However, this is
-	of no use in CONFIG_PREEMPTION kernels.
+	of no use in CONFIG_PREEMPT kernels.
 
 	Therefore, on_each_cpu() disables preemption across its call
 	to smp_call_function() and also across the local call to
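For illustration only (not part of the patch): a minimal sketch of the
on_each_cpu() structure the text describes; the real implementation in
kernel/smp.c differs in detail.

	#include <linux/smp.h>
	#include <linux/preempt.h>

	/*
	 * Sketch only. Disabling preemption means this CPU cannot
	 * context-switch, and therefore cannot pass through a
	 * quiescent state, until every CPU has run func -- even on
	 * CONFIG_PREEMPT kernels.
	 */
	static void sketch_on_each_cpu(smp_call_func_t func, void *info, int wait)
	{
		preempt_disable();
		smp_call_function(func, info, wait);	/* cross-CPU invocations */
		func(info);				/* local invocation */
		preempt_enable();
	}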