Diffstat (limited to 'Documentation/RCU')
-rw-r--r--  Documentation/RCU/00-INDEX       |  10
-rw-r--r--  Documentation/RCU/NMI-RCU.txt    |  39
-rw-r--r--  Documentation/RCU/RTFP.txt      |  63
-rw-r--r--  Documentation/RCU/checklist.txt | 215
-rw-r--r--  Documentation/RCU/lockdep.txt   |  91
-rw-r--r--  Documentation/RCU/rcu.txt       |  48
-rw-r--r--  Documentation/RCU/stallwarn.txt | 106
-rw-r--r--  Documentation/RCU/torture.txt   |  22
-rw-r--r--  Documentation/RCU/trace.txt     |  35
-rw-r--r--  Documentation/RCU/whatisRCU.txt |  22
10 files changed, 465 insertions(+), 186 deletions(-)
diff --git a/Documentation/RCU/00-INDEX b/Documentation/RCU/00-INDEX
index 9bb62f7b89c..71b6f500ddb 100644
--- a/Documentation/RCU/00-INDEX
+++ b/Documentation/RCU/00-INDEX
@@ -6,16 +6,22 @@ checklist.txt
- Review Checklist for RCU Patches
listRCU.txt
- Using RCU to Protect Read-Mostly Linked Lists
+lockdep.txt
+ - RCU and lockdep checking
NMI-RCU.txt
- Using RCU to Protect Dynamic NMI Handlers
+rcubarrier.txt
+ - RCU and Unloadable Modules
+rculist_nulls.txt
+ - RCU list primitives for use with SLAB_DESTROY_BY_RCU
rcuref.txt
- Reference-count design for elements of lists/arrays protected by RCU
rcu.txt
- RCU Concepts
-rcubarrier.txt
- - Unloading modules that use RCU callbacks
RTFP.txt
- List of RCU papers (bibliography) going back to 1980.
+stallwarn.txt
+ - RCU CPU stall warnings (CONFIG_RCU_CPU_STALL_DETECTOR)
torture.txt
- RCU Torture Test Operation (CONFIG_RCU_TORTURE_TEST)
trace.txt
diff --git a/Documentation/RCU/NMI-RCU.txt b/Documentation/RCU/NMI-RCU.txt
index a6d32e65d22..a8536cb8809 100644
--- a/Documentation/RCU/NMI-RCU.txt
+++ b/Documentation/RCU/NMI-RCU.txt
@@ -34,7 +34,7 @@ NMI handler.
cpu = smp_processor_id();
++nmi_count(cpu);
- if (!rcu_dereference(nmi_callback)(regs, cpu))
+ if (!rcu_dereference_sched(nmi_callback)(regs, cpu))
default_do_nmi(regs);
nmi_exit();
@@ -47,12 +47,13 @@ function pointer. If this handler returns zero, do_nmi() invokes the
default_do_nmi() function to handle a machine-specific NMI. Finally,
preemption is restored.
-Strictly speaking, rcu_dereference() is not needed, since this code runs
-only on i386, which does not need rcu_dereference() anyway. However,
-it is a good documentation aid, particularly for anyone attempting to
-do something similar on Alpha.
+In theory, rcu_dereference_sched() is not needed, since this code runs
+only on i386, which in theory does not need rcu_dereference_sched()
+anyway. However, in practice it is a good documentation aid, particularly
+for anyone attempting to do something similar on Alpha or on systems
+with aggressive optimizing compilers.
-Quick Quiz: Why might the rcu_dereference() be necessary on Alpha,
+Quick Quiz: Why might the rcu_dereference_sched() be necessary on Alpha,
given that the code referenced by the pointer is read-only?
@@ -99,17 +100,21 @@ invoke irq_enter() and irq_exit() on NMI entry and exit, respectively.
Answer to Quick Quiz
- Why might the rcu_dereference() be necessary on Alpha, given
+ Why might the rcu_dereference_sched() be necessary on Alpha, given
that the code referenced by the pointer is read-only?
Answer: The caller to set_nmi_callback() might well have
- initialized some data that is to be used by the
- new NMI handler. In this case, the rcu_dereference()
- would be needed, because otherwise a CPU that received
- an NMI just after the new handler was set might see
- the pointer to the new NMI handler, but the old
- pre-initialized version of the handler's data.
-
- More important, the rcu_dereference() makes it clear
- to someone reading the code that the pointer is being
- protected by RCU.
+ initialized some data that is to be used by the new NMI
+ handler. In this case, the rcu_dereference_sched() would
+ be needed, because otherwise a CPU that received an NMI
+ just after the new handler was set might see the pointer
+ to the new NMI handler, but the old pre-initialized
+ version of the handler's data.
+
+ This same sad story can happen on other CPUs when using
+ a compiler with aggressive pointer-value speculation
+ optimizations.
+
+ More important, the rcu_dereference_sched() makes it
+ clear to someone reading the code that the pointer is
+ being protected by RCU-sched.
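To make the ordering concrete, here is a minimal sketch of the registration side that the quiz answer assumes. The set_nmi_callback() interface and the handler signature come from the example above; struct nmi_data, my_nmi_data, and install_nmi_handler() are hypothetical names:

	static struct nmi_data {
		unsigned long events;	/* hypothetical handler state */
	} *my_nmi_data;

	static int my_nmi_handler(struct pt_regs *regs, int cpu)
	{
		/* Safe: rcu_assign_pointer() below guarantees that the
		 * initialization is visible before the pointer is. */
		struct nmi_data *data = rcu_dereference_sched(my_nmi_data);

		data->events++;
		return 1;	/* nonzero: NMI handled */
	}

	static void install_nmi_handler(void)
	{
		struct nmi_data *data = kmalloc(sizeof(*data), GFP_KERNEL);

		/* Error handling omitted for brevity. */
		data->events = 0;			/* initialize first... */
		rcu_assign_pointer(my_nmi_data, data);	/* ...then publish... */
		set_nmi_callback(my_nmi_handler);	/* ...then enable. */
	}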
diff --git a/Documentation/RCU/RTFP.txt b/Documentation/RCU/RTFP.txt
index d2b85237c76..c43460dade0 100644
--- a/Documentation/RCU/RTFP.txt
+++ b/Documentation/RCU/RTFP.txt
@@ -25,10 +25,10 @@ to be referencing the data structure. However, this mechanism was not
optimized for modern computer systems, which is not surprising given
that these overheads were not so expensive in the mid-80s. Nonetheless,
passive serialization appears to be the first deferred-destruction
-mechanism to be used in production. Furthermore, the relevant patent has
-lapsed, so this approach may be used in non-GPL software, if desired.
-(In contrast, use of RCU is permitted only in software licensed under
-GPL. Sorry!!!)
+mechanism to be used in production. Furthermore, the relevant patent
+has lapsed, so this approach may be used in non-GPL software, if desired.
+(In contrast, implementation of RCU is permitted only in software licensed
+under either GPL or LGPL. Sorry!!!)
In 1990, Pugh [Pugh90] noted that explicitly tracking which threads
were reading a given data structure permitted deferred free to operate
@@ -150,6 +150,18 @@ preemptible RCU [PaulEMcKenney2007PreemptibleRCU], and the three-part
LWN "What is RCU?" series [PaulEMcKenney2007WhatIsRCUFundamentally,
PaulEMcKenney2008WhatIsRCUUsage, and PaulEMcKenney2008WhatIsRCUAPI].
+2008 saw a journal paper on real-time RCU [DinakarGuniguntala2008IBMSysJ],
+a history of how Linux changed RCU more than RCU changed Linux
+[PaulEMcKenney2008RCUOSR], and a design overview of hierarchical RCU
+[PaulEMcKenney2008HierarchicalRCU].
+
+2009 introduced user-level RCU algorithms [PaulEMcKenney2009MaliciousURCU],
+which Mathieu Desnoyers is now maintaining [MathieuDesnoyers2009URCU]
+[MathieuDesnoyersPhD]. TINY_RCU [PaulEMcKenney2009BloatWatchRCU] made
+its appearance, as did expedited RCU [PaulEMcKenney2009expeditedRCU].
+The problem of resizeable RCU-protected hash tables may now be on a path
+to a solution [JoshTriplett2009RPHash].
+
Bibtex Entries
@article{Kung80
@@ -594,7 +606,7 @@ Suparna Bhattacharya"
,Year="2006"
,pages="v2 123-138"
,note="Available:
-\url{http://www.linuxsymposium.org/2006/view_abstract.php?content_key=184}
+\url{http://www.linuxsymposium.org/2006/index_2006.php}
\url{http://www.rdrop.com/users/paulmck/RCU/OLSrtRCU.2006.08.11a.pdf}
[Viewed January 1, 2007]"
,annotation="
@@ -730,6 +742,11 @@ Revised:
"
}
+#
+# "What is RCU?" LWN series.
+#
+########################################################################
+
@article{DinakarGuniguntala2008IBMSysJ
,author="D. Guniguntala and P. E. McKenney and J. Triplett and J. Walpole"
,title="The read-copy-update mechanism for supporting real-time applications on shared-memory multiprocessor systems with {Linux}"
@@ -820,3 +837,39 @@ Revised:
Uniprocessor assumptions allow simplified RCU implementation.
"
}
+
+@unpublished{PaulEMcKenney2009expeditedRCU
+,Author="Paul E. McKenney"
+,Title="[{PATCH} -tip 0/3] expedited 'big hammer' {RCU} grace periods"
+,month="June"
+,day="25"
+,year="2009"
+,note="Available:
+\url{http://lkml.org/lkml/2009/6/25/306}
+[Viewed August 16, 2009]"
+,annotation="
+ First posting of expedited RCU to be accepted into -tip.
+"
+}
+
+@unpublished{JoshTriplett2009RPHash
+,Author="Josh Triplett"
+,Title="Scalable concurrent hash tables via relativistic programming"
+,month="September"
+,year="2009"
+,note="Linux Plumbers Conference presentation"
+,annotation="
+ RP fun with hash tables.
+"
+}
+
+@phdthesis{MathieuDesnoyersPhD
+, title = "Low-Impact Operating System Tracing"
+, author = "Mathieu Desnoyers"
+, school = "Ecole Polytechnique de Montr\'{e}al"
+, month = "December"
+, year = 2009
+,note="Available:
+\url{http://www.lttng.org/pub/thesis/desnoyers-dissertation-2009-12.pdf}
+[Viewed December 9, 2009]"
+}
diff --git a/Documentation/RCU/checklist.txt b/Documentation/RCU/checklist.txt
index 51525a30e8b..790d1a81237 100644
--- a/Documentation/RCU/checklist.txt
+++ b/Documentation/RCU/checklist.txt
@@ -8,13 +8,12 @@ would cause. This list is based on experiences reviewing such patches
over a rather long period of time, but improvements are always welcome!
0. Is RCU being applied to a read-mostly situation? If the data
- structure is updated more than about 10% of the time, then
- you should strongly consider some other approach, unless
- detailed performance measurements show that RCU is nonetheless
- the right tool for the job. Yes, you might think of RCU
- as simply cutting overhead off of the readers and imposing it
- on the writers. That is exactly why normal uses of RCU will
- do much more reading than updating.
+ structure is updated more than about 10% of the time, then you
+ should strongly consider some other approach, unless detailed
+ performance measurements show that RCU is nonetheless the right
+ tool for the job. Yes, RCU does reduce read-side overhead by
+ increasing write-side overhead, which is exactly why normal uses
+ of RCU will do much more reading than updating.
Another exception is where performance is not an issue, and RCU
provides a simpler implementation. An example of this situation
@@ -35,13 +34,13 @@ over a rather long period of time, but improvements are always welcome!
If you choose #b, be prepared to describe how you have handled
memory barriers on weakly ordered machines (pretty much all of
- them -- even x86 allows reads to be reordered), and be prepared
- to explain why this added complexity is worthwhile. If you
- choose #c, be prepared to explain how this single task does not
- become a major bottleneck on big multiprocessor machines (for
- example, if the task is updating information relating to itself
- that other tasks can read, there by definition can be no
- bottleneck).
+ them -- even x86 allows later loads to be reordered to precede
+ earlier stores), and be prepared to explain why this added
+ complexity is worthwhile. If you choose #c, be prepared to
+ explain how this single task does not become a major bottleneck on
+ big multiprocessor machines (for example, if the task is updating
+ information relating to itself that other tasks can read, there
+ by definition can be no bottleneck).
2. Do the RCU read-side critical sections make proper use of
rcu_read_lock() and friends? These primitives are needed
@@ -51,8 +50,10 @@ over a rather long period of time, but improvements are always welcome!
actuarial risk of your kernel.
As a rough rule of thumb, any dereference of an RCU-protected
- pointer must be covered by rcu_read_lock() or rcu_read_lock_bh()
- or by the appropriate update-side lock.
+ pointer must be covered by rcu_read_lock(), rcu_read_lock_bh(),
+ rcu_read_lock_sched(), or by the appropriate update-side lock.
+ Disabling of preemption can serve as rcu_read_lock_sched(), but
+ is less readable.
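	As a hedged illustration of that last point, the two read-side
	forms below are equivalent for RCU-sched, but the first documents
	intent (gp, struct foo, and do_something() are hypothetical):

		struct foo *p;

		/* Preferred: the RCU-sched read-side markers are explicit. */
		rcu_read_lock_sched();
		p = rcu_dereference_sched(gp);
		if (p)
			do_something(p->a);	/* hypothetical use */
		rcu_read_unlock_sched();

		/* Legal, but the reader's RCU role is hidden. */
		preempt_disable();
		p = rcu_dereference_sched(gp);
		if (p)
			do_something(p->a);
		preempt_enable();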
3. Does the update code tolerate concurrent accesses?
@@ -62,25 +63,27 @@ over a rather long period of time, but improvements are always welcome!
of ways to handle this concurrency, depending on the situation:
a. Use the RCU variants of the list and hlist update
- primitives to add, remove, and replace elements on an
- RCU-protected list. Alternatively, use the RCU-protected
- trees that have been added to the Linux kernel.
+ primitives to add, remove, and replace elements on
+ an RCU-protected list. Alternatively, use the other
+ RCU-protected data structures that have been added to
+ the Linux kernel.
This is almost always the best approach.
b. Proceed as in (a) above, but also maintain per-element
locks (that are acquired by both readers and writers)
that guard per-element state. Of course, fields that
- the readers refrain from accessing can be guarded by the
- update-side lock.
+ the readers refrain from accessing can be guarded by
+ some other lock acquired only by updaters, if desired.
This works quite well, also.
c. Make updates appear atomic to readers. For example,
- pointer updates to properly aligned fields will appear
- atomic, as will individual atomic primitives. Operations
- performed under a lock and sequences of multiple atomic
- primitives will -not- appear to be atomic.
+ pointer updates to properly aligned fields will
+ appear atomic, as will individual atomic primitives.
+ Sequences of operations performed under a lock will -not-
+ appear to be atomic to RCU readers, nor will sequences
+ of multiple atomic primitives.
This can work, but is starting to get a bit tricky.
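	For approach (c), a sketch of the usual copy-then-replace idiom,
	which makes a multi-field update appear atomic to readers
	(struct foo and update_foo() are hypothetical, and the caller
	is assumed to hold the update-side lock):

		struct foo {
			struct list_head list;
			int a;
			int b;
		};

		void update_foo(struct foo *old, int new_a, int new_b)
		{
			struct foo *new = kmalloc(sizeof(*new), GFP_KERNEL);

			/* Error handling omitted for brevity. */
			*new = *old;		/* copy the old element, */
			new->a = new_a;		/* update the copy, */
			new->b = new_b;
			list_replace_rcu(&old->list, &new->list);
			synchronize_rcu();	/* wait out old readers, */
			kfree(old);		/* and reclaim. */
		}

	Readers traversing the list see either the old element or the
	new one, never a half-updated element.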
@@ -98,9 +101,9 @@ over a rather long period of time, but improvements are always welcome!
a new structure containing updated values.
4. Weakly ordered CPUs pose special challenges. Almost all CPUs
- are weakly ordered -- even i386 CPUs allow reads to be reordered.
- RCU code must take all of the following measures to prevent
- memory-corruption problems:
+ are weakly ordered -- even x86 CPUs allow later loads to be
+ reordered to precede earlier stores. RCU code must take all of
+ the following measures to prevent memory-corruption problems:
a. Readers must maintain proper ordering of their memory
accesses. The rcu_dereference() primitive ensures that
@@ -113,14 +116,25 @@ over a rather long period of time, but improvements are always welcome!
The rcu_dereference() primitive is also an excellent
documentation aid, letting the person reading the code
know exactly which pointers are protected by RCU.
-
- The rcu_dereference() primitive is used by the various
- "_rcu()" list-traversal primitives, such as the
- list_for_each_entry_rcu(). Note that it is perfectly
- legal (if redundant) for update-side code to use
- rcu_dereference() and the "_rcu()" list-traversal
- primitives. This is particularly useful in code
- that is common to readers and updaters.
+ Please note that compilers can also reorder code, and
+ they are becoming increasingly aggressive about doing
+ just that. The rcu_dereference() primitive therefore
+ also prevents destructive compiler optimizations.
+
+ The rcu_dereference() primitive is used by the
+ various "_rcu()" list-traversal primitives, such
+ as the list_for_each_entry_rcu(). Note that it is
+ perfectly legal (if redundant) for update-side code to
+ use rcu_dereference() and the "_rcu()" list-traversal
+ primitives. This is particularly useful in code that
+ is common to readers and updaters. However, lockdep
+ will complain if you use rcu_dereference() outside
+ of an RCU read-side critical section. See lockdep.txt
+ to learn what to do about this.
+
+ Of course, neither rcu_dereference() nor the "_rcu()"
+ list-traversal primitives can substitute for a good
+ concurrency design coordinating among multiple updaters.
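	As a sketch of such reader/updater-shared code (struct foo,
	foo_list, and the key field are hypothetical):

		/* Callers are either in an RCU read-side critical
		 * section or hold the update-side lock; the _rcu()
		 * traversal is safe either way, though lockdep may
		 * need the annotations that lockdep.txt describes. */
		static struct foo *foo_find(int key)
		{
			struct foo *p;

			list_for_each_entry_rcu(p, &foo_list, list)
				if (p->key == key)
					return p;
			return NULL;
		}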
b. If the list macros are being used, the list_add_tail_rcu()
and list_add_rcu() primitives must be used in order
@@ -135,11 +149,14 @@ over a rather long period of time, but improvements are always welcome!
readers. Similarly, if the hlist macros are being used,
the hlist_del_rcu() primitive is required.
- The list_replace_rcu() primitive may be used to
- replace an old structure with a new one in an
- RCU-protected list.
+ The list_replace_rcu() and hlist_replace_rcu() primitives
+ may be used to replace an old structure with a new one
+ in their respective types of RCU-protected lists.
+
+ d. Rules similar to (4b) and (4c) apply to the "hlist_nulls"
+ type of RCU-protected linked lists.
- d. Updates must ensure that initialization of a given
+ e. Updates must ensure that initialization of a given
structure happens before pointers to that structure are
publicized. Use the rcu_assign_pointer() primitive
when publicizing a pointer to a structure that can
@@ -151,16 +168,31 @@ over a rather long period of time, but improvements are always welcome!
it cannot block.
6. Since synchronize_rcu() can block, it cannot be called from
- any sort of irq context. Ditto for synchronize_sched() and
- synchronize_srcu().
-
-7. If the updater uses call_rcu(), then the corresponding readers
- must use rcu_read_lock() and rcu_read_unlock(). If the updater
- uses call_rcu_bh(), then the corresponding readers must use
- rcu_read_lock_bh() and rcu_read_unlock_bh(). If the updater
- uses call_rcu_sched(), then the corresponding readers must
- disable preemption. Mixing things up will result in confusion
- and broken kernels.
+ any sort of irq context. The same rule applies for
+ synchronize_rcu_bh(), synchronize_sched(), synchronize_srcu(),
+ synchronize_rcu_expedited(), synchronize_rcu_bh_expedited(),
+ synchronize_sched_expedited(), and synchronize_srcu_expedited().
+
+ The expedited forms of these primitives have the same semantics
+ as the non-expedited forms, but expediting is both expensive
+ and unfriendly to real-time workloads. Use of the expedited
+ primitives should be restricted to rare configuration-change
+ operations that would not normally be undertaken while a real-time
+ workload is running.
+
+7. If the updater uses call_rcu() or synchronize_rcu(), then the
+ corresponding readers must use rcu_read_lock() and
+ rcu_read_unlock(). If the updater uses call_rcu_bh() or
+ synchronize_rcu_bh(), then the corresponding readers must
+ use rcu_read_lock_bh() and rcu_read_unlock_bh(). If the
+ updater uses call_rcu_sched() or synchronize_sched(), then
+ the corresponding readers must disable preemption, possibly
+ by calling rcu_read_lock_sched() and rcu_read_unlock_sched().
+ If the updater uses synchronize_srcu(), then the corresponding
+ readers must use srcu_read_lock() and srcu_read_unlock(),
+ and with the same srcu_struct. The rules for the expedited
+ primitives are the same as for their non-expedited counterparts.
+ Mixing things up will result in confusion and broken kernels.
One exception to this rule: rcu_read_lock() and rcu_read_unlock()
may be substituted for rcu_read_lock_bh() and rcu_read_unlock_bh()
@@ -212,6 +244,8 @@ over a rather long period of time, but improvements are always welcome!
e. Periodically invoke synchronize_rcu(), permitting a limited
number of updates per grace period.
+ The same cautions apply to call_rcu_bh() and call_rcu_sched().
+
9. All RCU list-traversal primitives, which include
rcu_dereference(), list_for_each_entry_rcu(),
list_for_each_continue_rcu(), and list_for_each_safe_rcu(),
@@ -219,17 +253,21 @@ over a rather long period of time, but improvements are always welcome!
must be protected by appropriate update-side locks. RCU
read-side critical sections are delimited by rcu_read_lock()
and rcu_read_unlock(), or by similar primitives such as
- rcu_read_lock_bh() and rcu_read_unlock_bh().
+ rcu_read_lock_bh() and rcu_read_unlock_bh(), in which case
+ the matching rcu_dereference() primitive must be used in order
+ to keep lockdep happy -- in this case, rcu_dereference_bh().
The reason that it is permissible to use RCU list-traversal
primitives when the update-side lock is held is that doing so
can be quite helpful in reducing code bloat when common code is
- shared between readers and updaters.
+ shared between readers and updaters. Additional primitives
+ are provided for this case, as discussed in lockdep.txt.
10. Conversely, if you are in an RCU read-side critical section,
and you don't hold the appropriate update-side lock, you -must-
use the "_rcu()" variants of the list macros. Failing to do so
- will break Alpha and confuse people reading your code.
+ will break Alpha, cause aggressive compilers to generate bad code,
+ and confuse people trying to read your code.
11. Note that synchronize_rcu() -only- guarantees to wait until
all currently executing rcu_read_lock()-protected RCU read-side
@@ -239,15 +277,21 @@ over a rather long period of time, but improvements are always welcome!
rcu_read_lock()-protected read-side critical sections, do -not-
use synchronize_rcu().
- If you want to wait for some of these other things, you might
- instead need to use synchronize_irq() or synchronize_sched().
+ Similarly, disabling preemption is not an acceptable substitute
+ for rcu_read_lock(). Code that attempts to use preemption
+ disabling where it should be using rcu_read_lock() will break
+ in real-time kernel builds.
+
+ If you want to wait for interrupt handlers, NMI handlers, and
+ code under the influence of preempt_disable(), you instead
+ need to use synchronize_irq() or synchronize_sched().
12. Any lock acquired by an RCU callback must be acquired elsewhere
with softirq disabled, e.g., via spin_lock_irqsave(),
spin_lock_bh(), etc. Failing to disable irq on a given
- acquisition of that lock will result in deadlock as soon as the
- RCU callback happens to interrupt that acquisition's critical
- section.
+ acquisition of that lock will result in deadlock as soon as
+ the RCU softirq handler happens to run your RCU callback while
+ interrupting that acquisition's critical section.
13. RCU callbacks can be and are executed in parallel. In many cases,
the callback code simply wrappers around kfree(), so that this
@@ -265,29 +309,30 @@ over a rather long period of time, but improvements are always welcome!
not the case, a self-spawning RCU callback would prevent the
victim CPU from ever going offline.)
-14. SRCU (srcu_read_lock(), srcu_read_unlock(), and synchronize_srcu())
- may only be invoked from process context. Unlike other forms of
- RCU, it -is- permissible to block in an SRCU read-side critical
- section (demarked by srcu_read_lock() and srcu_read_unlock()),
- hence the "SRCU": "sleepable RCU". Please note that if you
- don't need to sleep in read-side critical sections, you should
- be using RCU rather than SRCU, because RCU is almost always
- faster and easier to use than is SRCU.
+14. SRCU (srcu_read_lock(), srcu_read_unlock(), srcu_dereference(),
+ synchronize_srcu(), and synchronize_srcu_expedited()) may only
+ be invoked from process context. Unlike other forms of RCU, it
+ -is- permissible to block in an SRCU read-side critical section
+ (demarked by srcu_read_lock() and srcu_read_unlock()), hence the
+ "SRCU": "sleepable RCU". Please note that if you don't need
+ to sleep in read-side critical sections, you should be using
+ RCU rather than SRCU, because RCU is almost always faster and
+ easier to use than is SRCU.
Also unlike other forms of RCU, explicit initialization
and cleanup is required via init_srcu_struct() and
cleanup_srcu_struct(). These are passed a "struct srcu_struct"
that defines the scope of a given SRCU domain. Once initialized,
the srcu_struct is passed to srcu_read_lock(), srcu_read_unlock()
- and synchronize_srcu(). A given synchronize_srcu() waits only
- for SRCU read-side critical sections governed by srcu_read_lock()
- and srcu_read_unlock() calls that have been passd the same
- srcu_struct. This property is what makes sleeping read-side
- critical sections tolerable -- a given subsystem delays only
- its own updates, not those of other subsystems using SRCU.
- Therefore, SRCU is less prone to OOM the system than RCU would
- be if RCU's read-side critical sections were permitted to
- sleep.
+ synchronize_srcu(), and synchronize_srcu_expedited(). A given
+ synchronize_srcu() waits only for SRCU read-side critical
+ sections governed by srcu_read_lock() and srcu_read_unlock()
+ calls that have been passed the same srcu_struct. This property
+ is what makes sleeping read-side critical sections tolerable --
+ a given subsystem delays only its own updates, not those of other
+ subsystems using SRCU. Therefore, SRCU is less prone to OOM the
+ system than RCU would be if RCU's read-side critical sections
+ were permitted to sleep.
The ability to sleep in read-side critical sections does not
come for free. First, corresponding srcu_read_lock() and
@@ -300,8 +345,8 @@ over a rather long period of time, but improvements are always welcome!
requiring SRCU's read-side deadlock immunity or low read-side
realtime latency.
- Note that, rcu_assign_pointer() and rcu_dereference() relate to
- SRCU just as they do to other forms of RCU.
+ Note that rcu_assign_pointer() relates to SRCU just as it does
+ to other forms of RCU.
15. The whole point of call_rcu(), synchronize_rcu(), and friends
is to wait until all pre-existing readers have finished before
@@ -311,12 +356,12 @@ over a rather long period of time, but improvements are always welcome!
destructive operation, and -only- -then- invoke call_rcu(),
synchronize_rcu(), or friends.
- Because these primitives only wait for pre-existing readers,
- it is the caller's responsibility to guarantee safety to
- any subsequent readers.
+ Because these primitives only wait for pre-existing readers, it
+ is the caller's responsibility to guarantee that any subsequent
+ readers will execute safely.
-16. The various RCU read-side primitives do -not- contain memory
- barriers. The CPU (and in some cases, the compiler) is free
- to reorder code into and out of RCU read-side critical sections.
- It is the responsibility of the RCU update-side primitives to
- deal with this.
+16. The various RCU read-side primitives do -not- necessarily contain
+ memory barriers. You should therefore plan for the CPU
+ and the compiler to freely reorder code into and out of RCU
+ read-side critical sections. It is the responsibility of the
+ RCU update-side primitives to deal with this.
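
Pulling item 15's rule into code, a hedged sketch of the usual remove-then-defer-free sequence (struct foo, foo_lock, and foo_remove() are hypothetical):

	static DEFINE_SPINLOCK(foo_lock);

	struct foo {
		struct list_head list;
		struct rcu_head rcu;
	};

	static void foo_reclaim(struct rcu_head *rp)
	{
		kfree(container_of(rp, struct foo, rcu));
	}

	void foo_remove(struct foo *p)
	{
		spin_lock(&foo_lock);
		list_del_rcu(&p->list);		/* remove first... */
		spin_unlock(&foo_lock);
		call_rcu(&p->rcu, foo_reclaim);	/* ...then defer the free. */
	}

Reversing the order -- freeing and only then waiting -- would hand pre-existing readers a freed structure.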
diff --git a/Documentation/RCU/lockdep.txt b/Documentation/RCU/lockdep.txt
new file mode 100644
index 00000000000..d7a49b2f699
--- /dev/null
+++ b/Documentation/RCU/lockdep.txt
@@ -0,0 +1,91 @@
+RCU and lockdep checking
+
+All flavors of RCU have lockdep checking available, so that lockdep is
+aware of when each task enters and leaves any flavor of RCU read-side
+critical section. Each flavor of RCU is tracked separately (but note
+that this is not the case in 2.6.32 and earlier). This allows lockdep's
+tracking to include RCU state, which can sometimes help when debugging
+deadlocks and the like.
+
+In addition, RCU provides the following primitives that check lockdep's
+state:
+
+ rcu_read_lock_held() for normal RCU.
+ rcu_read_lock_bh_held() for RCU-bh.
+ rcu_read_lock_sched_held() for RCU-sched.
+ srcu_read_lock_held() for SRCU.
+
+These functions are conservative, and will therefore return 1 if they
+aren't certain (for example, if CONFIG_DEBUG_LOCK_ALLOC is not set).
+This prevents things like WARN_ON(!rcu_read_lock_held()) from giving false
+positives when lockdep is disabled.
+
+In addition, a separate kernel config parameter CONFIG_PROVE_RCU enables
+checking of rcu_dereference() primitives:
+
+ rcu_dereference(p):
+ Check for RCU read-side critical section.
+ rcu_dereference_bh(p):
+ Check for RCU-bh read-side critical section.
+ rcu_dereference_sched(p):
+ Check for RCU-sched read-side critical section.
+ srcu_dereference(p, sp):
+ Check for SRCU read-side critical section.
+ rcu_dereference_check(p, c):
+ Use explicit check expression "c". This is useful in
+ code that is invoked by both readers and updaters.
+ rcu_dereference_raw(p):
+ Don't check. (Use sparingly, if at all.)
+ rcu_dereference_protected(p, c):
+ Use explicit check expression "c", and omit all barriers
+ and compiler constraints. This is useful when the data
+ structure cannot change, for example, in code that is
+ invoked only by updaters.
+ rcu_access_pointer(p):
+ Return the value of the pointer and omit all barriers,
+ but retain the compiler constraints that prevent duplicating
+ or coalescing. This is useful when testing the
+ value of the pointer itself, for example, against NULL.
+
+The rcu_dereference_check() check expression can be any boolean
+expression, but would normally include one of the rcu_read_lock_held()
+family of functions and a lockdep expression. For a moderately ornate
+example, consider the following:
+
+ file = rcu_dereference_check(fdt->fd[fd],
+ rcu_read_lock_held() ||
+ lockdep_is_held(&files->file_lock) ||
+ atomic_read(&files->count) == 1);
+
+This expression picks up the pointer "fdt->fd[fd]" in an RCU-safe manner,
+and, if CONFIG_PROVE_RCU is configured, verifies that this expression
+is used in:
+
+1. An RCU read-side critical section, or
+2. with files->file_lock held, or
+3. on an unshared files_struct.
+
+In case (1), the pointer is picked up in an RCU-safe manner for vanilla
+RCU read-side critical sections, in case (2) the ->file_lock prevents
+any change from taking place, and finally, in case (3) the current task
+is the only task accessing the file_struct, again preventing any change
+from taking place. If the above statement was invoked only from updater
+code, it could instead be written as follows:
+
+ file = rcu_dereference_protected(fdt->fd[fd],
+ lockdep_is_held(&files->file_lock) ||
+ atomic_read(&files->count) == 1);
+
+This would verify cases #2 and #3 above, and furthermore lockdep would
+complain if this was used in an RCU read-side critical section unless one
+of these two cases held. Because rcu_dereference_protected() omits all
+barriers and compiler constraints, it generates better code than do the
+other flavors of rcu_dereference(). On the other hand, it is illegal
+to use rcu_dereference_protected() if either the RCU-protected pointer
+or the RCU-protected data that it points to can change concurrently.
+
+There are currently only "universal" versions of the rcu_assign_pointer()
+and RCU list-/tree-traversal primitives, which do not (yet) check for
+being in an RCU read-side critical section. In the future, separate
+versions of these primitives might be created.
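
As a hedged illustration of the rcu_access_pointer() case above (gp and do_something() are hypothetical):

	/* Legal outside a read-side critical section: only the pointer's
	 * value is examined, the pointed-to object is never touched. */
	if (rcu_access_pointer(gp) == NULL)
		return -ENOENT;

	/* Dereferencing still requires a reader and a full primitive. */
	rcu_read_lock();
	p = rcu_dereference(gp);
	if (p)
		do_something(p);
	rcu_read_unlock();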
diff --git a/Documentation/RCU/rcu.txt b/Documentation/RCU/rcu.txt
index 2a23523ce47..31852705b58 100644
--- a/Documentation/RCU/rcu.txt
+++ b/Documentation/RCU/rcu.txt
@@ -75,6 +75,8 @@ o I hear that RCU is patented? What is with that?
search for the string "Patent" in RTFP.txt to find them.
Of these, one was allowed to lapse by the assignee, and the
others have been contributed to the Linux kernel under GPL.
+ There are now also LGPL implementations of user-level RCU
+ available (http://lttng.org/?q=node/18).
o I hear that RCU needs work in order to support realtime kernels?
@@ -91,48 +93,4 @@ o Where can I find more information on RCU?
o What are all these files in this directory?
-
- NMI-RCU.txt
-
- Describes how to use RCU to implement dynamic
- NMI handlers, which can be revectored on the fly,
- without rebooting.
-
- RTFP.txt
-
- List of RCU-related publications and web sites.
-
- UP.txt
-
- Discussion of RCU usage in UP kernels.
-
- arrayRCU.txt
-
- Describes how to use RCU to protect arrays, with
- resizeable arrays whose elements reference other
- data structures being of the most interest.
-
- checklist.txt
-
- Lists things to check for when inspecting code that
- uses RCU.
-
- listRCU.txt
-
- Describes how to use RCU to protect linked lists.
- This is the simplest and most common use of RCU
- in the Linux kernel.
-
- rcu.txt
-
- You are reading it!
-
- rcuref.txt
-
- Describes how to combine use of reference counts
- with RCU.
-
- whatisRCU.txt
-
- Overview of how the RCU implementation works. Along
- the way, presents a conceptual view of RCU.
+ See 00-INDEX for the list.
diff --git a/Documentation/RCU/stallwarn.txt b/Documentation/RCU/stallwarn.txt
new file mode 100644
index 00000000000..44c6dcc93d6
--- /dev/null
+++ b/Documentation/RCU/stallwarn.txt
@@ -0,0 +1,106 @@
+Using RCU's CPU Stall Detector
+
+The CONFIG_RCU_CPU_STALL_DETECTOR kernel config parameter enables
+RCU's CPU stall detector, which detects conditions that unduly delay
+RCU grace periods. The stall detector's idea of what constitutes
+"unduly delayed" is controlled by a set of C preprocessor macros:
+
+RCU_SECONDS_TILL_STALL_CHECK
+
+ This macro defines the period of time that RCU will wait from
+ the beginning of a grace period until it issues an RCU CPU
+ stall warning. This time period is normally ten seconds.
+
+RCU_SECONDS_TILL_STALL_RECHECK
+
+ This macro defines the period of time that RCU will wait after
+ issuing a stall warning until it issues another stall warning
+ for the same stall. This time period is normally set to thirty
+ seconds.
+
+RCU_STALL_RAT_DELAY
+
+ The CPU stall detector tries to make the offending CPU print its
+ own warnings, as this often gives better-quality stack traces.
+ However, if the offending CPU does not detect its own stall in
+ the number of jiffies specified by RCU_STALL_RAT_DELAY, then
+ some other CPU will complain. This delay is normally set to
+ two jiffies.
+
+When a CPU detects that it is stalling, it will print a message similar
+to the following:
+
+INFO: rcu_sched_state detected stall on CPU 5 (t=2500 jiffies)
+
+This message indicates that CPU 5 detected that it was causing a stall,
+and that the stall was affecting RCU-sched. This message will normally be
+followed by a stack dump of the offending CPU. On TREE_RCU kernel builds,
+RCU and RCU-sched are implemented by the same underlying mechanism,
+while on TREE_PREEMPT_RCU kernel builds, RCU is instead implemented
+by rcu_preempt_state.
+
+On the other hand, if the offending CPU fails to print out a stall-warning
+message quickly enough, some other CPU will print a message similar to
+the following:
+
+INFO: rcu_bh_state detected stalls on CPUs/tasks: { 3 5 } (detected by 2, 2502 jiffies)
+
+This message indicates that CPU 2 detected that CPUs 3 and 5 were both
+causing stalls, and that the stall was affecting RCU-bh. This message
+will normally be followed by stack dumps for each CPU. Please note that
+TREE_PREEMPT_RCU builds can be stalled by tasks as well as by CPUs,
+and that the tasks will be indicated by PID, for example, "P3421".
+It is even possible for an rcu_preempt_state stall to be caused by both
+CPUs -and- tasks, in which case the offending CPUs and tasks will all
+be called out in the list.
+
+Finally, if the grace period ends just as the stall warning starts
+printing, there will be a spurious stall-warning message:
+
+INFO: rcu_bh_state detected stalls on CPUs/tasks: { } (detected by 4, 2502 jiffies)
+
+This is rare, but does happen from time to time in real life.
+
+So your kernel printed an RCU CPU stall warning. The next question is
+"What caused it?" The following problems can result in RCU CPU stall
+warnings:
+
+o A CPU looping in an RCU read-side critical section.
+
+o A CPU looping with interrupts disabled. This condition can
+ result in RCU-sched and RCU-bh stalls.
+
+o A CPU looping with preemption disabled. This condition can
+ result in RCU-sched stalls and, if ksoftirqd is in use, RCU-bh
+ stalls.
+
+o A CPU looping with bottom halves disabled. This condition can
+ result in RCU-sched and RCU-bh stalls.
+
+o For !CONFIG_PREEMPT kernels, a CPU looping anywhere in the kernel
+ without invoking schedule().
+
+o A bug in the RCU implementation.
+
+o A hardware failure. This is quite unlikely, but has occurred
+ at least once in real life. A CPU failed in a running system,
+ becoming unresponsive, but not causing an immediate crash.
+ This resulted in a series of RCU CPU stall warnings, eventually
+ leading to the realization that the CPU had failed.
+
+The RCU, RCU-sched, and RCU-bh implementations have CPU stall
+warnings. SRCU does not have its own CPU stall warnings, but its
+calls to synchronize_sched() will result in RCU-sched detecting
+RCU-sched-related CPU stalls. Please note that RCU only detects
+CPU stalls when there is a grace period in progress. No grace period,
+no CPU stall warnings.
+
+To diagnose the cause of the stall, inspect the stack traces.
+The offending function will usually be near the top of the stack.
+If you have a series of stall warnings from a single extended stall,
+comparing the stack traces can often help determine where the stall
+is occurring, which will usually be in the function nearest the top of
+that portion of the stack which remains the same from trace to trace.
+If you can reliably trigger the stall, ftrace can be quite helpful.
+
+RCU bugs can often be debugged with the help of CONFIG_RCU_TRACE.
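
For concreteness, a sketch of the first bullet above -- code guaranteed to draw a stall warning once a grace period starts (deliberately broken; the "done" flag is hypothetical):

	rcu_read_lock();
	while (!done)		/* if "done" never becomes true... */
		cpu_relax();	/* ...this reader never completes, grace
				   periods stall, and an INFO message like
				   those above appears after
				   RCU_SECONDS_TILL_STALL_CHECK. */
	rcu_read_unlock();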
diff --git a/Documentation/RCU/torture.txt b/Documentation/RCU/torture.txt
index 9dba3bb90e6..5d9016795fd 100644
--- a/Documentation/RCU/torture.txt
+++ b/Documentation/RCU/torture.txt
@@ -30,6 +30,18 @@ MODULE PARAMETERS
This module has the following parameters:
+fqs_duration Duration (in microseconds) of artificially induced bursts
+ of force_quiescent_state() invocations. In RCU
+ implementations having force_quiescent_state(), these
+ bursts help force races between forcing a given grace
+ period and that grace period ending on its own.
+
+fqs_holdoff Holdoff time (in microseconds) between consecutive calls
+ to force_quiescent_state() within a burst.
+
+fqs_stutter Wait time (in seconds) between consecutive bursts
+ of calls to force_quiescent_state().
+
irqreaders Says to invoke RCU readers from irq level. This is currently
done via timers. Defaults to "1" for variants of RCU that
permit this. (Or, more accurately, variants of RCU that do
@@ -170,16 +182,6 @@ Similarly, sched_expedited RCU provides the following:
sched_expedited-torture: Reader Pipe: 12660320201 95875 0 0 0 0 0 0 0 0 0
sched_expedited-torture: Reader Batch: 12660424885 0 0 0 0 0 0 0 0 0 0
sched_expedited-torture: Free-Block Circulation: 1090795 1090795 1090794 1090793 1090792 1090791 1090790 1090789 1090788 1090787 0
- state: -1 / 0:0 3:0 4:0
-
-As before, the first four lines are similar to those for RCU.
-The last line shows the task-migration state. The first number is
--1 if synchronize_sched_expedited() is idle, -2 if in the process of
-posting wakeups to the migration kthreads, and N when waiting on CPU N.
-Each of the colon-separated fields following the "/" is a CPU:state pair.
-Valid states are "0" for idle, "1" for waiting for quiescent state,
-"2" for passed through quiescent state, and "3" when a race with a
-CPU-hotplug event forces use of the synchronize_sched() primitive.
USAGE
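
For example, a hypothetical invocation exercising the new fqs_* parameters might look like the following (values are illustrative only):

	modprobe rcutorture fqs_duration=2000 fqs_holdoff=100 fqs_stutter=5

This would induce 2000-microsecond bursts of force_quiescent_state() calls 100 microseconds apart, with five seconds between bursts.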
diff --git a/Documentation/RCU/trace.txt b/Documentation/RCU/trace.txt
index 8608fd85e92..efd8cc95c06 100644
--- a/Documentation/RCU/trace.txt
+++ b/Documentation/RCU/trace.txt
@@ -256,23 +256,23 @@ o Each element of the form "1/1 0:127 ^0" represents one struct
The output of "cat rcu/rcu_pending" looks as follows:
rcu_sched:
- 0 np=255892 qsp=53936 cbr=0 cng=14417 gpc=10033 gps=24320 nf=6445 nn=146741
- 1 np=261224 qsp=54638 cbr=0 cng=25723 gpc=16310 gps=2849 nf=5912 nn=155792
- 2 np=237496 qsp=49664 cbr=0 cng=2762 gpc=45478 gps=1762 nf=1201 nn=136629
- 3 np=236249 qsp=48766 cbr=0 cng=286 gpc=48049 gps=1218 nf=207 nn=137723
- 4 np=221310 qsp=46850 cbr=0 cng=26 gpc=43161 gps=4634 nf=3529 nn=123110
- 5 np=237332 qsp=48449 cbr=0 cng=54 gpc=47920 gps=3252 nf=201 nn=137456
- 6 np=219995 qsp=46718 cbr=0 cng=50 gpc=42098 gps=6093 nf=4202 nn=120834
- 7 np=249893 qsp=49390 cbr=0 cng=72 gpc=38400 gps=17102 nf=41 nn=144888
+ 0 np=255892 qsp=53936 rpq=85 cbr=0 cng=14417 gpc=10033 gps=24320 nf=6445 nn=146741
+ 1 np=261224 qsp=54638 rpq=33 cbr=0 cng=25723 gpc=16310 gps=2849 nf=5912 nn=155792
+ 2 np=237496 qsp=49664 rpq=23 cbr=0 cng=2762 gpc=45478 gps=1762 nf=1201 nn=136629
+ 3 np=236249 qsp=48766 rpq=98 cbr=0 cng=286 gpc=48049 gps=1218 nf=207 nn=137723
+ 4 np=221310 qsp=46850 rpq=7 cbr=0 cng=26 gpc=43161 gps=4634 nf=3529 nn=123110
+ 5 np=237332 qsp=48449 rpq=9 cbr=0 cng=54 gpc=47920 gps=3252 nf=201 nn=137456
+ 6 np=219995 qsp=46718 rpq=12 cbr=0 cng=50 gpc=42098 gps=6093 nf=4202 nn=120834
+ 7 np=249893 qsp=49390 rpq=42 cbr=0 cng=72 gpc=38400 gps=17102 nf=41 nn=144888
rcu_bh:
- 0 np=146741 qsp=1419 cbr=0 cng=6 gpc=0 gps=0 nf=2 nn=145314
- 1 np=155792 qsp=12597 cbr=0 cng=0 gpc=4 gps=8 nf=3 nn=143180
- 2 np=136629 qsp=18680 cbr=0 cng=0 gpc=7 gps=6 nf=0 nn=117936
- 3 np=137723 qsp=2843 cbr=0 cng=0 gpc=10 gps=7 nf=0 nn=134863
- 4 np=123110 qsp=12433 cbr=0 cng=0 gpc=4 gps=2 nf=0 nn=110671
- 5 np=137456 qsp=4210 cbr=0 cng=0 gpc=6 gps=5 nf=0 nn=133235
- 6 np=120834 qsp=9902 cbr=0 cng=0 gpc=6 gps=3 nf=2 nn=110921
- 7 np=144888 qsp=26336 cbr=0 cng=0 gpc=8 gps=2 nf=0 nn=118542
+ 0 np=146741 qsp=1419 rpq=6 cbr=0 cng=6 gpc=0 gps=0 nf=2 nn=145314
+ 1 np=155792 qsp=12597 rpq=3 cbr=0 cng=0 gpc=4 gps=8 nf=3 nn=143180
+ 2 np=136629 qsp=18680 rpq=1 cbr=0 cng=0 gpc=7 gps=6 nf=0 nn=117936
+ 3 np=137723 qsp=2843 rpq=0 cbr=0 cng=0 gpc=10 gps=7 nf=0 nn=134863
+ 4 np=123110 qsp=12433 rpq=0 cbr=0 cng=0 gpc=4 gps=2 nf=0 nn=110671
+ 5 np=137456 qsp=4210 rpq=1 cbr=0 cng=0 gpc=6 gps=5 nf=0 nn=133235
+ 6 np=120834 qsp=9902 rpq=2 cbr=0 cng=0 gpc=6 gps=3 nf=2 nn=110921
+ 7 np=144888 qsp=26336 rpq=0 cbr=0 cng=0 gpc=8 gps=2 nf=0 nn=118542
As always, this is once again split into "rcu_sched" and "rcu_bh"
portions, with CONFIG_TREE_PREEMPT_RCU kernels having an additional
@@ -284,6 +284,9 @@ o "np" is the number of times that __rcu_pending() has been invoked
o "qsp" is the number of times that the RCU was waiting for a
quiescent state from this CPU.
+o "rpq" is the number of times that the CPU had passed through
+ a quiescent state, but not yet reported it to RCU.
+
o "cbr" is the number of times that this CPU had RCU callbacks
that had passed through a grace period, and were thus ready
to be invoked.
diff --git a/Documentation/RCU/whatisRCU.txt b/Documentation/RCU/whatisRCU.txt
index d542ca243b8..cfaac34c455 100644
--- a/Documentation/RCU/whatisRCU.txt
+++ b/Documentation/RCU/whatisRCU.txt
@@ -323,14 +323,17 @@ used as follows:
Defer Protect
a. synchronize_rcu() rcu_read_lock() / rcu_read_unlock()
- call_rcu()
+ call_rcu() rcu_dereference()
b. call_rcu_bh() rcu_read_lock_bh() / rcu_read_unlock_bh()
+ rcu_dereference_bh()
-c. synchronize_sched() preempt_disable() / preempt_enable()
+c. synchronize_sched() rcu_read_lock_sched() / rcu_read_unlock_sched()
+ preempt_disable() / preempt_enable()
local_irq_save() / local_irq_restore()
hardirq enter / hardirq exit
NMI enter / NMI exit
+ rcu_dereference_sched()
These three mechanisms are used as follows:
@@ -780,9 +783,8 @@ Linux-kernel source code, but it helps to have a full list of the
APIs, since there does not appear to be a way to categorize them
in docbook. Here is the list, by category.
-RCU pointer/list traversal:
+RCU list traversal:
- rcu_dereference
list_for_each_entry_rcu
hlist_for_each_entry_rcu
hlist_nulls_for_each_entry_rcu
@@ -808,7 +810,7 @@ RCU: Critical sections Grace period Barrier
rcu_read_lock synchronize_net rcu_barrier
rcu_read_unlock synchronize_rcu
- synchronize_rcu_expedited
+ rcu_dereference synchronize_rcu_expedited
call_rcu
@@ -816,7 +818,7 @@ bh: Critical sections Grace period Barrier
rcu_read_lock_bh call_rcu_bh rcu_barrier_bh
rcu_read_unlock_bh synchronize_rcu_bh
- synchronize_rcu_bh_expedited
+ rcu_dereference_bh synchronize_rcu_bh_expedited
sched: Critical sections Grace period Barrier
@@ -825,17 +827,25 @@ sched: Critical sections Grace period Barrier
rcu_read_unlock_sched call_rcu_sched
[preempt_disable] synchronize_sched_expedited
[and friends]
+ rcu_dereference_sched
SRCU: Critical sections Grace period Barrier
srcu_read_lock synchronize_srcu N/A
srcu_read_unlock synchronize_srcu_expedited
+ srcu_dereference
SRCU: Initialization/cleanup
init_srcu_struct
cleanup_srcu_struct
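
Tying the SRCU entries above together, a hedged lifecycle sketch (my_srcu, my_data, and struct foo are hypothetical):

	static struct srcu_struct my_srcu;
	static struct foo *my_data;

	init_srcu_struct(&my_srcu);	/* once, at subsystem init */

	idx = srcu_read_lock(&my_srcu);	/* reader: may block inside */
	p = srcu_dereference(my_data, &my_srcu);
	/* ... possibly sleeping while using p ... */
	srcu_read_unlock(&my_srcu, idx);

	synchronize_srcu(&my_srcu);	/* updater: waits only for
					   readers of my_srcu */

	cleanup_srcu_struct(&my_srcu);	/* once, at teardown */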
+All: lockdep-checked RCU-protected pointer access
+
+ rcu_dereference_check
+ rcu_dereference_protected
+ rcu_access_pointer
+
See the comment headers in the source code (or the docbook generated
from them) for more information.