author     Will Deacon <will.deacon@arm.com>   2012-07-27 12:31:35 +0100
committer  Will Deacon <will.deacon@arm.com>   2012-11-05 16:25:25 +0000
commit     4b883160835faf38c9356f0885cf491a1e661e88 (patch)
tree       ca76045d4d5d33d5282e04c0c0926a85945ec8ed
parent     b5466f8728527a05a493cc4abe9e6f034a1bbaab (diff)
ARM: mm: avoid taking ASID spinlock on fastpath
When scheduling a new mm, we take a spinlock so that we can:
1. Safely allocate a new ASID, if required
2. Update our active_asids field without worrying about parallel
updates to reserved_asids
3. Ensure that we flush our local TLB, if required
However, this has the nasty effect of serialising context-switch across
all CPUs in the system. The usual (fast) case is where the next mm has
a valid ASID for the current generation. In that scenario, we can
avoid taking the lock and instead use atomic64_xchg to update the
active_asids variable for the current CPU. If a rollover occurs on
another CPU (which would take the lock), then when copying the
active_asids into the reserved_asids, another atomic64_xchg is used to
replace each active_asids entry with 0. The fast path can then detect
this case and fall back to spinning on the lock.
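The interaction above can be sketched in user-space C11 atomics (standing in for the kernel's atomic64_xchg); the names switch_mm_fastpath and rollover_harvest, the single-CPU active_asid slot, and the 8-bit ASID layout are illustrative assumptions, not the patch's actual code:

```c
#include <stdatomic.h>
#include <stdint.h>

#define ASID_BITS     8
#define ASID_MASK     ((1ULL << ASID_BITS) - 1)
#define GENERATION(x) ((x) & ~ASID_MASK)

/* Hypothetical single-CPU stand-ins for the kernel's per-CPU
 * active_asids slot and the global generation counter. */
static _Atomic uint64_t asid_generation = 1ULL << ASID_BITS;
static _Atomic uint64_t active_asid;

/* Fast path: if the mm's ASID belongs to the current generation,
 * publish it with an atomic exchange instead of taking the spinlock.
 * Returns 1 if the fast path succeeded, 0 if the caller must fall
 * back to the locked slow path. */
static int switch_mm_fastpath(uint64_t mm_asid)
{
	if (mm_asid &&
	    GENERATION(mm_asid) == atomic_load(&asid_generation)) {
		/* xchg rather than a plain store: a concurrent rollover
		 * swaps in 0, so a zero return value tells us it won. */
		if (atomic_exchange(&active_asid, mm_asid) != 0)
			return 1;	/* no rollover raced with us */
	}
	return 0;	/* stale generation or rollover: take the lock */
}

/* Rollover (done under the lock in the real code): harvest the active
 * ASID into reserved_asids and clear the slot with xchg, so any racing
 * fast path observes 0 and falls back to spinning on the lock. */
static uint64_t rollover_harvest(void)
{
	return atomic_exchange(&active_asid, 0);
}
```

The key design point the sketch captures is that both sides use an exchange on the same word, so exactly one of them observes the other's write: either the fast path sees a non-zero old value (no rollover) or it sees 0 and serialises on the lock.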
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>