author:    Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>  2010-10-06 08:36:59 +0000
committer: Benjamin Herrenschmidt <benh@kernel.crashing.org>    2010-11-29 15:48:19 +1100
commit:    99d86705253dcf728dbbec4d694a6764b6edb70c (patch)
tree:      e4c68adab6448463a77141c1797671417f58242c /arch/powerpc/mm/mmu_context_nohash.c
parent:    787d44caa5bca249d8781d21b626c417f1e3cfc4 (diff)
powerpc: Cleanup APIs for cpu/thread/core mappings
These APIs take a logical cpu number as input:

Change cpu_first_thread_in_core() to cpu_first_thread_sibling()
Change cpu_last_thread_in_core() to cpu_last_thread_sibling()

These APIs convert a core number (index) to logical cpu/thread numbers:

Add cpu_first_thread_of_core(int core)
Change cpu_thread_to_core() to cpu_core_index_of_thread(int cpu)
The goal is to make 'threads_per_core' accessible to the
pseries_energy module. Instead of adding an API to read
threads_per_core directly, provide a higher-level wrapper that
converts a logical cpu number to a core number.
The current APIs cpu_first_thread_in_core() and
cpu_last_thread_in_core() return a logical CPU number, while
cpu_thread_to_core() returns a core number (index), which is
not a logical CPU number. The new APIs are clearly named to
distinguish the 'core number' from the first and last 'logical
cpu number' in that core.
The new APIs cpu_{first,last}_thread_sibling() work on
logical cpu numbers, while cpu_first_thread_of_core() and
cpu_core_index_of_thread() work on the core index.
Example usage (4 threads per core system):
cpu_first_thread_sibling(5) = 4
cpu_last_thread_sibling(5) = 7
cpu_core_index_of_thread(5) = 1
cpu_first_thread_of_core(1) = 4
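
The arithmetic behind these numbers can be reproduced with a short
standalone program. This is an illustrative sketch only, not the kernel
implementation; it assumes threads_per_core is a power of two (4 here,
matching the example), and the helper names merely mirror the kernel APIs.

/*
 * Standalone illustration of the mapping arithmetic above (not the
 * kernel implementation).  Assumes threads_per_core is a power of
 * two; 4 here, matching the example.
 */
#include <stdio.h>

static const int threads_per_core = 4;	/* assumed for the example */

static int first_thread_sibling(int cpu)	/* cf. cpu_first_thread_sibling() */
{
	return cpu & ~(threads_per_core - 1);	/* 5 -> 4 */
}

static int last_thread_sibling(int cpu)		/* cf. cpu_last_thread_sibling() */
{
	return cpu | (threads_per_core - 1);	/* 5 -> 7 */
}

static int core_index_of_thread(int cpu)	/* cf. cpu_core_index_of_thread() */
{
	return cpu / threads_per_core;		/* 5 -> 1 */
}

static int first_thread_of_core(int core)	/* cf. cpu_first_thread_of_core() */
{
	return core * threads_per_core;		/* 1 -> 4 */
}

int main(void)
{
	printf("first_thread_sibling(5)  = %d\n", first_thread_sibling(5));
	printf("last_thread_sibling(5)   = %d\n", last_thread_sibling(5));
	printf("core_index_of_thread(5)  = %d\n", core_index_of_thread(5));
	printf("first_thread_of_core(1)  = %d\n", first_thread_of_core(1));
	return 0;
}

Compiled and run, it prints the four values listed above.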
In the pseries_energy module, cpu_core_index_of_thread() is used
in cpu_to_drc_index() and cpu_first_thread_of_core() is used in
drc_index_to_cpu().
Make API changes to a few callers. Export the symbols for use in modules.
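
For module-side use, a hedged sketch of how the two exported helpers
could be paired to convert between a logical cpu and a per-core index,
assuming the declarations live in asm/cputhreads.h; this is an
illustration only, not the pseries_energy implementation.

/*
 * Hypothetical module-side fragment (illustration only, not the
 * pseries_energy implementation): pairing the two exported helpers
 * to map a logical cpu to a per-core index and back.
 */
#include <asm/cputhreads.h>

static int example_cpu_to_core_index(int cpu)
{
	return cpu_core_index_of_thread(cpu);	/* e.g. cpu 5 -> core 1 */
}

static int example_core_index_to_cpu(int core)
{
	return cpu_first_thread_of_core(core);	/* e.g. core 1 -> cpu 4 */
}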
Signed-off-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Diffstat (limited to 'arch/powerpc/mm/mmu_context_nohash.c')
-rw-r--r--  arch/powerpc/mm/mmu_context_nohash.c | 12
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/mm/mmu_context_nohash.c b/arch/powerpc/mm/mmu_context_nohash.c
index 5ce99848d91..c0aab52da3a 100644
--- a/arch/powerpc/mm/mmu_context_nohash.c
+++ b/arch/powerpc/mm/mmu_context_nohash.c
@@ -111,8 +111,8 @@ static unsigned int steal_context_smp(unsigned int id)
 	 * a core map instead but this will do for now.
 	 */
 	for_each_cpu(cpu, mm_cpumask(mm)) {
-		for (i = cpu_first_thread_in_core(cpu);
-		     i <= cpu_last_thread_in_core(cpu); i++)
+		for (i = cpu_first_thread_sibling(cpu);
+		     i <= cpu_last_thread_sibling(cpu); i++)
 			__set_bit(id, stale_map[i]);
 		cpu = i - 1;
 	}
@@ -264,14 +264,14 @@ void switch_mmu_context(struct mm_struct *prev, struct mm_struct *next)
 	 */
 	if (test_bit(id, stale_map[cpu])) {
 		pr_hardcont(" | stale flush %d [%d..%d]",
-			    id, cpu_first_thread_in_core(cpu),
-			    cpu_last_thread_in_core(cpu));
+			    id, cpu_first_thread_sibling(cpu),
+			    cpu_last_thread_sibling(cpu));
 
 		local_flush_tlb_mm(next);
 
 		/* XXX This clear should ultimately be part of local_flush_tlb_mm */
-		for (i = cpu_first_thread_in_core(cpu);
-		     i <= cpu_last_thread_in_core(cpu); i++) {
+		for (i = cpu_first_thread_sibling(cpu);
+		     i <= cpu_last_thread_sibling(cpu); i++) {
 			__clear_bit(id, stale_map[i]);
 		}
 	}