author | Linus Torvalds <torvalds@linux-foundation.org> | 2012-12-19 07:52:48 -0800
---|---|---
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2012-12-19 07:52:48 -0800
commit | 7f2de8171ddf28fdb2ca7f9a683ee1207849f718 |
tree | d89da981ac762de3fd32e1c08ddc8041f3c37519 /arch/Kconfig |
parent | 59771079c18c44e39106f0f30054025acafadb41 |
parent | cf66bb93e0f75e0a4ba1ec070692618fa028e994 |
Merge tag 'byteswap-for-linus-20121219' of git://git.infradead.org/users/dwmw2/byteswap
Pull preparatory gcc intrinsics bswap patch from David Woodhouse:
"This single patch is effectively a no-op for now. It enables
architectures to opt in to using GCC's __builtin_bswapXX() intrinsics
for byteswapping, and if we merge this now then the architecture
maintainers can enable it for their arch during the next cycle without
dependency issues.
It's worth making it a per-arch opt-in, because although in *theory*
the compiler should never do worse than hand-coded assembler (and of
course it also ought to do a lot better on platforms like Atom and
PowerPC which have load-and-swap or store-and-swap instructions), that
isn't always the case. See
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46453
for example."
* tag 'byteswap-for-linus-20121219' of git://git.infradead.org/users/dwmw2/byteswap:
byteorder: allow arch to opt to use GCC intrinsics for byteswapping
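
To make the trade-off described above concrete, here is a small illustration. It is not taken from the patch; the function names are made up for the example. It shows the hand-rolled byteswap that the intrinsics replace, and the equivalent call to the GCC builtin, which the compiler is free to fold into a load-and-swap or store-and-swap instruction where the target has one.

```c
/* Illustration only -- not from the patch; names are hypothetical. */
#include <stdint.h>

/* The classic open-coded swap many __arch_bswapXX()-style helpers expand to. */
static inline uint32_t swab32_open_coded(uint32_t x)
{
	return ((x & 0x000000ffU) << 24) |
	       ((x & 0x0000ff00U) <<  8) |
	       ((x & 0x00ff0000U) >>  8) |
	       ((x & 0xff000000U) >> 24);
}

/*
 * With the intrinsic (GCC >= 4.4) the compiler sees a plain byteswap and can
 * merge it with an adjacent load or store, e.g. emitting movbe on Atom or
 * lwbrx/stwbrx on PowerPC.
 */
static inline uint32_t swab32_builtin(uint32_t x)
{
	return __builtin_bswap32(x);
}
```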
Diffstat (limited to 'arch/Kconfig')
-rw-r--r-- | arch/Kconfig | 19 |
1 file changed, 19 insertions, 0 deletions
diff --git a/arch/Kconfig b/arch/Kconfig
index 54ffd0f9df2..8e9e3246b2b 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -113,6 +113,25 @@ config HAVE_EFFICIENT_UNALIGNED_ACCESS
 	  See Documentation/unaligned-memory-access.txt for more
 	  information on the topic of unaligned memory accesses.
 
+config ARCH_USE_BUILTIN_BSWAP
+	bool
+	help
+	  Modern versions of GCC (since 4.4) have builtin functions
+	  for handling byte-swapping. Using these, instead of the old
+	  inline assembler that the architecture code provides in the
+	  __arch_bswapXX() macros, allows the compiler to see what's
+	  happening and offers more opportunity for optimisation. In
+	  particular, the compiler will be able to combine the byteswap
+	  with a nearby load or store and use load-and-swap or
+	  store-and-swap instructions if the architecture has them. It
+	  should almost *never* result in code which is worse than the
+	  hand-coded assembler in <asm/swab.h>. But just in case it
+	  does, the use of the builtins is optional.
+
+	  Any architecture with load-and-swap or store-and-swap
+	  instructions should set this. And it shouldn't hurt to set it
+	  on architectures that don't have such instructions.
+
 config HAVE_SYSCALL_WRAPPERS
 	bool
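
The hunk above only defines the option; an architecture that wants the builtins would opt in from its own Kconfig, typically with `select ARCH_USE_BUILTIN_BSWAP` under its main architecture symbol. On the C side, the effect is roughly the selection sketched below. This is a hedged sketch: `__HAVE_BUILTIN_BSWAP32__` and `example_swab32()` are assumed, illustrative names modelled on the kernel's byteorder headers, not anything shown in this diff.

```c
/*
 * Hedged sketch of how the opt-in is consumed; __HAVE_BUILTIN_BSWAP32__ and
 * example_swab32() are assumed/illustrative names, not from this hunk.
 */
#include <stdint.h>

static inline uint32_t example_swab32(uint32_t x)
{
#ifdef __HAVE_BUILTIN_BSWAP32__
	/* ARCH_USE_BUILTIN_BSWAP=y and a new-enough GCC: use the intrinsic. */
	return __builtin_bswap32(x);
#else
	/*
	 * Otherwise keep the old path; in the kernel this is the arch's
	 * inline-assembler helper from <asm/swab.h>, approximated here with
	 * the portable open-coded swap so the sketch stays self-contained.
	 */
	return ((x & 0x000000ffU) << 24) |
	       ((x & 0x0000ff00U) <<  8) |
	       ((x & 0x00ff0000U) >>  8) |
	       ((x & 0xff000000U) >> 24);
#endif
}
```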