|
Formatting changes in the files which have been changed in the course
of folding foo_skas functions into their callers. These include:
    copyright updates
    header file trimming
    style fixes
    adding severity to printks (a small sketch of this follows below)
These changes should be entirely non-functional.
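For the last item, the change amounts to putting a log-level prefix in front
of an otherwise untouched message; a minimal sketch, with a made-up message:
    /* before: no explicit severity, so the default log level is used */
    printk("failed to register the device\n");
    /* after: the severity is spelled out; the text itself is unchanged */
    printk(KERN_ERR "failed to register the device\n");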
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
This patch makes a number of simplifications enabled by the removal of
CHOOSE_MODE. There were lots of functions that looked like
    int foo(args){
        foo_skas(args);
    }
The bodies of foo_skas are now folded into foo, and their declarations (and
sometimes entire header files) are deleted.
In addition, the union uml_pt_regs, which was a union between the tt and skas
register formats, is now a struct, with the tt-mode arm of the union being
removed.
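A rough before/after sketch of that change (the member names here are only
illustrative, not the ones in the tree):
    /* before: one arm of the union per mode */
    union uml_pt_regs {
        struct tt_regs tt;
        struct skas_regs skas;
    };
    /* after: with tt-mode gone, only the skas layout is left */
    struct uml_pt_regs {
        struct skas_regs skas;
    };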
It turns out that usr2_handler was unused, so it is gone.
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
The next stage after removing code which depends on CONFIG_MODE_TT is removing
the CHOOSE_MODE abstraction, which provided both compile-time and run-time
branching to either tt-mode or skas-mode code.
This patch removes choose-mode.h and all inclusions of it, and replaces all
CHOOSE_MODE invocations with the skas branch. This leaves a number of trivial
functions which will be dealt with in a later patch.
There are some changes in the uaccess and tls support which go somewhat beyond
this and eliminate some of the now-redundant functions.
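The mechanical part of the conversion looks roughly like this; the call shown
is illustrative rather than taken from the patch, and assumes the usual
two-argument CHOOSE_MODE(tt_expr, skas_expr) form:
    /* before: select the tt or skas implementation */
    ret = CHOOSE_MODE(copy_chunk_tt(to, from, n),
                      copy_chunk_skas(to, from, n));
    /* after: only the skas branch survives */
    ret = copy_chunk_skas(to, from, n);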
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
This patch fixes a crash caused by an interrupt coming in while an IRQ stack
is being torn down. When this happens, handle_signal will loop, setting up
the IRQ stack again (since the teardown had already finished) and handling
whatever signals had come in.
However, to_irq_stack returns a mask of pending signals to be handled, plus
bit zero is set if the IRQ stack was already active, and thus shouldn't be
torn down. This causes a problem because when handle_signal goes around
the loop, sig will be zero, and to_irq_stack will duly set bit zero in the
returned mask, fooling handle_signal into believing that it shouldn't tear
down the IRQ stack or return the thread_info pointers to their original
values.
This will eventually cause a crash, as the IRQ stack thread_info will
continue pointing to the original task_struct and an interrupt will look
into it after it has been freed.
The fix is to stop passing a signal number into to_irq_stack. Rather, the
pending signals mask is initialized beforehand with the bit for sig already
set. References to sig in to_irq_stack can be replaced with references to
the mask.
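The shape of the fix, as a sketch with simplified, illustrative argument
lists rather than the real prototypes:
    /* before: the signal number goes in, and to_irq_stack ORs 1 << sig into
     * the mask it returns; on the second trip around the loop sig is 0,
     * which collides with the "stack already active" bit */
    mask = to_irq_stack(sig, &ti);
    /* after: the caller seeds the mask with the signal's bit up front (note
     * the UL suffix), and to_irq_stack works purely on the mask */
    unsigned long pending = 1UL << sig;
    mask = to_irq_stack(&pending);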
[akpm@linux-foundation.org: use UL]
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Add a separate IRQ stack. This differs from i386 in having the entire
interrupt run on a separate stack rather than starting on the normal kernel
stack and switching over once some preparation has been done. The underlying
mechanism is, of course, sigaltstack.
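As a reminder of that mechanism, registering an alternate signal stack looks
roughly like this (the name and size below are illustrative, not the ones
used by the patch):
    #include <signal.h>
    #include <stdio.h>
    static unsigned char irq_stack[16384]
            __attribute__((aligned(16384)));
    static void register_irq_stack(void)
    {
            stack_t ss = {
                    .ss_sp    = irq_stack,
                    .ss_size  = sizeof(irq_stack),
                    .ss_flags = 0,
            };
            if (sigaltstack(&ss, NULL) < 0)
                    perror("sigaltstack");
            /* handlers installed with SA_ONSTACK now run on irq_stack */
    }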
Another difference is that interrupts that happen in userspace are handled on
the normal kernel stack. These cause a wait wakeup instead of a signal
delivery so there is no point in trying to switch stacks for these. There's
no other stuff on the stack, so there is no extra stack consumption.
This quirk makes it possible to have the entire interrupt run on a separate
stack - process preemption (and calls to schedule()) happens on a normal
kernel stack. If we enable CONFIG_PREEMPT, this will need to be rethought.
The IRQ stack for CPU 0 is declared in the same way as the initial kernel
stack. IRQ stacks for other CPUs will be allocated dynamically.
An extra field was added to the thread_info structure. When the active
thread_info is copied to the IRQ stack, the real_thread field points back to
the original stack. This makes it easy to tell where to copy the thread_info
struct back to when the interrupt is finished. It also serves as a marker of
a nested interrupt. It is NULL for the first interrupt on the stack, and
non-NULL for any nested interrupts.
Care is taken to behave correctly if a second interrupt comes in when the
thread_info structure is being set up or taken down. I could just disable
interrupts here, but I don't feel like giving up any of the performance gained
by not flipping signals on and off.
If an interrupt comes in during these critical periods, the handler can't run
because it has no idea what shape the stack is in. So, it sets a bit for its
signal in a global mask and returns. The outer handler will deal with this
signal itself.
Atomicity is achieved with xchg. A nested interrupt that needs to bail out will
xchg its signal mask into pending_mask and repeat in case yet another
interrupt hit at the same time, until the mask stabilizes.
The outermost interrupt will set up the thread_info and xchg a zero into
pending_mask when it is done. At this point, nested interrupts will look at
->real_thread and see that no setup needs to be done. They can just continue
normally.
Similar care needs to be taken when exiting the outer handler. If another
interrupt comes in while it is copying the thread_info, it will drop a bit
into pending_mask. The outer handler will check this and if it is non-zero,
will loop, set up the stack again, and handle the interrupt.
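A condensed sketch of that protocol; pending_mask is real, but the function
and the bookkeeping around it are simplified stand-ins for the code in the
patch:
    static unsigned long pending_mask;
    /* returns 1 if this is the outermost interrupt and may set up the
     * thread_info on the IRQ stack, 0 if it should just record itself and
     * leave the work to the outer handler */
    static int claim_irq_stack(unsigned long my_bit)
    {
            unsigned long old, seen;
            seen = xchg(&pending_mask, my_bit);
            if (seen != 0) {
                    /* Someone else is mid-setup or mid-teardown.  Fold our
                     * bit (plus anything that raced in) back into
                     * pending_mask, repeating until the value we wrote
                     * comes back unchanged, then bail out. */
                    old = my_bit;
                    do {
                            old |= seen;
                            seen = xchg(&pending_mask, old);
                    } while (seen != old);
                    return 0;
            }
            /* Outermost interrupt: copy the thread_info onto the IRQ stack
             * (real_thread pointing back to the original), then xchg a zero
             * into pending_mask so nested interrupts run normally. */
            return 1;
    }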
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Some tidying of the irq code before introducing irq stacks. Mostly
style fixes, but the timer handler calls the timer code directly
rather than going through the generic sig_handler_common_skas.
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
user_util.h isn't needed any more, so delete it and remove all includes of it.
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
os_usr1_signal() is totally unused, and os_usr1_process() is used only by TT mode.
Signed-off-by: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Acked-by: Jeff Dike <jdike@addtoit.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Fix a UML hang in which everything would just stop until some I/O happened
- a ping, someone whacking the keyboard - at which point everything would
start up again as though nothing had happened.
The cause was gcc reordering some code which absolutely needed to be
executed in the order given in the source. When unblock_signals switches signals
from off to on, it needs to see if any interrupts had happened in the
critical section. The interrupt handlers check signals_enabled - if it is
zero, then the handler adds a bit to the "pending" bitmask and returns.
unblock_signals checks this mask to see if any signals need to be
delivered.
The crucial part is this:
    signals_enabled = 1;
    save_pending = pending;
    if(save_pending == 0)
            return;
    pending = 0;
Signals are enabled first in order to avoid an interrupt arriving between
reading pending and setting it to zero, in which case the record of the
interrupt would be erased.
What happened was that gcc reordered this so that 'save_pending = pending'
came before 'signals_enabled = 1', creating a one-instruction window within
which an interrupt could arrive, set its bit in pending, and have it be
immediately erased.
When the I/O workload is purely disk-based, the loss of a block device
interrupt stops the entire I/O system because the next block request will
wait for the current one to finish. Thus the system hangs until something
else causes some I/O to arrive, such as a network packet or console input.
The fix to this particular problem is a memory barrier between enabling
signals and reading the pending signal mask. An xchg would also probably
work.
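The fixed sequence, roughly (the barrier is the whole point; everything else
is as in the snippet above):
    signals_enabled = 1;
    /* keep the compiler (and CPU) from moving the read of pending above the
     * store to signals_enabled - otherwise an interrupt landing in between
     * could set a bit that the 'pending = 0' below then silently erases */
    mb();
    save_pending = pending;
    if(save_pending == 0)
            return;
    pending = 0;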
Looking over this code for similar problems led me to do a few more
things:
- make signals_enabled and pending volatile so that they don't get cached
in registers
- add an mb() to the return paths of block_signals and unblock_signals so
that the modification of signals_enabled doesn't get shuffled into the
caller in the event that these are inlined in the future.
Signed-off-by: Jeff Dike <jdike@addtoit.com>
Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
timer_irq_inited was useless, so it is removed.
Signed-off-by: Jeff Dike <jdike@addtoit.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Have most signals go through an arch-provided handler which recovers the
sigcontext and then calls a generic handler. This replaces the
ARCH_GET_SIGCONTEXT macro, which was somewhat fragile. On x86_64, recovering
%rdx (which holds the sigcontext pointer) must be the first thing that
happens. sig_handler duly invokes it first, but there is no guarantee, as
far as I can see, that instructions won't be reordered such that %rdx is
used before then. Having the arch provide the handler seems much more
robust.
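What such an arch handler boils down to, as a sketch; the handler and the
generic function it calls are given made-up names here:
    #include <ucontext.h>
    extern void generic_sig_handler(int sig, void *mc);
    static void hard_handler(int sig)
    {
            ucontext_t *uc;
            /* %rdx still holds the ucontext pointer on entry; capture it
             * before anything else can touch the register */
            __asm__("movq %%rdx, %0" : "=r" (uc));
            generic_sig_handler(sig, &uc->uc_mcontext);
    }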
Some signals in some parts of UML require their own handlers - these places
don't call set_handler any more. They call sigaction or signal themselves.
Signed-off-by: Jeff Dike <jdike@addtoit.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
This cleans up the mess that is the timer initialization. There used to be
two timer handlers - one that basically ran during delay loop calibration and
one that handled the timer afterwards. There were also two sets of timer
initialization code - one that starts in user code and calls into the kernel
side of the house, and one that starts in kernel code and calls user code.
This eliminates one timer handler and consolidates the two sets of
initialization code.
[akpm@osdl.org: use new INTF_ flags]
Signed-off-by: Jeff Dike <jdike@addtoit.com>
Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
This patch implements soft interrupts. Interrupt enabling and disabling no
longer map to sigprocmask. Rather, a flag is set indicating whether
interrupts may be handled. If a signal comes in and interrupts are marked as
OK, then it is handled normally. If interrupts are marked as off, then the
signal handler simply returns after noting that a signal needs handling. When
interrupts are enabled later on, this pending signals flag is checked, and the
IRQ handlers are called at that point.
The point of this is to reduce the cost of local_irq_save et al, since they
are very much more common than the signals that they are enabling and
disabling. Soft interrupts produce a speed-up of ~25% on a kernel build.
Subtleties -
UML uses sigsetjmp/siglongjmp to switch contexts. sigsetjmp has been
wrapped in a save_flags-like macro which remembers the interrupt state at
setjmp time, and restores it when it is longjmp-ed back to.
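A sketch of such a wrapper; the macro name and the get_signals()/set_signals()
accessors are assumptions standing in for whatever the tree actually uses:
    #define UML_SETJMP(buf) ({                                         \
            int n;                                                     \
            volatile int enable;                                       \
            enable = get_signals();  /* interrupt state at setjmp */   \
            n = sigsetjmp(*(buf), 1);                                  \
            if (n != 0)                                                \
                    set_signals(enable); /* restored after longjmp */  \
            n;                                                         \
    })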
The enable_signals function has to loop because the IRQ handler
disables interrupts before returning. enable_signals has to return with
signals enabled, and signals may come in between the disabling and the
return to enable_signals. So, it loops for as long as there are pending
signals, ensuring that signals are enabled when it finally returns, and
that there are no pending signals that need to be dealt with.
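A condensed sketch of the mechanism described above; the flag, mask, and
helper names are illustrative rather than lifted from the patch:
    static int signals_enabled = 1;
    static unsigned long pending;
    static void sig_handler(int sig)
    {
            if (!signals_enabled) {
                    /* interrupts are soft-disabled: just note the signal */
                    pending |= 1UL << sig;
                    return;
            }
            handle_irq_signal(sig); /* stand-in for the real IRQ path */
    }
    void enable_signals(void)
    {
            unsigned long save;
            /* loop until we can leave with the flag set and nothing left
             * in the mask - the IRQ handlers we call below run with
             * interrupts disabled, so more signals may land meanwhile */
            while (1) {
                    signals_enabled = 1;
                    save = pending;
                    if (save == 0)
                            return;
                    pending = 0;
                    signals_enabled = 0;
                    deliver_pending(save); /* stand-in: runs the handlers */
            }
    }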
Signed-off-by: Jeff Dike <jdike@addtoit.com>
Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
The serial UML OS-abstraction layer patch (um/kernel/skas dir).
This moves all system calls from the skas/process.c file under the os-Linux
dir and joins the skas/process.c and skas/process_kern.c files.
Signed-off-by: Gennady Sharapov <gennady.v.sharapov@intel.com>
Signed-off-by: Jeff Dike <jdike@addtoit.com>
Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
The current implementation of boot_timer_handler isn't usable for s390. So I
changed its name to do_boot_timer_handler, which takes (struct sigcontext *)sc
as an argument. do_boot_timer_handler is called from the new
boot_timer_handler() in arch/um/os-Linux/signal.c, which uses the same
mechanism as the other signal handlers to find the sigcontext pointer.
Signed-off-by: Bodo Stroesser <bstroesser@fujitsu-siemens.com>
Signed-off-by: Jeff Dike <jdike@addtoit.com>
Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
The serial UML OS-abstraction layer patch (um/kernel dir).
This moves all system calls from the time.c file under the os-Linux dir and
joins the time.c and time_kernel.c files.
Signed-off-by: Gennady Sharapov <Gennady.V.Sharapov@intel.com>
Signed-off-by: Jeff Dike <jdike@addtoit.com>
Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
The serial UML OS-abstraction layer patch (um/kernel dir).
This moves all system calls from the signal_user.c file under the os-Linux dir.
Signed-off-by: Gennady Sharapov <Gennady.V.Sharapov@intel.com>
Signed-off-by: Jeff Dike <jdike@addtoit.com>
Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
s390 passes parameters in registers. So the only safe way to find out the
address of the signal context, the error address, and the error type
(trap_no), which are passed to signal handlers as parameters, is to declare
those parameters. So I inserted a subarch-specific macro which holds the
declaration of parameters for signal handlers.
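The idea, as a sketch; ARCH_SIGHDLR_PARAM and the parameter lists below are
illustrative, with each variant living in its own sysdep header:
    /* i386-style subarch: only the signal number needs declaring, and the
     * sigcontext is recovered relative to it on the stack */
    #define ARCH_SIGHDLR_PARAM(sig) int sig
    /* s390-style subarch: the extra values arrive as real parameters, so
     * they must be declared to be reachable at all */
    #define ARCH_SIGHDLR_PARAM(sig) int sig, struct sigcontext sc
    /* generic code declares its handlers through the macro */
    void sig_handler(ARCH_SIGHDLR_PARAM(sig));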
Signed-off-by: Bodo Stroesser <bstroesser@fujitsu-siemens.com>
Signed-off-by: Jeff Dike <jdike@addtoit.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.
Let it rip!
|