| author | Eric Dumazet <eric.dumazet@gmail.com> | 2010-05-06 23:51:21 +0000 |
|---|---|---|
| committer | David S. Miller <davem@davemloft.net> | 2010-05-17 17:18:50 -0700 |
| commit | ebda37c27d0c768947e9b058332d7ea798210cf8 | |
| tree | 1c34bd9f9c2a87dcd150ad1fcc46a3adc6bb7ca2 /net | |
| parent | 3f78d1f210ff89af77f042ab7f4a8fee39feb1c9 | |
rps: avoid one atomic in enqueue_to_backlog
If CONFIG_SMP=y, we already own the queue spinlock here, so we can avoid the
atomic test_and_set_bit() performed by napi_schedule_prep().

We now have the same number of atomic ops per netif_rx() call as with a
pre-RPS kernel.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net')
-rw-r--r-- | net/core/dev.c | 6 |
1 file changed, 4 insertions, 2 deletions
diff --git a/net/core/dev.c b/net/core/dev.c
index 988e42912e7..cdcb9cbedf4 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -2432,8 +2432,10 @@ enqueue:
 		return NET_RX_SUCCESS;
 	}
 
-	/* Schedule NAPI for backlog device */
-	if (napi_schedule_prep(&sd->backlog)) {
+	/* Schedule NAPI for backlog device
+	 * We can use non atomic operation since we own the queue lock
+	 */
+	if (!__test_and_set_bit(NAPI_STATE_SCHED, &sd->backlog.state)) {
 		if (!rps_ipi_queued(sd))
 			____napi_schedule(sd, &sd->backlog);
 	}
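For readers less familiar with the bitops involved, here is a small userspace C sketch (not kernel code) of the idea the patch relies on: test_and_set_bit() is a locked read-modify-write, while __test_and_set_bit() is a plain one that is only safe when every writer of the word is serialized by the same lock, as enqueue_to_backlog() is by the backlog queue lock. The names below (backlog_sketch, STATE_SCHED, the *_sketch helpers) are illustrative stand-ins, not kernel API.

```c
/*
 * Illustrative userspace sketch, not kernel code: contrasts an atomic
 * test-and-set of a flag bit with the plain variant that is safe only
 * while the lock serializing all writers of that word is held.
 * Builds with gcc/clang (uses the __atomic builtins and pthread spinlocks).
 */
#include <pthread.h>
#include <stdio.h>

#define STATE_SCHED 0UL			/* stand-in for NAPI_STATE_SCHED */

struct backlog_sketch {
	pthread_spinlock_t lock;	/* stand-in for the per-cpu queue lock */
	unsigned long state;		/* stand-in for napi_struct.state */
};

/* Atomic variant: roughly what napi_schedule_prep() boils down to. */
static int test_and_set_bit_sketch(unsigned long nr, unsigned long *addr)
{
	unsigned long mask = 1UL << nr;

	return (__atomic_fetch_or(addr, mask, __ATOMIC_SEQ_CST) & mask) != 0;
}

/* Non-atomic variant: a plain read-modify-write, valid only under the
 * lock that every writer of *addr takes (the queue lock in the patch). */
static int __test_and_set_bit_sketch(unsigned long nr, unsigned long *addr)
{
	unsigned long mask = 1UL << nr;
	unsigned long old = *addr;

	*addr = old | mask;
	return (old & mask) != 0;
}

int main(void)
{
	struct backlog_sketch b = { .state = 0 };

	pthread_spin_init(&b.lock, PTHREAD_PROCESS_PRIVATE);

	pthread_spin_lock(&b.lock);
	/* Under the lock, the plain variant returns the same answer as the
	 * atomic one, without issuing a locked read-modify-write. */
	if (!__test_and_set_bit_sketch(STATE_SCHED, &b.state))
		printf("flag was clear: schedule the backlog NAPI\n");
	if (__test_and_set_bit_sketch(STATE_SCHED, &b.state))
		printf("flag already set: nothing to do\n");
	pthread_spin_unlock(&b.lock);

	/* Without such a lock, only the atomic variant is correct. */
	(void)test_and_set_bit_sketch(STATE_SCHED, &b.state);

	pthread_spin_destroy(&b.lock);
	return 0;
}
```

Under the queue lock the two variants return the same result, so the patch can drop the locked read-modify-write, restoring the pre-RPS count of atomic ops per netif_rx() call that the changelog mentions.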