author     Jesper Dangaard Brouer <brouer@redhat.com>    2014-10-01 22:36:09 +0200
committer  David S. Miller <davem@davemloft.net>         2014-10-03 12:37:06 -0700
commit     808e7ac0bdef31204184904f6b3ea356a30a9ed5 (patch)
tree       ee3dc48d33d56e11df19b52c33abf2ac85667079 /net/sched
parent     5772e9a3463b264cee5a4e73ef586ad482d7ba48 (diff)
qdisc: dequeue bulking also pickup GSO/TSO packets
The TSO and GSO segmented packets already benefit from bulking on their own.

The TSO packets have always taken advantage of only updating the tailptr once for a large packet.

The GSO segmented packets have recently taken advantage of the bulking xmit_more API, via merge commit 53fda7f7f9e8 ("Merge branch 'xmit_list'"), specifically via commit 7f2e870f2a4 ("net: Move main gso loop out of dev_hard_start_xmit() into helper."), which allows qdisc requeue of the remaining list, and via commit ce93718fb7cd ("net: Don't keep around original SKB when we software segment GSO frames.").

This patch allows further bulking of TSO/GSO packets together when dequeueing from the qdisc.

Testing: Measuring HoL (Head-of-Line) blocking for TSO and GSO with netperf-wrapper. Bulking several TSO packets shows no performance regressions (requeues were in the area of 32 requeues/sec). Bulking several GSOs shows a small regression or a very small improvement (requeues were in the area of 8000 requeues/sec).

Using ixgbe at 10Gbit/s with GSO bulking, we can measure some additional latency. The base case, which is "normal" GSO bulking, sees a varying high-prio queue delay between 0.38ms and 0.47ms. Bulking several GSOs together results in a stable high-prio queue delay of 0.50ms.

Using igb at 100Mbit/s, GSO bulking shows an improvement. The base case sees a varying high-prio queue delay between 2.23ms and 2.35ms.

Signed-off-by: David S. Miller <davem@davemloft.net>
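For a picture of the tailptr/xmit_more benefit the message refers to, here is a minimal userspace sketch. It is an illustration only, built on invented names: mock_skb, mock_ring_post() and mock_kick() stand in for the real sk_buff, a driver's descriptor posting, and its doorbell/tail-pointer write; none of this is the actual kernel or driver API.

#include <stdio.h>

struct mock_skb {
	int len;
	struct mock_skb *next;	/* segments chain through ->next */
};

/* Invented stand-in for posting one descriptor to the TX ring. */
static void mock_ring_post(struct mock_skb *skb)
{
	printf("post descriptor, len=%d\n", skb->len);
}

/* Invented stand-in for the doorbell/tail-pointer write. */
static void mock_kick(void)
{
	printf("tail-pointer write (once per burst)\n");
}

/* Transmit a whole segment list, kicking the NIC only after the last
 * segment -- the batching win that xmit_more-style signalling gives. */
static void xmit_list(struct mock_skb *skb)
{
	while (skb) {
		struct mock_skb *next = skb->next;

		mock_ring_post(skb);
		if (!next)		/* last packet: flush the burst */
			mock_kick();
		skb = next;
	}
}

int main(void)
{
	struct mock_skb c = { 1448, NULL };
	struct mock_skb b = { 1448, &c };
	struct mock_skb a = { 1448, &b };

	xmit_list(&a);
	return 0;
}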
Diffstat (limited to 'net/sched')
-rw-r--r--    net/sched/sch_generic.c    12
1 file changed, 3 insertions(+), 9 deletions(-)
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index c2e87e63b83..797ebef7364 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -63,10 +63,6 @@ static struct sk_buff *try_bulk_dequeue_skb(struct Qdisc *q,
 	struct sk_buff *skb, *tail_skb = head_skb;
 
 	while (bytelimit > 0) {
-		/* For now, don't bulk dequeue GSO (or GSO segmented) pkts */
-		if (tail_skb->next || skb_is_gso(tail_skb))
-			break;
-
 		skb = q->dequeue(q);
 		if (!skb)
 			break;
@@ -76,11 +72,9 @@ static struct sk_buff *try_bulk_dequeue_skb(struct Qdisc *q,
 		if (!skb)
 			break;
 
-		/* "skb" can be a skb list after validate call above
-		 * (GSO segmented), but it is okay to append it to
-		 * current tail_skb->next, because next round will exit
-		 * in-case "tail_skb->next" is a skb list.
-		 */
+		while (tail_skb->next) /* GSO list goto tail */
+			tail_skb = tail_skb->next;
+
 		tail_skb->next = skb;
 		tail_skb = skb;
 	}
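Reading the two hunks together, the post-patch loop can be pictured with a self-contained userspace mock. This is a hedged sketch, not the kernel source: mock_skb, mock_queue, mock_dequeue() and mock_validate() are invented stand-ins for sk_buff, the qdisc, q->dequeue(q) and validate_xmit_skb(), and byte accounting is reduced to a plain len field.

#include <stdio.h>

struct mock_skb {
	int len;
	struct mock_skb *next;	/* GSO segments chain through ->next */
};

struct mock_queue {
	struct mock_skb **pkts;
	int head, count;
};

static struct mock_skb *mock_dequeue(struct mock_queue *q)
{
	return q->head < q->count ? q->pkts[q->head++] : NULL;
}

/* Stand-in for validate_xmit_skb(): may hand back a list (software-
 * segmented GSO) or NULL on drop; here it is a no-op. */
static struct mock_skb *mock_validate(struct mock_skb *skb)
{
	return skb;
}

static struct mock_skb *try_bulk_dequeue(struct mock_queue *q,
					 struct mock_skb *head_skb,
					 int bytelimit)
{
	struct mock_skb *skb, *tail_skb = head_skb;

	while (bytelimit > 0) {
		skb = mock_dequeue(q);	/* post-patch: no GSO early-out */
		if (!skb)
			break;

		bytelimit -= skb->len;	/* simplified byte accounting */
		skb = mock_validate(skb);
		if (!skb)
			break;

		/* tail_skb may point at the head of a previously
		 * appended list, so walk to the true tail first. */
		while (tail_skb->next)
			tail_skb = tail_skb->next;

		tail_skb->next = skb;
		tail_skb = skb;
	}
	return head_skb;
}

int main(void)
{
	/* One two-segment "GSO" list plus one plain packet queued. */
	struct mock_skb seg2 = { 1448, NULL };
	struct mock_skb seg1 = { 1448, &seg2 };
	struct mock_skb plain = { 100, NULL };
	struct mock_skb *pkts[] = { &seg1, &plain };
	struct mock_queue q = { pkts, 0, 2 };
	struct mock_skb head = { 60, NULL };

	for (struct mock_skb *s = try_bulk_dequeue(&q, &head, 4000);
	     s; s = s->next)
		printf("len=%d\n", s->len);
	return 0;
}

The tail walk is the crux of the patch: tail_skb can point at the head of a list appended in an earlier round, so walking to the true end keeps the chain intact before the next append, instead of bailing out as the pre-patch code did.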