author     Anton Blanchard <anton@samba.org>      2011-03-27 14:57:26 +0000
committer  David S. Miller <davem@davemloft.net>  2011-03-28 22:26:32 -0700
commit     a715dea3c8e9ef2771c534e05ee1d36f65987e64 (patch)
tree       7734586473abed25f7ec4b8423adbdba3d829a61 /include
parent     4910ac6c526d2868adcb5893e0c428473de862b5 (diff)
net: Always allocate at least 16 skb frags regardless of page size
When analysing performance of the cxgb3 on a ppc64 box I noticed that
we weren't doing much GRO merging. It turns out we are limited by the
number of SKB frags:
#define MAX_SKB_FRAGS (65536/PAGE_SIZE + 2)
With a 4kB page size we have 18 frags, but with a 64kB page size we
only have 3 frags.
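To make the arithmetic concrete, here is a small standalone sketch (not part of the patch) that evaluates the pre-patch formula for both page sizes; OLD_MAX_SKB_FRAGS is a hypothetical name used only for illustration:

#include <stdio.h>

/* Hypothetical helper mirroring the pre-patch formula (65536/PAGE_SIZE + 2). */
#define OLD_MAX_SKB_FRAGS(page_size) (65536 / (page_size) + 2)

int main(void)
{
	printf("4kB pages:  %d frags\n", OLD_MAX_SKB_FRAGS(4096));  /* prints 18 */
	printf("64kB pages: %d frags\n", OLD_MAX_SKB_FRAGS(65536)); /* prints 3 */
	return 0;
}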
I ran a single stream TCP bandwidth test to compare the performance of the two limits.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'include')
-rw-r--r--  include/linux/skbuff.h | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 24cfa626931..239083bfea1 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -122,8 +122,14 @@ struct sk_buff_head {
 
 struct sk_buff;
 
-/* To allow 64K frame to be packed as single skb without frag_list */
+/* To allow 64K frame to be packed as single skb without frag_list. Since
+ * GRO uses frags we allocate at least 16 regardless of page size.
+ */
+#if (65536/PAGE_SIZE + 2) < 16
+#define MAX_SKB_FRAGS 16
+#else
 #define MAX_SKB_FRAGS (65536/PAGE_SIZE + 2)
+#endif
 
 typedef struct skb_frag_struct skb_frag_t;
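For reference, the patched conditional can be exercised in userspace by hard-coding PAGE_SIZE, which the kernel headers normally supply; this is only a sketch of how the new macro resolves, not kernel code:

#include <stdio.h>

/* PAGE_SIZE is faked here for illustration; the kernel defines it.
 * 65536 models the 64kB-page ppc64 box from the commit message.
 */
#define PAGE_SIZE 65536

#if (65536/PAGE_SIZE + 2) < 16
#define MAX_SKB_FRAGS 16
#else
#define MAX_SKB_FRAGS (65536/PAGE_SIZE + 2)
#endif

int main(void)
{
	/* Prints 16 with a 64kB PAGE_SIZE; with PAGE_SIZE 4096 it would print 18. */
	printf("MAX_SKB_FRAGS = %d\n", MAX_SKB_FRAGS);
	return 0;
}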