[lxc-devel] Poor bridging performance on 10 GbE

Daniel Lezcano dlezcano at fr.ibm.com
Thu Mar 19 02:08:00 PDT 2009


Ryousei Takano wrote:
> Hi Eric,
> 
> On Thu, Mar 19, 2009 at 9:50 AM, Eric W. Biederman
> <ebiederm at xmission.com> wrote:
> 
> [snip]
> 
>> Bridging, last I looked, uses the least common denominator of hardware
>> offloads, which likely explains why adding a veth decreased your
>> bridging performance.
>>
> At least for now, LRO cannot coexist with bridging,
> so I disabled the LRO feature of the myri10ge driver.
> 
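(For reference, offload settings can usually be inspected and toggled
per device with ethtool, driver permitting; the interface name below
is illustrative:)

    ethtool -k eth2          # show current offload settings, including LRO
    ethtool -K eth2 lro off  # disable large receive offload
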
>>>>> Here is my experimental setting:
>>>>>        OS: Ubuntu server 8.10 amd64
>>>>>        Kernel: 2.6.27-rc8 (checkout from the lxc git repository)
>>>> I would recommend using the 2.6.29-rc8 vanilla kernel because it no
>>>> longer needs patches, a lot of fixes were done in the network namespace,
>>>> and maybe the bridge has been improved in the meantime :)
>>>>
>>> I checked out the 2.6.29-rc8 vanilla kernel.
>>> The performance after issuing lxc-start improved to 8.7 Gbps!
>>> It's a big improvement, although some performance loss remains.
>>> Can't we avoid this loss?
>> Good question.  Any chance you can profile this and see where the
>> performance loss seems to be coming from?
>>
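(On kernels of that era, oprofile was the usual choice for this kind
of profiling; a minimal sketch, assuming an uncompressed vmlinux is
available at the path shown:)

    opcontrol --vmlinux=/boot/vmlinux-$(uname -r) --start
    # ... run the throughput benchmark here ...
    opcontrol --stop
    opreport -l              # per-symbol breakdown of where time went
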
> I found out that this issue is caused by a decrease in the MTU size.
> The Myri-10G's MTU is 9000 bytes, while the veth's MTU is 1500 bytes.
> After the veth is added to the bridge, the bridge's MTU drops from
> 9000 to 1500 bytes. I changed the veth's MTU to 9000 bytes and then
> confirmed that the throughput improved to 9.6 Gbps.
> 
> The throughput between LXC containers also improved, to 4.9 Gbps,
> after changing the MTU sizes.
> 
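(A Linux bridge takes on the smallest MTU among its attached ports,
which is why adding a 1500-byte veth pulled the whole path down; a
minimal sketch of the fix, with interface names assumed:)

    ip link set veth0 mtu 9000    # host-side end of the veth pair
    ip link set eth0 mtu 9000     # peer end inside the container, if needed
    ip link show br0              # the bridge MTU should read 9000 again
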
> So I propose adding lxc.network.mtu to the LXC configuration.
> How does that sound?

Sounds good :)
Do you plan to send a patch?
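
For what it's worth, a sketch of how such an entry might look in a
container configuration; the surrounding keys and the bridge name
are illustrative:

    lxc.network.type = veth
    lxc.network.link = br0
    lxc.network.mtu  = 9000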

