PROBLEM: Processes writing large files in memory-limited LXC container are killed by OOM

Aaron Staley aaron at picloud.com
Mon Jul 1 15:27:01 UTC 2013


Hi Serge,

To reproduce, I need to run ~6 containers in parallel, each running that
command; I generally cannot reproduce it with just one container running.
(Repro instructions are in the original email.)
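
A minimal sketch of that kind of parallel run, assuming six
already-running containers named c1..c6, each with a memory limit set
via lxc.cgroup.memory.limit_in_bytes in its config (the names, count,
and file size here are all illustrative, not the exact repro):

    for i in $(seq 1 6); do
        # kick off the large write inside each container concurrently
        lxc-attach -n "c$i" -- \
            dd if=/dev/zero of=/tmp/bigfile bs=1M count=4096 &
    done
    wait    # all six dd processes run at the same time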

Regards,
Aaron


On Mon, Jul 1, 2013 at 8:21 AM, Serge Hallyn <serge.hallyn at ubuntu.com> wrote:

> Quoting Aaron Staley (aaron at picloud.com):
> > The behavior it fixes sounds similar to what I'm seeing. However, if I
> > read the logs correctly, wasn't this committed into Linux 3.5? If so,
> > wouldn't Linux 3.8.0-25-generic #37-Ubuntu SMP (where I can reproduce
> > the problem) already have this fix?
>
> Hi,
>
> I've been trying to reproduce this in Ubuntu raring, but couldn't.  I
> started a shell as an unprivileged user and stuck it in a memory cgroup
> with memory.limit_in_bytes set to 10M, then ran dd if=/dev/zero
> of=/tmp/xxx bs=1M count=100M - it was slow, and memory.failcnt hit
> 1612526, but it did in the end succeed.
>
> The kernel here is 3.8.0-23-generic #34-Ubuntu, slightly older than
> yours.  Maybe you're onto a new regression?
>
> -serge
>
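
For reference, a rough reconstruction of the single-cgroup setup Serge
describes above, assuming the cgroup-v1 memory controller is mounted at
/sys/fs/cgroup/memory (the group name "ddtest" is arbitrary):

    mkdir /sys/fs/cgroup/memory/ddtest
    echo 10M > /sys/fs/cgroup/memory/ddtest/memory.limit_in_bytes
    # move the current shell (and its future children) into the group
    echo $$ > /sys/fs/cgroup/memory/ddtest/tasks
    dd if=/dev/zero of=/tmp/xxx bs=1M count=100M
    # failcnt counts how many times usage hit the limit
    cat /sys/fs/cgroup/memory/ddtest/memory.failcnt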



-- 
Aaron Staley
*PiCloud, Inc.*

