PROBLEM: Processes writing large files in memory-limited LXC container are killed by OOM

Aaron Staley aaron at picloud.com
Tue Jun 25 20:54:36 UTC 2013


Hi Serge,

Thanks a lot. Would you know of any workarounds outside of forcing every
write to sync to disk (which kills performance)? Perhaps there are some
settings I can set in the container? Unfortunately, modifying
dirty_background_ratio and dirty_expire_centisecs globally
(/etc/sysctl.conf) as suggested by the serverfault answer does not stop
the OOM kills. (A minimal reproducer of what I'm seeing is at the bottom
of this mail.)
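
For reference, the sysctl change was along these lines in
/etc/sysctl.conf (illustrative values, just what I experimented with):

    vm.dirty_background_ratio = 5
    vm.dirty_expire_centisecs = 1000

The only application-side middle ground I can think of is to bound the
dirty page cache from inside the writer: start writeback every few
megabytes and drop the written range from the cache, rather than
syncing every single write. Something along these lines (untested
sketch; the file name and sizes are made up):

    /* Untested sketch: bound the dirty page cache by forcing writeback
     * of each chunk and then dropping those pages from the cache. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const size_t chunk = 8 << 20;     /* flush every 8 MB */
        const size_t total = 1UL << 30;   /* 1 GB file overall */
        char *buf = malloc(chunk);
        int fd = open("/tmp/bigfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        size_t done = 0;

        if (!buf || fd < 0) {
            perror("setup");
            return 1;
        }
        memset(buf, 'x', chunk);
        while (done < total) {
            if (write(fd, buf, chunk) != (ssize_t)chunk) {
                perror("write");
                return 1;
            }
            done += chunk;
            /* Start writeback of the chunk just written and wait... */
            sync_file_range(fd, done - chunk, chunk,
                            SYNC_FILE_RANGE_WAIT_BEFORE |
                            SYNC_FILE_RANGE_WRITE |
                            SYNC_FILE_RANGE_WAIT_AFTER);
            /* ...then drop the now-clean pages from the page cache. */
            posix_fadvise(fd, done - chunk, chunk, POSIX_FADV_DONTNEED);
        }
        close(fd);
        free(buf);
        return 0;
    }

That keeps the charged page cache roughly at the chunk size, but it
means patching every writer, which is what I was hoping to avoid.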

Regards,
Aaron


On Tue, Jun 25, 2013 at 6:24 AM, Serge Hallyn <serge.hallyn at ubuntu.com> wrote:

> Quoting Aaron Staley (aaron at picloud.com):
> > This is better explained here:
> > http://serverfault.com/questions/516074/why-are-applications-in-a-memory-limited-lxc-container-writing-large-files-to-di
> > (The highest-voted answer believes this to be a kernel bug.)
>
> Yeah, sorry I haven't had time to look more into it, but I'm pretty sure
> that's the case.  When you sent the previous email I looked quickly at
> the dd source.  I had always assumed that dd looked at available memory
> and malloced as much as it thought it could - but looking at the source,
> it does not in fact do that.  So yes, I think the kernel is simply
> leaving it all in page cache and accounting that to the process which
> then gets OOMed.
>
> Instead, the kernel should be throttling the task while it waits for
> the page cache to be written back to disk (especially since the
> container's blkio may itself be throttled).
>
> -serge
>
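
P.S. Here is the minimal reproducer I mentioned. It keeps only a single
64 KB buffer in anonymous memory, yet inside a container whose memory
limit is below the file size it gets OOM-killed, which matches your
page-cache-accounting explanation (path and sizes are illustrative; dd
shows the same behavior):

    /* Writes ~1 GB through one small reused buffer. The process's own
     * memory stays tiny; the page cache generated by the writes is
     * what gets charged to the container and triggers the OOM kill. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        static char buf[65536];            /* 64 KB, reused every write */
        const long chunks = (1L << 30) / sizeof(buf);
        int fd = open("/tmp/bigfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);

        if (fd < 0) {
            perror("open");
            return 1;
        }
        memset(buf, 'x', sizeof(buf));
        for (long i = 0; i < chunks; i++) {
            if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
                perror("write");
                return 1;
            }
        }
        close(fd);
        return 0;
    }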



-- 
Aaron Staley
*PiCloud, Inc.*

