Protection against container fork bombs [WAS: Re: memcg with kmem limit doesn't recover after disk i/o causes limit to be hit]

Dwight Engen dwight.engen at oracle.com
Wed May 7 17:15:14 UTC 2014


On Tue, 06 May 2014 14:40:55 +0300
Marian Marinov <mm at yuhu.biz> wrote:

> On 04/23/2014 03:49 PM, Dwight Engen wrote:
> > On Wed, 23 Apr 2014 09:07:28 +0300
> > Marian Marinov <mm at yuhu.biz> wrote:
> >
> >> On 04/22/2014 11:05 PM, Richard Davies wrote:
> >>> Dwight Engen wrote:
> >>>> Richard Davies wrote:
> >>>>> Vladimir Davydov wrote:
> >>>>>> In short, kmem limiting for memory cgroups is currently broken.
> >>>>>> Do not use it. We are working on making it usable though.
> >>> ...
> >>>>> What is the best mechanism available today, until kmem limits
> >>>>> mature?
> >>>>>
> >>>>> RLIMIT_NPROC exists but is per-user, not per-container.
> >>>>>
> >>>>> Perhaps there is an up-to-date task counter patchset or similar?
> >>>>
> >>>> I updated Frederic's task counter patches and included Max
> >>>> Kellermann's fork limiter here:
> >>>>
> >>>> http://thread.gmane.org/gmane.linux.kernel.containers/27212
> >>>>
> >>>> I can send you a more recent patchset (against 3.13.10) if you
> >>>> would find it useful.
> >>>
> >>> Yes please, I would be interested in that. Ideally even against
> >>> 3.14.1 if you have that too.
> >>
> >> Dwight, do you have these patches in any public repo?
> >>
> >> I would like to test them also.
> >
> > Hi Marian, I put the patches against 3.13.11 and 3.14.1 up at:
> >
> > git://github.com/dwengen/linux.git cpuacct-task-limit-3.13
> > git://github.com/dwengen/linux.git cpuacct-task-limit-3.14
> >
> Guys, I tested the patches with 3.12.16. However, I see a problem
> with them.
> 
> Trying to set the limit on a cgroup which already has processes in
> it does not work:

This is a similar check/limitation to the one for kmem in memcg; it is
done here to keep the res_counters consistent and to prevent them from
going negative. It could probably be relaxed slightly by using
res_counter_set_limit() instead, but you would still need to set a
limit before adding tasks to the group.
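
Roughly, the relaxed write handler could look something like this (a
sketch only; task_limit_write(), cpuacct_from_cgrp() and
ca->task_limit are illustrative names, not necessarily what the
patchset actually uses):

    static int task_limit_write(struct cgroup *cgrp, u64 limit)
    {
            struct cpuacct *ca = cpuacct_from_cgrp(cgrp);

            /*
             * res_counter_set_limit() takes the counter lock and
             * fails with -EBUSY only if the current usage already
             * exceeds the new limit, so the counter can never go
             * negative. This would replace the stricter "cgroup
             * must be empty" check.
             */
            return res_counter_set_limit(&ca->task_limit, limit);
    }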

> [root@sp2 lxc]# echo 50 > cpuacct.task_limit
> -bash: echo: write error: Device or resource busy
> [root@sp2 lxc]# echo 0 > cpuacct.task_limit
> -bash: echo: write error: Device or resource busy
> [root@sp2 lxc]#
> 
> I have even tried to remove this check:
> 
> +               if (cgroup_task_count(cgrp) || !list_empty(&cgrp->children))
> +                       return -EBUSY;
> 
> But it still gives me 'Device or resource busy'.
> 
> Any pointers as to why this is happening?
> 
> Marian
> 
> >> Marian
> >>
> >>>
> >>> Thanks,
> >>>
> >>> Richard.
> >>
> >
> 


