[PATCH 11/11] cgroup: use percpu refcnt for cgroup_subsys_states

Glauber Costa glommer at gmail.com
Fri Jun 14 14:15:12 UTC 2013


On Fri, Jun 14, 2013 at 02:55:39PM +0200, Michal Hocko wrote:
> On Wed 12-06-13 21:04:58, Tejun Heo wrote:
> [...]
> > +/**
> > + * cgroup_destroy_locked - the first stage of cgroup destruction
> > + * @cgrp: cgroup to be destroyed
> > + *
> > + * css's make use of percpu refcnts whose killing latency shouldn't be
> > + * exposed to userland and are RCU protected.  Also, cgroup core needs to
> > + * guarantee that css_tryget() won't succeed by the time ->css_offline() is
> > + * invoked.  To satisfy all the requirements, destruction is implemented in
> > + * the following two steps.
> > + *
> > + * s1. Verify @cgrp can be destroyed and mark it dying.  Remove all
> > + *     userland visible parts and start killing the percpu refcnts of
> > + *     css's.  Set up so that the next stage will be kicked off once all
> > + *     the percpu refcnts are confirmed to be killed.
> > + *
> > + * s2. Invoke ->css_offline(), mark the cgroup dead and proceed with the
> > + *     rest of destruction.  Once all cgroup references are gone, the
> > + *     cgroup is RCU-freed.
> > + *
> > + * This function implements s1.  After this step, @cgrp is gone as far as
> > + * the userland is concerned and a new cgroup with the same name may be
> > + * created.  As cgroup doesn't care about the names internally, this
> > + * doesn't cause any problem.
> 
> Glauber, is this assumption correct for the kmem caches naming scheme?
> I guess it should be, but I would rather be sure this won't blow up later,
> especially when caches might live longer than css_offline.
> 

We append names to caches, but never the name alone; it is always
(kmemcg_id:name).  So you can reuse the same name as many times as you want
(from the kmemcg perspective), provided you use different kmemcg_ids.
Because kmemcg_ids are dissociated from css_ids (partly because I was
already seeing people talking about wanting to free the css_ids earlier),
we should not have any problems with this.
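
To make that concrete, here is a minimal user-space sketch of the naming
scheme described above (memcg_cache_name(), the ids and the cache/cgroup
names are made up for illustration; this is not the kernel code):

#include <stdio.h>

/*
 * Illustration only: build a per-memcg cache name by appending
 * "(kmemcg_id:name)" to the root cache's name, as described above.
 */
static void memcg_cache_name(char *buf, size_t len, const char *root_cache,
			     int kmemcg_id, const char *cgroup_name)
{
	snprintf(buf, len, "%s(%d:%s)", root_cache, kmemcg_id, cgroup_name);
}

int main(void)
{
	char a[64], b[64];

	/* same cgroup name, different kmemcg_ids -> distinct cache names */
	memcg_cache_name(a, sizeof(a), "dentry", 3, "foo");
	memcg_cache_name(b, sizeof(b), "dentry", 7, "foo");
	printf("%s\n%s\n", a, b);	/* dentry(3:foo) vs dentry(7:foo) */

	return 0;
}

So even if a dying cgroup's name is immediately reused, the per-memcg caches
only collide if the kmemcg_id is reused as well.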

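For reference, a rough (non-compilable) sketch of how stage s1 of the
two-step destruction quoted above can be wired up with the percpu_ref API
from this series; the callback name and the destroy_work field are
illustrative, not necessarily what the patch uses:

/* confirm_kill callback: runs once css_tryget() can no longer succeed */
static void css_ref_killed(struct percpu_ref *ref)
{
	struct cgroup_subsys_state *css =
		container_of(ref, struct cgroup_subsys_state, refcnt);

	/* all percpu refs are confirmed killed; kick off stage s2 */
	schedule_work(&css->cgroup->destroy_work);	/* name illustrative */
}

	/* stage s1, in cgroup_destroy_locked(), for each css of @cgrp: */
	percpu_ref_kill_and_confirm(&css->refcnt, css_ref_killed);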


