[PATCH 4/4] cgroup: remove extra calls to find_existing_css_set

Frederic Weisbecker fweisbec at gmail.com
Thu Dec 22 09:44:39 UTC 2011


On Thu, Dec 22, 2011 at 01:50:46PM +0800, Li Zefan wrote:
> > @@ -2091,6 +2010,10 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
> >  	 * rcu or tasklist locked. instead, build an array of all threads in the
> >  	 * group - group_rwsem prevents new threads from appearing, and if
> >  	 * threads exit, this will just be an over-estimate.
> > +	 *
> > +	 * While creating the list, also make sure css_sets exist for all
> > +	 * threads to be migrated. we use find_css_set, which allocates a new
> > +	 * one if necessary.
> >  	 */
> >  	group_size = get_nr_threads(leader);
> >  	/* flex_array supports very large thread-groups better than kmalloc. */
> > @@ -2137,6 +2060,12 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
> >  		/* nothing to do if this task is already in the cgroup */
> >  		if (ent.cgrp == cgrp)
> >  			continue;
> > +		ent.cg = find_css_set(tsk->cgroups, cgrp);
> 
> Unfortunately this won't work, because we are holding tasklist_lock.

I believe we can remove tasklist_lock now (in a separate patch).

It was there to protect while_each_thread() against exec, but now we
have threadgroup_lock() for that.

I think we only need to use rcu_read_lock() to protect against concurrent
removal in exit.
 
> > +		if (!ent.cg) {
> > +			retval = -ENOMEM;
> > +			group_size = i;
> > +			goto out_list_teardown;
> > +		}
> >  		retval = flex_array_put(group, i, &ent, GFP_ATOMIC);
> >  		BUG_ON(retval != 0);
> >  		i++;
