[PATCH][BUGFIX] cgroups: safer tasklist locking in cgroup_attach_proc

Ben Blum bblum at andrew.cmu.edu
Thu Sep 1 14:46:43 PDT 2011


On Mon, Aug 15, 2011 at 08:49:57PM +0200, Oleg Nesterov wrote:
> > -	rcu_read_lock();
> > +	read_lock(&tasklist_lock);
> >  	if (!thread_group_leader(leader)) {
> 
> Agreed, this should work.
> 
> But can't we avoid the global list? thread_group_leader() or not, we do
> not really care. We only need to ensure we can safely find all threads.
> 
> How about the patch below?

I was content with the tasklist_lock approach because cgroup_attach_proc
is already a pretty heavyweight operation, and it is probably rare for a
user to want to do several of them in quick succession. I just asked
Andrew to take the simple tasklist_lock patch, since it does at least
fix the bug.
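
For reference, the shape of that simple fix (paraphrased, not the
verbatim patch) is to hold tasklist_lock across both the leadership
check and the thread-list snapshot:

	/* sketch: cgroup_attach_proc()'s snapshot, under tasklist_lock */
	read_lock(&tasklist_lock);
	if (!thread_group_leader(leader)) {
		/* lost a race with de_thread() from another thread's
		 * exec(); the caller retries from cgroup_procs_write() */
		read_unlock(&tasklist_lock);
		retval = -EAGAIN;
		goto out_free_group_list;
	}
	/* nobody can change the thread list under us now */
	tsk = leader;
	i = 0;
	do {
		get_task_struct(tsk);
		retval = flex_array_put_ptr(group, i, tsk, GFP_ATOMIC);
		BUG_ON(retval != 0);
		i++;
	} while_each_thread(leader, tsk);
	group_size = i;
	read_unlock(&tasklist_lock);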

Anyway, looking at this, hmm. I am not sure this protects adequately.
In de_thread(), the sighand lock is held only around the first half
(around zap_other_threads()), and not around the following section where
leadership is transferred (especially around the list_replace calls).
tasklist_lock is held there, though, so it seems like the right lock to
hold.
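
To spell it out, the relevant structure of fs/exec.c:de_thread() is
roughly this (a heavily trimmed sketch from memory, not the verbatim
source):

	static int de_thread(struct task_struct *tsk)
	{
		spinlock_t *lock = &tsk->sighand->siglock;

		/* first half: kill the other threads, under siglock */
		spin_lock_irq(lock);
		zap_other_threads(tsk);
		/* ... sleep until the other threads have died ... */
		spin_unlock_irq(lock);

		/*
		 * second half: leadership transfer. siglock is NOT held
		 * here; tasklist_lock is.
		 */
		if (!thread_group_leader(tsk)) {
			struct task_struct *leader = tsk->group_leader;

			write_lock_irq(&tasklist_lock);
			/* the list_replace calls in question */
			list_replace_rcu(&leader->tasks, &tsk->tasks);
			list_replace_init(&leader->sibling, &tsk->sibling);
			tsk->group_leader = tsk;
			leader->group_leader = tsk;
			write_unlock_irq(&tasklist_lock);
		}
		return 0;
	}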

> 
> 
> With or without this/your patch this leader can die right after we
> drop the lock. ss->can_attach(leader) and ss->attach(leader) look
> suspicious. If a sub-thread execs, this task_struct has nothing to
> do with the threadgroup.

hmm. I thought I had this case covered, but it's been so long since I
actually wrote the code that, if I did, I can't remember how. I think
exiting is not an issue, since we hold a reference on the task_struct,
but exec may still be a problem. I'm thinking:

- cgroup_attach_proc drops the tasklist_lock
- a sub-thread execs, and in exec_mmap (after de_thread) changes the mm
- ss->attach, for example in memcg, wants to use leader->mm, which is
  now the wrong mm (see the sketch below)
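
To illustrate with a made-up subsystem callback (not the actual memcg
code, though get_task_mm()/mmput() are the real helpers):

	/* hypothetical ->attach() showing the stale-mm hazard */
	static void some_ss_attach(struct cgroup_subsys *ss,
				   struct cgroup *cgrp,
				   struct cgroup *old_cgrp,
				   struct task_struct *leader)
	{
		struct mm_struct *mm = get_task_mm(leader);

		if (mm) {
			/*
			 * if a sub-thread won the race above, this mm is the
			 * freshly exec'd image's, not the threadgroup's that
			 * we snapshotted earlier.
			 */
			do_something_with(mm);	/* e.g. move charges */
			mmput(mm);
		}
	}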

This seems to be possible as the code currently stands. I wonder if the
best fix is just to have exec (maybe around de_thread) bounce off of, or
hold, threadgroup_fork_read_lock somewhere?
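
i.e., something like this in fs/exec.c (a hypothetical sketch,
hand-waving the lock ordering; cgroup_attach_proc already takes the
write side via threadgroup_fork_write_lock):

	/* hypothetical: in flush_old_exec(), make the threadgroup
	 * surgery exclude cgroup_attach_proc() */
	threadgroup_fork_read_lock(current);
	retval = de_thread(current);		/* zap + leadership transfer */
	if (!retval)
		retval = exec_mmap(bprm->mm);	/* the mm switch */
	threadgroup_fork_read_unlock(current);
	if (retval)
		goto out;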

> 
> 
> 
> Also. This is off-topic, but... Why cgroup_attach_proc() and
> cgroup_attach_task() do ->attach_task() + cgroup_task_migrate()
> in the different order? cgroup_attach_proc() looks wrong even
> if currently doesn't matter.

(already submitted a patch for this)

Thanks,
Ben

> 
> 
> Oleg.
> 
> --- x/kernel/cgroup.c
> +++ x/kernel/cgroup.c
> @@ -2000,6 +2000,7 @@ int cgroup_attach_proc(struct cgroup *cg
>  	/* threadgroup list cursor and array */
>  	struct task_struct *tsk;
>  	struct flex_array *group;
> +	unsigned long flags;
>  	/*
>  	 * we need to make sure we have css_sets for all the tasks we're
>  	 * going to move -before- we actually start moving them, so that in
> @@ -2027,19 +2028,10 @@ int cgroup_attach_proc(struct cgroup *cg
>  		goto out_free_group_list;
>  
>  	/* prevent changes to the threadgroup list while we take a snapshot. */
> -	rcu_read_lock();
> -	if (!thread_group_leader(leader)) {
> -		/*
> -		 * a race with de_thread from another thread's exec() may strip
> -		 * us of our leadership, making while_each_thread unsafe to use
> -		 * on this task. if this happens, there is no choice but to
> -		 * throw this task away and try again (from cgroup_procs_write);
> -		 * this is "double-double-toil-and-trouble-check locking".
> -		 */
> -		rcu_read_unlock();
> -		retval = -EAGAIN;
> +	retval = -EAGAIN;
> +	if (!lock_task_sighand(leader, &flags))
>  		goto out_free_group_list;
> -	}
> +
>  	/* take a reference on each task in the group to go in the array. */
>  	tsk = leader;
>  	i = 0;
> @@ -2055,9 +2047,9 @@ int cgroup_attach_proc(struct cgroup *cg
>  		BUG_ON(retval != 0);
>  		i++;
>  	} while_each_thread(leader, tsk);
> +	unlock_task_sighand(leader, &flags);
>  	/* remember the number of threads in the array for later. */
>  	group_size = i;
> -	rcu_read_unlock();
>  
>  	/*
>  	 * step 1: check that we can legitimately attach to the cgroup.
> 
> 

