Question about CGROUP

Tejun Heo tj at kernel.org
Wed Aug 28 13:40:00 UTC 2013


Hey, Oleg.

Eunki is reporting a stall in the following loop in
kernel/cgroup.c::cgroup_attach_task()

On Wed, Aug 28, 2013 at 05:19:57AM +0000, Eunki Kim (김은기) wrote:
> 
>      ---------------------------------------------------------------------------
>         rcu_read_lock();
>         do {
>                 struct task_and_cgroup ent;
> 
>                 /* @tsk either already exited or can't exit until the end */
>                 if (tsk->flags & PF_EXITING)
>                         continue;
> 
>                 /* as per above, nr_threads may decrease, but not increase. */
>                 BUG_ON(i >= group_size);
>                 ent.task = tsk;
>                 ent.cgrp = task_cgroup_from_root(tsk, root);
>                 /* nothing to do if this task is already in the cgroup */
>                 if (ent.cgrp == cgrp)
>                         continue;
>                 /*
>                  * saying GFP_ATOMIC has no effect here because we did prealloc
>                  * earlier, but it's good form to communicate our expectations.
>                  */
>                 retval = flex_array_put(group, i, &ent, GFP_ATOMIC);
>                 BUG_ON(retval != 0);
>                 i++;
> 
>                 if (!threadgroup)
>                         break;
>         } while_each_thread(leader, tsk);
>      ---------------------------------------------------------------------------

where the iteration goes like

  leader -> Task1 -> Task2 -> Task3 -> Task1

i.e. the leader seems to have been unlinked under RCU.  Looking at the
users of while_each_thread(), I'm confused about its locking
requirements.  Does it require tasklist_lock or is an RCU read lock
enough?  If there are special requirements, it'd be great if they could
be described in the comment above the two macros.
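
For reference, this is roughly how while_each_thread() and
next_thread() read in a v3.11-ish tree (paraphrased from
include/linux/sched.h, so please double-check against your source):

      ---------------------------------------------------------------------------
      #define while_each_thread(g, t) \
              while ((t = next_thread(t)) != g)

      static inline struct task_struct *next_thread(const struct task_struct *p)
      {
              return list_entry_rcu(p->thread_group.next,
                                    struct task_struct, thread_group);
      }
      ---------------------------------------------------------------------------

If @g (the leader in cgroup_attach_task()) gets unlinked from
->thread_group while we're iterating under rcu_read_lock(),
next_thread() keeps cycling through the surviving threads but never
returns @g again, so the "!= g" test never fires and the do/while
above spins forever, which would match the Task1 -> Task2 -> Task3 ->
Task1 pattern reported above.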

Eunki, can you please post a full stack dump from such a lockup?  Where
is the function being called from?  Is it from attach_task_by_pid() or
somewhere else?  Apparently we aren't holding threadgroup_lock at the
other call sites, so the leader isn't guaranteed to be, or stay, the
leader.
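
FWIW, the reason attach_task_by_pid() is safe is that it pins the task
with threadgroup_lock() and re-checks leadership before calling in.
Abbreviated from my reading of kernel/cgroup.c (v3.11-ish), so treat
this as a sketch rather than the exact code:

      ---------------------------------------------------------------------------
              threadgroup_lock(tsk);
              if (threadgroup) {
                      if (!thread_group_leader(tsk)) {
                              /*
                               * a race with de_thread() from another thread's
                               * exec() may have stripped @tsk of leadership;
                               * throw it away and look the task up again.
                               */
                              threadgroup_unlock(tsk);
                              put_task_struct(tsk);
                              goto retry_find_task;
                      }
              }
              ret = cgroup_attach_task(cgrp, tsk, threadgroup);
              threadgroup_unlock(tsk);
      ---------------------------------------------------------------------------

A caller which skips that threadgroup_lock() / thread_group_leader()
dance can see the leader change or get unhashed mid-walk, which is why
I'm asking where your call is coming from.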

Thanks.

-- 
tejun

