[PATCH 6/6] Makes procs file writable to move all threads by tgid at once

Serge E. Hallyn serue at us.ibm.com
Mon Aug 3 12:45:55 PDT 2009

Quoting Serge E. Hallyn (serue at us.ibm.com):
> Quoting Benjamin Blum (bblum at google.com):
> > > On Mon, Aug 3, 2009 at 1:54 PM, Serge E. Hallyn <serue at us.ibm.com> wrote:
> > > Quoting Ben Blum (bblum at google.com):
> > > What *exactly* is it we are protecting with cgroup_fork_mutex?
> > > 'fork' (as the name implies) is not a good answer, since we should be
> > > protecting data, not code.  If it is solely tsk->cgroups, then perhaps
> > > we should in fact try switching to (s?)rcu.  Then cgroup_fork() could
> > > just do rcu_read_lock, while cgroup_task_migrate() would make the change
> > > under a spinlock (to protect against concurrent cgroup_task_migrate()s),
> > > and using rcu_assign_pointer to let cgroup_fork() see consistent data
> > > either before or after the update...  That might mean that any checks done
> > > before completing the migrate which involve the # of tasks might become
> > > invalidated before the migration completes?  Seems acceptable (since
> > > it'll be a small overcharge at most and can be quickly remedied).
> > 
> > You'll notice where the rwsem is released - not until cgroup_post_fork
> > or cgroup_fork_failed. It doesn't just protect the tsk->cgroups
> > pointer, but rather guarantees atomicity between adjusting
> > tsk->cgroups and attaching it to the cgroups lists with respect to the
> > critical section in attach_proc. If you've a better name for the lock
> > for such a race condition, do suggest.
> No, the name is pretty accurate - it's the lock itself I'm objecting
> to.  Maybe it's the best we can do, though.

This is probably a stupid idea, but...  what about having zero
overhead at clone(), and instead, at cgroup_task_migrate(),
dequeue_task()ing all of the affected threads for the duration of
the migrate?

/me prepares to be hit by blunt objects
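In very rough pseudocode, entirely hand-waving the scheduler details (dequeue/enqueue here are placeholders for whatever the scheduler would actually require, not real calls with these shapes):

```
cgroup_attach_proc(leader, newcg):
    for each thread t in leader's thread group:
        dequeue t from its runqueue      # t can no longer run, so no fork races
    for each thread t in leader's thread group:
        t->cgroups = newcg               # migrate the whole group at once
        move t onto newcg's task lists
    for each thread t in leader's thread group:
        re-enqueue t on its runqueue
```

A thread already executing inside clone() when the dequeue happens is the obvious hole, hence the blunt objects.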


More information about the Containers mailing list