[PATCH][BUGFIX] cgroups: safer tasklist locking in cgroup_attach_proc

Ben Blum bblum at andrew.cmu.edu
Fri Jul 29 07:28:43 PDT 2011


Fix unstable tasklist locking in cgroup_attach_proc.

From: Ben Blum <bblum at andrew.cmu.edu>

According to this thread - https://lkml.org/lkml/2011/7/27/243 - RCU alone
is not sufficient to guarantee that the tasklist stays stable with respect
to de_thread() and exit. Taking tasklist_lock for reading, instead of
rcu_read_lock(), ensures proper exclusion while the thread-group list is
snapshotted.

Signed-off-by: Ben Blum <bblum at andrew.cmu.edu>
---
 kernel/cgroup.c |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
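
For reference, with the patch applied the snapshot step in
cgroup_attach_proc() follows roughly the pattern below. This is a
condensed, non-buildable sketch, not the full function: the flex_array
bookkeeping and surrounding error handling are elided, and the variable
names are taken from the context lines in the diff that follows.

	/*
	 * Sketch of the thread-group snapshot with tasklist_lock held
	 * for reading, so de_thread() and exit cannot change the list
	 * underneath us.
	 */
	read_lock(&tasklist_lock);
	if (!thread_group_leader(leader)) {
		/* raced with de_thread() from exec(); caller retries */
		read_unlock(&tasklist_lock);
		retval = -EAGAIN;
		goto out_free_group_list;
	}
	/* walk every thread in the group while the list cannot change */
	i = 0;
	tsk = leader;
	do {
		/* ... record tsk in the snapshot array ... */
		i++;
	} while_each_thread(leader, tsk);
	group_size = i;		/* remember the thread count for later */
	read_unlock(&tasklist_lock);
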

diff a/kernel/cgroup.c b/kernel/cgroup.c
--- a/kernel/cgroup.c	2011-07-21 19:17:23.000000000 -0700
+++ b/kernel/cgroup.c	2011-07-29 06:17:47.000000000 -0700
@@ -2024,7 +2024,7 @@
 		goto out_free_group_list;
 
 	/* prevent changes to the threadgroup list while we take a snapshot. */
-	rcu_read_lock();
+	read_lock(&tasklist_lock);
 	if (!thread_group_leader(leader)) {
 		/*
 		 * a race with de_thread from another thread's exec() may strip
@@ -2033,7 +2033,7 @@
 		 * throw this task away and try again (from cgroup_procs_write);
 		 * this is "double-double-toil-and-trouble-check locking".
 		 */
-		rcu_read_unlock();
+		read_unlock(&tasklist_lock);
 		retval = -EAGAIN;
 		goto out_free_group_list;
 	}
@@ -2054,7 +2054,7 @@
 	} while_each_thread(leader, tsk);
 	/* remember the number of threads in the array for later. */
 	group_size = i;
-	rcu_read_unlock();
+	read_unlock(&tasklist_lock);
 
 	/*
 	 * step 1: check that we can legitimately attach to the cgroup.
