memcg creates an unkillable task in 3.11-rc2

Eric W. Biederman ebiederm at xmission.com
Fri Sep 27 00:35:43 UTC 2013


Fabio Kung <fabio.kung at gmail.com> writes:

> On Tue, Jul 30, 2013 at 9:28 AM, Eric W. Biederman
> <ebiederm at xmission.com> wrote:
>>
>> ebiederm at xmission.com (Eric W. Biederman) writes:
>>
>> Ok.  I have been trying for an hour and I have not been able to
>> reproduce the weird memcg hang, which used to be something I could
>> reproduce trivially.  So it appears the patch below is the fix.
>>
>> After I sleep I will see if I can turn it into a proper patch.
>
>
> Contributing another data point: I am seeing similar issues with
> un-killable tasks inside LXC containers on a vanilla 3.8.11 kernel.
> The stack from a zombie task looks like this:
>
> # cat /proc/12499/stack
> [<ffffffff81186226>] __mem_cgroup_try_charge+0xa96/0xbf0
> [<ffffffff8118670b>] __mem_cgroup_try_charge_swapin+0xab/0xd0
> [<ffffffff8118678d>] mem_cgroup_try_charge_swapin+0x5d/0x70
> [<ffffffff811524f5>] handle_pte_fault+0x315/0xac0
> [<ffffffff81152f11>] handle_mm_fault+0x271/0x3d0
> [<ffffffff815bbf3b>] __do_page_fault+0x20b/0x4c0
> [<ffffffff815bc1fe>] do_page_fault+0xe/0x10
> [<ffffffff815b8718>] page_fault+0x28/0x30
> [<ffffffff81056327>] mm_release+0x127/0x140
> [<ffffffff8105ece1>] do_exit+0x171/0xa70
> [<ffffffff8105f635>] do_group_exit+0x55/0xd0
> [<ffffffff8106fa8f>] get_signal_to_deliver+0x23f/0x5d0
> [<ffffffff81014402>] do_signal+0x42/0x600
> [<ffffffff81014a48>] do_notify_resume+0x88/0xc0
> [<ffffffff815c0b92>] int_signal+0x12/0x17
> [<ffffffffffffffff>] 0xffffffffffffffff
>
> Same symptoms that Eric described: a race condition in memcg when a
> page fault happens while the process is exiting.
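>
> The shape of the hang, as far as I can tell, is roughly the following
> (an illustrative sketch with hypothetical helper names, not the actual
> mm/memcontrol.c code): the charge path keeps retrying while the group
> is over its limit, but the task whose death would release memory is
> the faulting, already-exiting task itself.
>
> static int try_charge_sketch(struct mem_cgroup *memcg,
>                              unsigned long nr_pages)
> {
>         for (;;) {
>                 /* charge_would_exceed_limit() and the helpers below
>                  * are hypothetical stand-ins for the real logic */
>                 if (!charge_would_exceed_limit(memcg, nr_pages))
>                         return 0;       /* charge fits, fault completes */
>                 if (reclaim_some_pages(memcg))
>                         continue;       /* reclaim made room, retry */
>                 /*
>                  * Nothing here lets a dying task through: a check
>                  * like fatal_signal_pending(current) or
>                  * (current->flags & PF_EXITING) would let the charge
>                  * bypass the limit so the exit -- and the fault taken
>                  * in mm_release() -- could finish.
>                  */
>                 wait_for_oom_progress(memcg);   /* victim is the
>                                                    current task: stuck */
>         }
> }
>
> If I read Eric's patch right, it adds exactly that kind of bypass for
> dying tasks.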
>
> I went ahead and reproduced the bug described earlier here on the
> same 3.8.11 kernel, also using the Mesos framework
> (http://mesos.apache.org/) memory Ballooning tests. The call trace
> from a zombie task in this case looks very similar:
>
> # cat /proc/22827/stack
> [<ffffffff81186280>] __mem_cgroup_try_charge+0xaf0/0xbf0
> [<ffffffff8118670b>] __mem_cgroup_try_charge_swapin+0xab/0xd0
> [<ffffffff8118678d>] mem_cgroup_try_charge_swapin+0x5d/0x70
> [<ffffffff811524f5>] handle_pte_fault+0x315/0xac0
> [<ffffffff81152f11>] handle_mm_fault+0x271/0x3d0
> [<ffffffff815bbf3b>] __do_page_fault+0x20b/0x4c0
> [<ffffffff815bc1fe>] do_page_fault+0xe/0x10
> [<ffffffff815b8718>] page_fault+0x28/0x30
> [<ffffffff81056327>] mm_release+0x127/0x140
> [<ffffffff8105ece1>] do_exit+0x171/0xa70
> [<ffffffff8105f635>] do_group_exit+0x55/0xd0
> [<ffffffff8106fa8f>] get_signal_to_deliver+0x23f/0x5d0
> [<ffffffff81014402>] do_signal+0x42/0x600
> [<ffffffff81014a48>] do_notify_resume+0x88/0xc0
> [<ffffffff815c0b92>] int_signal+0x12/0x17
> [<ffffffffffffffff>] 0xffffffffffffffff
>
> Then I applied Eric's patch below, and I can no longer reproduce the
> problem. Before the patch, it was very easy to reproduce with some
> extra memory pressure from other processes in the instance (which
> increases the probability of page faults while processes are exiting).
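>
> For reference, the kind of reproducer we used boils down to something
> like the program below (a rough sketch, not our actual test code; our
> real runs used the Mesos ballooning tests, and the setup assumes a v1
> memory controller mounted at /sys/fs/cgroup/memory):
>
> /*
>  * repro_sketch.c: run inside a memcg with a low limit, e.g.
>  *   mkdir /sys/fs/cgroup/memory/repro
>  *   echo 64M > /sys/fs/cgroup/memory/repro/memory.limit_in_bytes
>  *   echo $$  > /sys/fs/cgroup/memory/repro/tasks
>  * Children allocate past the limit and get SIGKILLed while still
>  * faulting pages in, so some exits happen mid-page-fault.
>  */
> #include <signal.h>
> #include <stdlib.h>
> #include <string.h>
> #include <sys/types.h>
> #include <sys/wait.h>
> #include <unistd.h>
>
> #define CHILD_BYTES (128UL << 20)       /* well above the 64M limit */
>
> int main(void)
> {
>         for (;;) {
>                 pid_t pid = fork();
>                 if (pid < 0)
>                         return 1;
>                 if (pid == 0) {
>                         char *buf = malloc(CHILD_BYTES);
>                         if (buf)
>                                 memset(buf, 0xff, CHILD_BYTES); /* fault
>                                                                    pages in */
>                         _exit(0);
>                 }
>                 usleep(100 * 1000);     /* let the child start faulting */
>                 kill(pid, SIGKILL);     /* kill it mid-fault */
>                 waitpid(pid, NULL, 0);  /* a stuck child hangs here */
>         }
> }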
>
> We also tried a vanilla 3.11.1 kernel, and we could reproduce the bug
> on it pretty easily.

There are some significant fixes in 3.12-rcX.  I haven't had a chance to
look at them in detail yet, but they look very promising.

Eric

