[PATCH v9 03/13] memcg: add dirty page accounting infrastructure
gthelen at google.com
Wed Aug 17 23:07:29 PDT 2011
KAMEZAWA Hiroyuki <kamezawa.hiroyu at jp.fujitsu.com> writes:
> On Wed, 17 Aug 2011 09:14:55 -0700
> Greg Thelen <gthelen at google.com> wrote:
>> Add memcg routines to count dirty, writeback, and unstable_NFS pages.
>> These routines are not yet used by the kernel to count such pages. A
>> later change adds kernel calls to these new routines.
>> As inode pages are marked dirty, if the dirtied page's cgroup differs
>> from the inode's cgroup, then mark the inode shared across several
>> cgroups.
>> Signed-off-by: Greg Thelen <gthelen at google.com>
>> Signed-off-by: Andrea Righi <andrea at betterlinux.com>
> Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu at jp.fujitsu.com>
> A nitpick..
>> +static inline
>> +void mem_cgroup_move_account_page_stat(struct mem_cgroup *from,
>> +				       struct mem_cgroup *to,
>> +				       enum mem_cgroup_stat_index idx)
>> +{
>> +	preempt_disable();
>> +	__this_cpu_dec(from->stat->count[idx]);
>> +	__this_cpu_inc(to->stat->count[idx]);
>> +	preempt_enable();
>> +}
> This will work without preempt_disable/enable. A CPU change between the
> dec and the inc will not be a problem.
I agree, but this fix is general cleanup, which seems independent of
memcg dirty accounting. This preemption disable/enable pattern exists
before this patch series in both mem_cgroup_charge_statistics() and
mem_cgroup_move_account(). For consistency we should change both. To
keep the dirty page accounting series simple, I would like to make these
changes outside of this series. On x86, this_cpu_dec/inc looks
equivalent to __this_cpu_dec/inc, so I assume the only trade-off is that
preemptible non-x86 architectures using the generic this_cpu_*()
implementation will internally disable/enable preemption in each
this_cpu_*() operation.
I'll submit a cleanup patch outside of the dirty limit patches for this.