[PATCH v2 00/28] memcg-aware slab shrinking

Glauber Costa glommer at parallels.com
Tue Apr 2 07:55:15 UTC 2013


On 04/02/2013 08:58 AM, Dave Chinner wrote:
> On Mon, Apr 01, 2013 at 07:38:43AM -0500, Serge Hallyn wrote:
>> Quoting Glauber Costa (glommer at parallels.com):
>>> Hi,
>>>
>>> Notes:
>>> ======
>>>
>>> This is v2 of memcg-aware LRU shrinking. I've been testing it extensively
>>> and it behaves well, at least from the isolation point of view, but I feel
>>> more testing is needed before we commit to it. Comments welcome.
>>
>> Do you have any performance tests (preferably with enough runs, with and
>> without this patchset, to establish a 95% confidence interval) showing the
>> impact this has? Certainly the feature sounds worthwhile, but I'm
>> curious about the cost of maintaining this extra state.
> 
> The reason for the node-aware LRUs in the first place is
> performance, i.e. to remove the global LRU locks from the shrinkers
> and the LRU list operations. For XFS (at least), the VFS LRU
> operations are a significant source of contention at 16p, and at
> higher CPU counts they can basically cause spinlock meltdown.
> 
> I've done performance testing on them on 16p machines with
> fake-numa=4 under such contention-generating workloads (e.g. 16-way
> concurrent fsmark runs) and seen the LRU locks disappear from the
> profiles. The performance improvement on machines of this size under
> these workloads is still within the run-to-run variance of the
> benchmarks I've run, but the fact that the lock no longer shows up in
> the profiles at all suggests that scalability on larger machines will
> be significantly improved.
> 
> As for the memcg side of things, I'll leave that to Glauber....
> 
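
For reference, the node-aware side Dave describes boils down to
splitting the single global LRU list and its lock into one list and
lock per NUMA node. A minimal sketch of that shape (my paraphrase of
the structures in his series, not the exact code; it assumes the usual
linux/list.h, linux/spinlock.h and linux/nodemask.h definitions):

	struct list_lru_node {
		spinlock_t		lock;	/* per-node, replaces the global lock */
		struct list_head	list;
		long			nr_items;
	} ____cacheline_aligned_in_smp;

	struct list_lru {
		struct list_lru_node	node[MAX_NUMNODES];
		nodemask_t		active_nodes;	/* nodes with items to scan */
	};

A shrinker then only takes the lock of the node it is scanning, which
is why the global lock contention disappears from the profiles.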

I will chime in here about the per-node design on the memcg side,
because I believe it is the most important design point and the one
I'd like to reach agreement on quickly.
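
In rough strokes, the question is whether every (memcg, node) pair
should get its own LRU list, so that a per-memcg shrink walks exactly
the objects charged to that memcg on that node. A sketch of that shape
(illustrative only; the memcg_lrus name and the array layout are mine,
not the patchset code):

	struct list_lru {
		/* root lists, one per node, as in the node-aware series */
		struct list_lru_node	node[MAX_NUMNODES];
		nodemask_t		active_nodes;
		/* one per-node array per memcg, indexed by memcg ID */
		struct list_lru_node	**memcg_lrus;
	};

The extra state Serge asks about is then essentially one list head,
counter and lock per (memcg, node) pair, plus the bookkeeping needed
to grow memcg_lrus as cgroups are created and destroyed.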


