[PATCH v2 02/28] vmscan: take at least one pass with shrinkers

Kamezawa Hiroyuki kamezawa.hiroyu at jp.fujitsu.com
Mon Apr 1 07:26:45 UTC 2013


(2013/03/29 18:13), Glauber Costa wrote:
> In very low free kernel memory situations, it may be the case that we
> have fewer objects to free than our initial batch size. If this is the
> case, it is better to shrink those and open space for the new workload
> than to keep them and fail the new allocations.
> 
> More specifically, this happens because the scan is encoded in a loop
> with the condition "while (total_scan >= batch_size)", so in such a
> case we do not even enter the loop.
> 
> This patch turns it into a do {} while () loop, which guarantees that
> we scan at least once, while keeping the behaviour exactly the same
> for the cases in which total_scan >= batch_size.
> 
> Signed-off-by: Glauber Costa <glommer at parallels.com>
> Reviewed-by: Dave Chinner <david at fromorbit.com>
> Reviewed-by: Carlos Maiolino <cmaiolino at redhat.com>
> CC: "Theodore Ts'o" <tytso at mit.edu>
> CC: Al Viro <viro at zeniv.linux.org.uk>
> ---
>   mm/vmscan.c | 4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
> 

Doesn't this break

==
                /*
                 * copy the current shrinker scan count into a local variable
                 * and zero it so that other concurrent shrinker invocations
                 * don't also do this scanning work.
                 */
                nr = atomic_long_xchg(&shrinker->nr_in_batch, 0);
==

this xchg magic?
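
For reference, the accounting around that xchg looks roughly like this
(my simplified paraphrase of mm/vmscan.c; locals and tracepoints
omitted):

==
	/*
	 * Claim the whole deferred count for this invocation so
	 * that concurrent shrinker calls do not repeat the work.
	 */
	nr = atomic_long_xchg(&shrinker->nr_in_batch, 0);
	total_scan = nr + delta;

	do {
		/* scan one batch_size worth of objects */
		total_scan -= batch_size;
	} while (total_scan >= batch_size);

	/*
	 * Put any unused scan count back.  After this patch,
	 * total_scan can be negative here when it started below
	 * batch_size, in which case nothing is put back.
	 */
	if (total_scan > 0)
		atomic_long_add(total_scan, &shrinker->nr_in_batch);
==

If total_scan starts below batch_size, the single pass leaves it
negative, so the count taken by the xchg never seems to be restored.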

Thanks,
-Kame


> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 88c5fed..fc6d45a 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -280,7 +280,7 @@ unsigned long shrink_slab(struct shrink_control *shrink,
>   					nr_pages_scanned, lru_pages,
>   					max_pass, delta, total_scan);
>   
> -		while (total_scan >= batch_size) {
> +		do {
>   			int nr_before;
>   
>   			nr_before = do_shrinker_shrink(shrinker, shrink, 0);
> @@ -294,7 +294,7 @@ unsigned long shrink_slab(struct shrink_control *shrink,
>   			total_scan -= batch_size;
>   
>   			cond_resched();
> -		}
> +		} while (total_scan >= batch_size);
>   
>   		/*
>   		 * move the unused scan count back into the shrinker in a
> 
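
For clarity, the behavioural difference of this hunk can be shown with
a minimal userspace sketch (hypothetical values, not kernel code):

==
#include <stdio.h>

int main(void)
{
	long total_scan = 64, batch_size = 128, scanned = 0;

	/*
	 * Old form: while (total_scan >= batch_size) -- with these
	 * values the body never runs.  The new form below guarantees
	 * exactly one pass.
	 */
	do {
		scanned += batch_size;
		total_scan -= batch_size;
	} while (total_scan >= batch_size);

	/* prints: scanned=128 total_scan=-64 */
	printf("scanned=%ld total_scan=%ld\n", scanned, total_scan);
	return 0;
}
==

The old loop would have left scanned at 0 here; the new one takes one
full batch and leaves total_scan negative.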



