IO scheduler based IO Controller V2

Fabio Checconi fchecconi at gmail.com
Wed May 6 03:20:30 PDT 2009


Hi,

> From: Balbir Singh <balbir at linux.vnet.ibm.com>
> Date: Wed, May 06, 2009 09:12:54AM +0530
>
> * Peter Zijlstra <peterz at infradead.org> [2009-05-06 00:20:49]:
> 
> > On Tue, 2009-05-05 at 13:24 -0700, Andrew Morton wrote:
> > > On Tue,  5 May 2009 15:58:27 -0400
> > > Vivek Goyal <vgoyal at redhat.com> wrote:
> > > 
> > > > 
> > > > Hi All,
> > > > 
> > > > Here is the V2 of the IO controller patches generated on top of 2.6.30-rc4.
> > > > ...
> > > > Currently primarily two other IO controller proposals are out there.
> > > > 
> > > > dm-ioband
> > > > ---------
> > > > This patch set is from Ryo Tsuruta of valinux.
> > > > ...
> > > > IO-throttling
> > > > -------------
> > > > This patch set from Andrea Righi provides a max bandwidth controller.
> > > 
> > > I'm thinking we need to lock you guys in a room and come back in 15 minutes.
> > > 
> > > Seriously, how are we to resolve this?  We could lock me in a room and
> > > come back in 15 days, but there's no reason to believe that I'd emerge
> > > with the best answer.
> > > 
> > > I tend to think that a cgroup-based controller is the way to go. 
> > > Anything else will need to be wired up to cgroups _anyway_, and that
> > > might end up messy.
> > 
> > FWIW I subscribe to the io-scheduler faith as opposed to the
> > device-mapper cult ;-)
> > 
> > Also, I don't think a simple throttle will be very useful, a more mature
> > solution should cater to more use cases.
> >
> 
> I tend to agree, unless Andrea can prove us wrong. I don't think
> throttling a task (not letting it consume CPU, memory when its IO
> quota is exceeded) is a good idea. I've asked Andrea that question a
> few times, but got no response.
>  

  From what I can see, the principle behind io-throttling is not too
different from what happens when an io scheduler uses idling to achieve
bandwidth differentiation among tasks with synchronous access patterns.

When an io scheduler anticipates requests from a task/cgroup, all the
other tasks with pending (synchronous) requests are effectively blocked;
the anticipated task is allowed to submit additional io while the others
remain blocked, and it is precisely this that creates the bandwidth
differentiation among them.
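
To make the analogy concrete, here is a toy userspace sketch in C
(purely illustrative, though compilable; the slice length, idle window
and tick costs are made-up parameters, not values from any of the patch
sets).  While the scheduler idles waiting for the favored task's next
synchronous request, the other task's pending request stays blocked, so
the favored task gets a proportionally larger share of the disk over
the same interval:

#include <stdio.h>

#define TOTAL_TICKS 100
#define SERVICE     1   /* ticks to serve one request               */
#define IDLE_WAIT   1   /* ticks spent idling for the favored task  */
#define SLICE       4   /* favored requests served before switching */

int main(void)
{
	int served_favored = 0, served_other = 0;
	int in_slice = 0;

	for (int t = 0; t < TOTAL_TICKS; ) {
		if (in_slice < SLICE) {
			/*
			 * Favored task owns the disk: serve its request,
			 * then idle waiting for its next one.  The other
			 * task has a pending request the whole time, but
			 * stays blocked.
			 */
			t += SERVICE + IDLE_WAIT;
			served_favored++;
			in_slice++;
		} else {
			/* Slice expired: serve the other task once,
			 * without idling for it. */
			t += SERVICE;
			served_other++;
			in_slice = 0;
		}
	}
	printf("favored: %d requests, other: %d requests\n",
	       served_favored, served_other);
	return 0;
}

With these numbers the favored task completes roughly four times as
many requests as the other one, simply because the other one spends
most of its time blocked behind the idle windows.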

Of course there are many differences, in particular in the latencies
the two mechanisms introduce, in the granularity at which they allocate
disk service, and in what throttling and proportional share io
scheduling can or cannot guarantee, but AFAIK both of them rely on
blocking tasks to create bandwidth differentiation.
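
The throttling side can be sketched along the same toy lines (again
with hypothetical numbers; this is a generic token-bucket style sketch,
not Andrea's actual implementation).  A task whose quota for the
current period is exhausted is simply blocked until the next refill,
which is, once more, blocking tasks to shape the achieved bandwidth:

#include <stdio.h>

#define TOTAL_TICKS  100
#define PERIOD_TICKS 10   /* quota refill period          */
#define QUOTA        3    /* requests allowed per period  */

int main(void)
{
	int tokens = 0, submitted = 0, blocked_ticks = 0;

	for (int t = 0; t < TOTAL_TICKS; t++) {
		if (t % PERIOD_TICKS == 0)
			tokens = QUOTA;  /* refill at the period boundary */
		if (tokens > 0) {
			tokens--;        /* request dispatched this tick  */
			submitted++;
		} else {
			blocked_ticks++; /* quota exhausted: task sleeps  */
		}
	}
	printf("submitted %d requests, blocked for %d ticks\n",
	       submitted, blocked_ticks);
	return 0;
}

Here the task spends the majority of each period blocked, and its
achieved bandwidth is capped at QUOTA requests per PERIOD_TICKS
regardless of how fast it could otherwise submit.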

