[Ksummit-discuss] [CORE TOPIC] Core Kernel support for Compute-Offload Devices

Jerome Glisse j.glisse at gmail.com
Sat Aug 1 19:08:48 UTC 2015


On Sat, Aug 01, 2015 at 05:57:29PM +0200, Joerg Roedel wrote:
> On Fri, Jul 31, 2015 at 12:13:04PM -0400, Jerome Glisse wrote:
> > Hence scheduling here is different: on a GPU it is more about
> > a queue of several thousand threads, and you just move things
> > up and down depending on what needs to be executed first. The GPU
> > also has hardware scheduling that constantly switches between active
> > threads, which is why memory latency is so well hidden on GPUs.
> 
> That's why I wrote "batch"-scheduler in the proposal. It's true that it
> does not make sense to schedule out a GPU process, and some devices do
> scheduling in hardware anyway.
> 
> But the Linux kernel still needs to decide which jobs are sent to the
> offload device in which order, more like an io-scheduler.
> 
> There might be a compute job that only utilizes 60% of the device
> resources, so the in-kernel scheduler could start another job there to
> utilize the other 40%.
> 
> I think it's worth a discussion whether some common schedulers (like for
> blk-io) make sense here too.

It is definitely worth a discussion, but I fear right now there is little
room for anything in the kernel. Scheduling is done almost 100% in
hardware. The idea of a GPU is that you have 1000 compute units but the
hardware keeps track of 10000 threads, and at any point in time there is
a high probability that 1000 of those 10000 threads are ready to compute
something. So if a job is only using 60% of the GPU, the remaining 40%
would automatically be used by the next batch of threads. This is a
simplification, as the number of threads the hardware can keep track of
depends on several factors and varies from one model to another, even
within the same family from the same manufacturer.

Where the kernel has control is over which command queues (today GPUs
have several command queues that run concurrently) can spawn threads
inside the GPU, and things like which queue gets priority over another.
There are even mechanisms to "divide" the GPU among queues (you assign a
fraction of the GPU compute units to a particular queue), though I expect
this last one is vanishing.
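
To make that concrete, here is a purely hypothetical sketch (none of
these structures or names exist in any driver today) of the kind of
per-queue knobs the kernel could expose:

#include <stdbool.h>

struct offload_device;			/* opaque, driver specific */

/*
 * Hypothetical per-queue controls (all names made up for illustration):
 * gate which command queues may spawn threads, set relative priorities
 * between queues, and optionally pin a queue to a subset of compute units.
 */
struct offload_queue_params {
	unsigned int	queue_id;	/* hardware command queue index */
	bool		can_spawn;	/* allowed to launch new GPU threads */
	int		priority;	/* relative priority among queues */
	unsigned long	cu_mask;	/* compute units reserved for this
					 * queue ("divide the GPU" mode) */
};

/* A driver would translate these knobs into device-specific register writes. */
int offload_queue_configure(struct offload_device *dev,
			    const struct offload_queue_params *params);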

Also note that many GPU manufacturers are pushing for userspace queues
(I think it is a Microsoft requirement), in which case the kernel has
even less control.

I agree that the blk-io design is probably the closest thing that might fit.
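
If we were to go down that road, I would imagine something loosely
modeled on the block layer's elevator_ops. Again, this is only a sketch
and every name below is made up:

#include <stdbool.h>

struct offload_job;			/* one batch of work for the device */
struct offload_sched_queue;		/* per-device software queue */

/*
 * Hypothetical ops table for ordering jobs before they are handed to
 * the offload device; nothing like this exists today.
 */
struct offload_sched_ops {
	/* enqueue a job submitted by userspace */
	void (*add_job)(struct offload_sched_queue *q,
			struct offload_job *job);
	/* pick the next job to hand to a hardware command queue */
	struct offload_job *(*dispatch)(struct offload_sched_queue *q);
	/* decide whether two jobs can be batched together */
	bool (*may_batch)(struct offload_job *a, struct offload_job *b);
};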


> > I already implemented several versions of it and posted a couple of
> > them for review. You do not want automatic migration because the
> > kernel does not have enough information here.
> 
> Some devices might provide that information, see the extended-access bit
> of Intel VT-d.

This would be limited to integrated GPUs, and so far only on one platform.
My point was more that userspace has far more information to make a good
decision here. The userspace program is more likely to know which parts of
the dataset are going to be repeatedly accessed by the GPU threads.
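
As an illustration of what I mean (hypothetical API, nothing like this
exists today), userspace could hint the kernel about the device-hot part
of its address space with something madvise()-like:

#include <stddef.h>

/*
 * Hypothetical madvise()-style hint: the program marks the range of its
 * dataset that GPU threads will access repeatedly, so the kernel knows
 * that migrating it to device memory is worthwhile. Neither the flags
 * nor the call exist today; they are only here to illustrate the point.
 */
#define OFFLOAD_ADVISE_DEVICE_HOT	1	/* device will hammer this range */
#define OFFLOAD_ADVISE_DEVICE_COLD	2	/* device rarely touches it */

int offload_madvise(void *addr, size_t length, int advice);

/*
 * Typical use, before launching the GPU threads:
 *	offload_madvise(dataset, dataset_size, OFFLOAD_ADVISE_DEVICE_HOT);
 */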

Cheers,
Jérôme

