[RFC] IO Controller

Nauman Rafique nauman at google.com
Tue Apr 21 20:10:56 PDT 2009


On Tue, Apr 21, 2009 at 8:04 PM, Gui Jianfeng
<guijianfeng at cn.fujitsu.com> wrote:
> Vivek Goyal wrote:
>> On Fri, Apr 10, 2009 at 05:33:10PM +0800, Gui Jianfeng wrote:
>>> Vivek Goyal wrote:
>>>> Hi All,
>>>>
>>>> Here is another posting for IO controller patches. Last time I had posted
>>>> RFC patches for an IO controller which did bio control per cgroup.
>>>   Hi Vivek,
>>>
>>>   I got the following OOPS when testing, can't reproduce again :(
>>>
>>
>> Hi Gui,
>>
>> Thanks for the report. Will look into it and see if I can reproduce it.
>
>  Hi Vivek,
>
>  The following script can reproduce the bug in my box.
>
> #!/bin/sh
>
> mkdir /cgroup
> mount -t cgroup -o io io /cgroup
> mkdir /cgroup/test1
> mkdir /cgroup/test2
>
> echo cfq > /sys/block/sda/queue/scheduler
> echo 7 > /cgroup/test1/io.ioprio
> echo 1 > /cgroup/test2/io.ioprio
> echo 1 > /proc/sys/vm/drop_caches
> dd if=1000M.1 of=/dev/null &
> pid1=$!
> echo $pid1
> echo $pid1 > /cgroup/test1/tasks
> dd if=1000M.2 of=/dev/null &
> pid2=$!
> echo $pid2
> echo $pid2 > /cgroup/test2/tasks
>
>
> rmdir /cgroup/test1
> rmdir /cgroup/test2
> umount /cgroup
> rmdir /cgroup

Yes, this bug happens when we move a task from one cgroup to another
and then delete the cgroup. Since the actual move to the new cgroup
is performed in a delayed fashion, if the cgroup is removed before
another request from the task is seen (which is when the actual move
is performed), we hit a BUG_ON. I am working on a patch that will
solve this problem and a few others; basically, it adds reference
counting for the io_group structure. I am having a few problems with
it at the moment; I will post the patch as soon as I can get it to
work.
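
For reference, the idea is roughly the following. This is only a
minimal sketch of the ref-counting scheme, not the actual patch; the
names io_group_get/io_group_put and the ref field are illustrative:

/* Illustrative sketch only -- names are hypothetical, not the patch. */
struct io_group {
	atomic_t ref;
	/* ... per-group scheduling state ... */
};

/* Pin the group while a delayed task move is still pending. */
static inline void io_group_get(struct io_group *iog)
{
	atomic_inc(&iog->ref);
}

/* Drop a reference; free the group once the last user is gone. */
static inline void io_group_put(struct io_group *iog)
{
	if (atomic_dec_and_test(&iog->ref))
		kfree(iog);
}

With something like this, cgroup removal only drops the cgroup's own
reference instead of freeing the io_group outright; the pending-move
path holds its own reference and releases it once the move is
actually performed, so the BUG_ON can no longer trip on a freed
group.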

>
> --
> Regards
> Gui Jianfeng

