How much of a mess does OpenVZ make? ;) Was: What can OpenVZ do?

Joseph Ruscio jruscio at
Sat Mar 14 10:11:11 PDT 2009

On Mar 14, 2009, at 1:25 AM, Ingo Molnar wrote:

> * Alexey Dobriyan <adobriyan at> wrote:
>> On Fri, Mar 13, 2009 at 02:01:50PM -0700, Linus Torvalds wrote:
>>> On Fri, 13 Mar 2009, Alexey Dobriyan wrote:
>>>>> Let's face it, we're not going to _ever_ checkpoint any
>>>>> kind of general case process. Just TCP makes that
>>>>> fundamentally impossible in the general case, and there
>>>>> are lots and lots of other cases too (just something as
>>>>> totally _trivial_ as all the files in the filesystem
>>>>> that don't get rolled back).
>>>> What do you mean here? Unlinked files?
>>> Or modified files, or anything else. "External state" is a
>>> pretty damn wide net. It's not just TCP sequence numbers and
>>> another machine.
>> I think (I think) you're seriously underestimating what's
>> doable with kernel C/R and what's already done.
>> I was told (haven't seen it myself) that Oracle installations
>> and Counter Strike servers were moved between boxes just fine.
>> They were run in specially prepared environment of course, but
>> still.
> That's the kind of stuff i'd like to see happen.
>
> Right now the main 'enterprise' approach to do
> migration/consolidation of server contexts is based on hardware
> virtualization - but that pushes runtime overhead to the native
> kernel and slows down the guest context as well - massively so.
>
> Before we've blinked twice it will be a 'required' enterprise
> feature and enterprise people will measure/benchmark Linux
> server performance in guest context primarily and we'll have a
> deep performance pit to dig ourselves out of.
>
> We can ignore that trend as uninteresting (it is uninteresting
> in a number of ways because it is partly driven by stupidity),
> or we can do something about it while still advancing the
> kernel.
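To make the "external state" objection above concrete, here is a toy sketch (mine, not from the thread) of the filesystem half of the problem: a checkpoint can snapshot a process's in-memory state, but the files the process has already written do not get rolled back on restore, so memory and disk disagree. The variable names and the log-file setup are illustrative, not part of any real C/R implementation.

```python
# Toy model of the "external state" problem: a checkpoint captures a
# process's in-memory state, but not the files it has already modified.
import copy
import os
import tempfile

# Hypothetical "process": an in-memory counter plus an append-only log file.
log_path = os.path.join(tempfile.mkdtemp(), "app.log")
state = {"records_written": 0}

def write_record(state, path):
    """Append one record to the log and bump the in-memory counter."""
    with open(path, "a") as f:
        f.write("record %d\n" % state["records_written"])
    state["records_written"] += 1

write_record(state, log_path)
checkpoint = copy.deepcopy(state)   # snapshot of memory only
write_record(state, log_path)       # more work happens after the checkpoint

state = copy.deepcopy(checkpoint)   # "restore": memory rolls back...
with open(log_path) as f:
    on_disk = len(f.readlines())    # ...but the file on disk does not

# Restored memory claims 1 record exists; the filesystem holds 2.
print(state["records_written"], on_disk)
```

The same mismatch shows up with TCP: the peer's view of sequence numbers lives outside the checkpointed process, which is why the general case is argued to be impossible without a specially prepared environment.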

I'd tend to echo these comments. I don't think you can overstate
how many workloads are stuck in VMs (or under consideration for such)
mainly in order to containerize them and make them mobile. Right now
VMs are the only hammer, so every virtualization scenario looks like
a nail. As an extreme example, some of the National Labs are
experimenting with VMs to checkpoint long-running jobs, or to
live-migrate part of a job off a machine that is throwing hardware
errors and will soon fail. They're trying this approach even though
VMs can add significant overhead in the I/O path, which is typically
considered the third rail in HPC.

KVM is a step in the right direction, because we can now co-locate
some number of VMs with a native workload, but the OpenVZ guys have
shown that you can achieve much higher densities with an
OS-virtualization container approach.


More information about the Containers mailing list