[cgl_discussion] CGL benchmark
Eric.Chacron at alcatel.fr
Fri Jul 9 07:53:32 PDT 2004
>I'm not sure if it has to be in the distribution. I feel that
>if it comes from a common site then there are advantages. Not that
>I believe that anybody would tweak their version of the test
>but things like that have happened.
The distribution could be the right place, in order to simplify things.
The benchmark is a piece of software like the others.
So why not have it included in the distro?
>This is the most important part because you are going to get folks
>maximizing for this test and if it really doesn't reflect the way
>you deploy your applications then it could hurt your real world performance.
I understand your point, but what are they going to optimize today
if we don't put anything up as a target?
>And I have to say that the only people who know what the performance
>metrics should be are the TEM's.
We know what we need, but the OS developers know better which parts they
have optimized, and the trade-off between the generic layers (I mean the
VFS and other generic abstraction layers) and performance.
The benchmarks must reflect every sensitive Operating System feature
(context switch, stack performance ...). They could then be configured
to be more sensitive to a specific domain (like real-time or storage).
>I actually think that this would be
>a good sig and the start of a project.
I'm fine with that. Who's the volunteer for starting this sig ?
"Timothy D. Witham" <wookie at osdl.org>@lists.osdl.org on 07/02/2004 05:27:51
Sent by: cgl_discussion-bounces at lists.osdl.org
To: Eric CHACRON/FR/ALCATEL at ALCATEL
cc: cgl_discussion at osdl.org
Subject: Re: [cgl_discussion] CGL benchmark
On Fri, 2004-07-02 at 08:05, Eric.Chacron at alcatel.fr wrote:
> During the last Paris f2f meeting we discussed a little bit the pros
> and cons of including (and designing first) a benchmark in CGL Linux.
I'm not sure if it has to be in the distribution. I feel that
if it comes from a common site then there are advantages. Not that
I believe that anybody would tweak their version of the test
but things like that have happened.
> As a TEM member i see a lot of advantages:
> - better visibility on performances from release to release and from
> architecture to architecture, from vendor to vendor, from distro to distro.
That is true if you can assure that it is the same test. For that
reason I think that any code for the test should be covered under
the Artistic License. The advantage of this is that if somebody
makes a functional change to the code they will no longer be able
to call it whatever name you folks come up with. So if somebody
alters the test to their advantage and publishes results under this
altered test, then we would have legal recourse to get them to retract
their results.
> - better control of product behavior using benchmark to detect
> defects caused by wrong configuration, options, interaction with
> priority inversion problems ...
> For a more pragmatic view of the problem, we'd need to define the
> benchmark content with non-trivial measurements to be included. Let's
> try to have simple metrics, but more basic than SPECint for instance.
This is the most important part because you are going to get folks
maximizing for this test and if it really doesn't reflect the way
you deploy your applications then it could hurt your real world performance.
And I have to say that the only people who know what the performance
metrics should be are the TEM's. I actually think that this would be
a good sig and the start of a project.
> For example, I'd like to be able to measure maximum latency, and it is
> not trivial to have a bench measuring it as it could happen in a telco
> application context...
> OSDL CGL specifies that carrier grade Linux shall be delivered with a
> standard benchmark enabling measurement of the target product's performance
> at a glance. Both hardware and Linux system software capacities should be
> measured.
> The benchmark shall provide applications with different metrics based on
> application profiles.
> It shall include following subsets:
> - Processing: CPU, memory access, thread context switch, system call
> overhead, available memory, ...
> - Real-time: interrupt and scheduling latency, timer resolution,
> ...
> - Communication/network: local socket communication, IP forwarding,
> physical network latency, communication service ( shall be configurable)
> latency / bandwidth and CPU overhead.
> - Storage / file system / mirroring: local disk access, file system
> local access, file system NFS access, mirroring overhead.
> UP and SMP configurations should be specifically handled.
> OSDL CGL specifies that carrier grade Linux shall enable benchmark
> publication and performance comparison of supported protocols.
> At least following protocols shall be analyzed and compared:
> · Physical layers: Ethernet 100 BT / Gigabit Ethernet.
> · Network: IP and IPsec.
> · Transport: TCP and SCTP.
> UP and SMP configurations should be used.
> Analysis should include: message transfer latency, generated CPU load.
> Message size and local / remote location of addresses should be studied,
> performance should take the forwarding route table size into account.
> cgl_discussion mailing list
> cgl_discussion at lists.osdl.org
Timothy D. Witham - Chief Technology Officer - wookie at osdl.org
Open Source Development Lab Inc - A non-profit corporation
12725 SW Millikan Way - Suite 400 - Beaverton OR, 97005
(503)-626-2455 x11 (office) (503)-702-2871 (cell)