[cgl_discussion] CGL benchmark
Timothy D. Witham
wookie at osdl.org
Fri Jul 2 08:27:51 PDT 2004
On Fri, 2004-07-02 at 08:05, Eric.Chacron at alcatel.fr wrote:
> During the last Paris f2f meeting we briefly discussed the pros
> and cons of including (and first designing) a benchmark in CGL Linux
I'm not sure it has to be in the distribution. I feel that
if it comes from a common site then there are advantages. Not that
I believe anybody would tweak their version of the test,
but things like that have happened.
> As a TEM member I see a lot of advantages:
> - better visibility of performance from release to release, from
> architecture to architecture, from vendor to vendor, from distro to
> distro
That is true if you can assure that it is the same test. For that
reason I think that any code for the test should be covered under
the Artistic License. The advantage of this is that if somebody
makes a functional change to the code they will no longer be able
to call it whatever name you folks come up with. So if somebody
alters the test to their advantage and publishes results under the
altered test, then we would have legal recourse to get them to
retract their claims.
> - better control of product behavior, using the benchmark to detect
> defects caused by wrong configuration, options, interaction with
> drivers, priority inversion problems ...
> For a more pragmatic view of the problem, we'd need to define the
> benchmark content with non-trivial measurements included. Let's try
> to have simple metrics, but more basic than SPECint, for instance.
This is the most important part, because you are going to get folks
maximizing for this test, and if it really doesn't reflect the way
you deploy your applications then it could hurt your real-world
performance. And I have to say that the only people who know what the
performance metrics should be are the TEMs. I actually think that
this would be a good SIG and the start of a project.
> For example, I'd like to be able to measure maximum latency, and it
> is not trivial to have a benchmark measuring it as it could occur in
> a telco application context...
> OSDL CGL specifies that carrier grade Linux shall be delivered with
> a standard benchmark enabling measurement of the target product's
> performance at a glance. Both hardware and Linux system software
> capacities should be measured.
> The benchmark shall provide applications with different metrics based on
> application profiles.
> It shall include the following subsets:
> - Processing: CPU, memory access, thread context switch, system
> call overhead, available memory, ...
> - Real-time: interrupt and scheduling latency, timer resolution /
> jiffy, ...
> - Communication/network: local socket communication, IP forwarding,
> physical network latency, communication service (shall be
> configurable) latency / bandwidth and CPU overhead.
> - Storage / file system / mirroring: local disk access, file system
> local access, file system NFS access, mirroring overhead.
> UP and SMP configurations should be specifically handled.
> OSDL CGL specifies that carrier grade Linux shall enable benchmark
> publication and performance comparison of supported protocols.
> At least the following protocols shall be analyzed and compared:
> · Physical layers: Ethernet 100 BT / Gigabit Ethernet.
> · Network: IP and IPsec.
> · Transport: TCP and SCTP.
> UP and SMP configurations should be used.
> Analysis should include: message transfer latency and generated CPU
> load. Message size and local / remote location of addresses should
> be studied, and IP performance should take the forwarding route
> table size into account.
Timothy D. Witham - Chief Technology Officer - wookie at osdl.org
Open Source Development Lab Inc - A non-profit corporation
12725 SW Millikan Way - Suite 400 - Beaverton OR, 97005
(503)-626-2455 x11 (office) (503)-702-2871 (cell)