[Fuego] benchmark metrics changes (was RE: [PATCH] tests.info: reboot has only a time plot at the moment)

Bird, Timothy Tim.Bird at am.sony.com
Thu Nov 17 05:10:21 UTC 2016


> -----Original Message-----
> From: Daniel Sangorrin on Tuesday, October 18, 2016 7:07 PM
> 
> The memory and disk plots are not available so remove them
> until implemented.
> TODO: why are there two tests.info files?

OK, I don't know why there were two tests.info files, but from what
I could see it was a bit of a mess in there.

I did a bunch of research on how tests.info is used, and where the
plotting is done in Fuego, in order to understand how to fix this.
I have put my notes about parsing-related files (including tests.info)
on http://bird.org/fuego/Benchmark_parser_notes

Here are some interesting tidbits: we actually have two completely
different plotting systems in Fuego. parser.py uses matplotlib (a Python
plotting module) to generate the static plot.png that is linked to from
the Benchmark status page.  A Jenkins plugin called 'flot' (authored by Cogent,
and using the 'flot' jQuery plotting module) is used to produce
the dynamic plot that appears on the status page itself.
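
For anyone not familiar with matplotlib, the static half boils down to
something like the following (a minimal sketch, not parser.py's actual
code; the build numbers, metric values, and labels are all made up):

    # Minimal sketch of the static-plot approach: render a series of
    # metric values to plot.png, with no display attached.
    import matplotlib
    matplotlib.use('Agg')   # file-only backend; no X server needed
    import matplotlib.pyplot as plt

    # Hypothetical data: one metric value per build number
    builds = [1, 2, 3, 4, 5]
    values = [210.0, 215.3, 208.9, 220.1, 218.7]

    plt.plot(builds, values, marker='o')
    plt.xlabel('build number')
    plt.ylabel('metric value')
    plt.title('Benchmark metric over time')
    plt.savefig('plot.png')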

I'm not sure why there are two different plotting systems, but the matplotlib
(plot.png) one doesn't seem to be configured right.  As far as I can see, it
only shows data for one run, and its zoom factor is way off.

It's the 'flot' system that uses tests.info, and its structure is a bit weird.
I decided to break apart the tests.info file, and have changed to a system where the
information that was collected into tests.info now appears in each test's
main directory, as 'metrics.json'.  This makes each test more
independent of the system.  The dataload.py program, which is used to
populate the files used by the flot system (which appear under /userdata/logs,
by the way), has been modified to copy metrics.json to the appropriate
directory at runtime.  Everything should be backwards compatible with tests
that exist outside the fuego-core directory and don't have a metrics.json file.
(I'm not sure how many of these there are, but AGL-CIAT probably has several.)
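
In case it helps review, the copy step amounts to roughly this (a sketch
only; the real dataload.py code and path layout may differ, and the
function name and directory arguments here are just illustrative):

    # Sketch of the metrics.json copy step, with a backward-compatible
    # fallback for tests that don't ship the file.  Paths and names are
    # illustrative, not dataload.py's actual interface.
    import os
    import shutil

    def copy_metrics(test_dir, log_dir):
        """Copy a test's metrics.json into the flot data directory,
        if the test provides one; otherwise do nothing, so that
        out-of-tree tests without metrics.json keep working."""
        src = os.path.join(test_dir, 'metrics.json')
        if os.path.exists(src):
            shutil.copy(src, os.path.join(log_dir, 'metrics.json'))
        # else: no metrics.json - an older out-of-tree test; skip it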

This helps move us in the direction of having all metadata for a test self-contained
in a single per-test directory, instead of scattered around the system as it is now.

This is all in the 'next' branches for fuego and fuego-core on bitbucket.org/tbird20d,
and my plan is to roll it out in the next release, whenever that is.

Dmitry - I hope I haven't broken anything.  In case there's some need for it
that I'm missing, please tell me why tests.info was structured to hold the
metric names for all the Benchmark tests in a single file, when only one test
is plotted at a time.
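
For anyone following along, the structural difference I'm asking about is
roughly this (hypothetical contents, just to show the two shapes; the test
and metric names below are made up):

    # Old scheme: one shared tests.info-style table listing the metric
    # names for every Benchmark test in the system (hypothetical).
    tests_info = {
        "Benchmark.Dhrystone": ["Dhrystone"],
        "Benchmark.bonnie": ["read", "write"],
    }

    # New scheme: each test directory carries only its own metrics.json;
    # e.g. the file for Dhrystone would hold just (hypothetical):
    dhrystone_metrics = {"Benchmark.Dhrystone": ["Dhrystone"]}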

Thanks,
 -- Tim

P.S. This took a bit longer than expected: before I could make the modifications
for this change, I had to learn about Jenkins plugins, JavaScript, jQuery, and
the flot plotting module.  There was a bit more to learn than I anticipated.



