[Fuego] benchmark metrics changes (was RE: [PATCH] tests.info: reboot has only a time plot at the moment)
daniel.sangorrin at toshiba.co.jp
Thu Nov 17 08:56:16 UTC 2016
> -----Original Message-----
> From: Bird, Timothy [mailto:Tim.Bird at am.sony.com]
> > -----Original Message-----
> > From: Daniel Sangorrin on Tuesday, October 18, 2016 7:07 PM
> > The memory and disk plots are not available so remove them
> > until implemented.
> > TODO: why are there two tests.info files?
> OK, I don't know why there were two tests.info files, but from what
> I could see it was a bit of a mess in there.
> I did a bunch of research on how tests.info is used, and where the
> plotting is done in Fuego, in order to understand how to fix this.
> I have put my notes about parsing-related files (including tests.info)
> on http://bird.org/fuego/Benchmark_parser_notes
> Here are some interesting tidbits - we actually have two completely
> different plotting systems in fuego - parser.py uses matplotlib (a python
> plotting module) to generate the static plot.png that is linked to from
> the Benchmark status page. A Jenkins plugin called 'flot' (authored by Cogent,
> and using the 'flot' jquery plotting module) is used to produce
> the dynamic plot that appears on the status page itself.
> I'm not sure why there are two different plotting systems, but the matplotlib
> (plot.png) one doesn't seem configured right. It only shows data for one run,
> that I can see, and its zoom factor is way off.
Actually, if you try the "reboot" test with my patch, the plot.png
should be working. Please let me know otherwise.
Dhrystone was also working if I remember correctly.
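For anyone following along, the static-plot side is roughly this pattern. This is a minimal sketch, not Fuego's actual parser.py code; the function name and the sample data are made up:

```python
# Minimal sketch of a parser.py-style static plot, using the "Agg"
# backend so it works headless on a build server. Hypothetical example,
# not Fuego's real code.
import matplotlib
matplotlib.use("Agg")  # select backend before importing pyplot
import matplotlib.pyplot as plt

def save_metric_plot(build_numbers, values, metric, outfile="plot.png"):
    fig, ax = plt.subplots()
    ax.plot(build_numbers, values, marker="o")
    ax.set_xlabel("build number")
    ax.set_ylabel(metric)
    ax.set_title("Benchmark results: %s" % metric)
    fig.savefig(outfile)
    plt.close(fig)

# made-up data for three builds of a Dhrystone-like benchmark
save_metric_plot([1, 2, 3], [102.4, 101.9, 103.1], "Dhrystone.score")
```

The zoom problem Tim mentions is usually a matter of setting the axis limits explicitly (e.g. ax.set_ylim) instead of relying on autoscaling.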
> It's the 'flot' system that uses tests.info, and it's structured a bit weird.
> I decided to break apart the test.info file, and have changed to a system where the
> information that was collected into tests.info now appears in each test's
> main directory, as 'metrics.json'. This makes it so that each test is more
> independent of the system. The dataload.py program, which is used to
> populate the files used by the flot system (which appear under /userdata/logs,
> by the way), has been modified to copy metrics.json to the appropriate
> directory, at runtime. Everything should be backwards compatible with tests
> that exist outside the fuego-core directory and don't have a metrics.json file.
> (I'm not sure how many of these there are, but AGL-CIAT probably has several).
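To make the new layout concrete, here is my guess at what a per-test metrics.json and the dataload.py copy step could look like. The file paths, schema and field names below are assumptions on my part, not what Tim's 'next' branch actually defines:

```python
import json
import os
import shutil

# Hypothetical per-test metrics description; the real metrics.json
# schema is whatever the 'next' branch defines.
metrics = {"test": "Benchmark.Dhrystone", "metrics": ["Dhrystone.score"]}

# Assumed per-test directory layout (local paths for illustration)
testdir = "fuego-core/engine/tests/Benchmark.Dhrystone"
os.makedirs(testdir, exist_ok=True)
with open(os.path.join(testdir, "metrics.json"), "w") as f:
    json.dump(metrics, f, indent=2)

# dataload.py-style step: copy metrics.json next to the flot data
# (the real location is under /userdata/logs; a local dir is used here)
logdir = "userdata/logs/Benchmark.Dhrystone"
os.makedirs(logdir, exist_ok=True)
shutil.copy(os.path.join(testdir, "metrics.json"), logdir)
```

The backwards-compatibility case would then just be: if no metrics.json exists in the test directory, skip the copy and fall back to the old behavior.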
From a first glance at the AGL-JTA-core repository, they have a "common" folder
that includes almost the same tests as Fuego (some tests are new and some are missing).
Most of them have unmodified parser.py files (apart from the JTA-Fuego renaming).
However, I noticed that each test also has a "create_xml_testname.py" file. From a quick
read, it looks like these scripts parse the logs into XML files, while also saving information
such as timing or the build number. I haven't checked yet, but my guess is that the XML files
are later converted into HTML so they can be read from the Jenkins interface.
(I will check what the XML format looks like next week and report back.)
By the way, most of these "create_xml_testname.py" files have lots of code in common,
with just a few lines modified. They should probably be re-architected to share that code.
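If it helps the discussion, the pattern those scripts seem to follow could be factored into one shared helper, roughly like this. The function name, fields and XML layout are guesses from my quick read, not the actual AGL-JTA-core format:

```python
import xml.etree.ElementTree as ET

# Hypothetical shared helper; the real create_xml_<testname>.py scripts
# each carry their own copy of logic along these lines.
def log_to_xml(test_name, build_number, results, outfile):
    """Write parsed benchmark results to an XML file.

    results: list of (metric, value) pairs extracted from the test log.
    """
    root = ET.Element("test", name=test_name, build=str(build_number))
    for metric, value in results:
        elem = ET.SubElement(root, "metric", name=metric)
        elem.text = str(value)
    ET.ElementTree(root).write(outfile)

# Example: results as they might be parsed from a Dhrystone log
log_to_xml("Dhrystone", 42, [("Dhrystone.score", 103.1)], "Dhrystone.xml")
```

With a helper like this, each per-test script would only need to supply its own log-parsing code, which is exactly the part that differs between them.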
> This helps move us in the direction of having all meta-data for a test self-contained
> in a single per-test directory - instead of scattered around the system like it is now.
> This is all in the 'next' branches for fuego and fuego-core on bitbucket.org/tbird20d,
> and my plan is to roll it out in the next release, whenever that is.
> Dmitry - I hope I haven't broken anything. Please tell me why tests.info was
> structured to have all the metric names in a single file for all the Benchmark tests, when
> only one test is plotted at a time, in case there's some need for this that I'm missing.
> -- Tim
> P.S. This took a bit longer than expected, as I had to learn about Jenkins plugins,
> for this change. There was a bit more to learn than I expected.