[Fuego] status update

Daniel Sangorrin daniel.sangorrin at toshiba.co.jp
Tue Jun 27 06:04:17 UTC 2017


Hi all,

I've been working on the parsing code. Here is a list of things that I managed to complete
and things that need some discussion:

Completed tasks:
- Output a single run.json file for each job's run/build that captures both the metadata and the test results.
- Merge run.json files for a test_suite into a single results.json file that flot can visualize.
   + I have added concurrency locks to protect results.json from concurrent writes (see the sketch after this list).
- Add HTML output support to flot (similar to the one in AGL JTA, though not identical yet).
- Fixed the test Functional.tiff (AGL test) and confirmed that it works on Docker and the BeagleBone Black.
   + There are many more AGL tests to fix.
- Fixed several bugs that occurred when a test fails in an unexpected way.
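
By the way, the locking is nothing sophisticated. Here is a minimal sketch of the idea (the paths,
function name and JSON layout below are placeholders, not the exact ones in the parser code):

    # Sketch: merge one run.json into the test's results.json under an
    # exclusive lock, so that concurrent Jenkins builds cannot corrupt it.
    import fcntl
    import json
    import os

    def merge_run(results_path, run_path):
        with open(run_path) as f:
            run = json.load(f)
        # Open read/write, creating results.json on the first run
        fd = os.open(results_path, os.O_RDWR | os.O_CREAT)
        with os.fdopen(fd, 'r+') as f:
            fcntl.flock(f, fcntl.LOCK_EX)    # blocks until the lock is free
            try:
                data = f.read()
                results = json.loads(data) if data else []
                results.append(run)          # one entry per run/build
                f.seek(0)
                f.truncate()
                json.dump(results, f, indent=4)
            finally:
                fcntl.flock(f, fcntl.LOCK_UN)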

Discarded tasks:
- Output a jUnit XML file so that the "Test Results Analyzer" plugin can display it.
  + This is working, but it isn't as flexible as I'd like. The new HTML output support that I added to flot should supersede it.
- Ability to download a PNG/SVG from the flot plugin directly.
  + I managed to get this working by using the canvas2image library or the canvas.toDataURL interface. Unfortunately, flot doesn't store the axes' information in the canvas, so only the plotting area is saved. There is a library to accomplish this task [1], but there seems to be a version mismatch with the JavaScript libraries in Fuego and it didn't work. I decided to postpone this.

Pending tasks:
- Add report functionality
   + I have removed the generation of plot.png files, partly because I want to do that directly from the
       flot plugin, and partly because I think it is more useful to integrate it into the future ftc report command.
- Add more information to the run.json files.
   + I am trying to produce a schema that is very close to the one used in Kernel.CI. I can probably make it compatible.
   + There is some information that needs to be added to Fuego. Unfortunately, I will probably have to fix a lot of files:
        1) Each test_case (remember: test_suite > test_set > test_case) should be able to store a list of
             measurements. Each measurement would consist of a name, value, units and duration, and maybe more,
             such as error messages specific to the test_case or expected values (e.g. expected to fail, or expected to
             be greater than 50).
        2) Each test_case should have a status (PASS, FAIL, ...) that is not necessarily the same as the
             test_set/test_suite status. A rough sketch of this structure follows at the end of this list.
   + Add vcs_commit information (git or tarball information)
- Handle failed runs better. Sometimes a test fails very early, before it has even been built.
- I am not sure what to do with the "reference.log" files
   + Currently they are used to store thresholds, but these are probably board dependent.
   + This is probably related to the discussion with Rafael about parameterized builds. We should
       be able to define the threshold for a specific test_case's measurement.
- Remove testplans?
   + I was thinking that we could replace testplans with custom scripts that call ftc add-jobs.
- Create a staging folder for tests that do not work or files that are not used.
   + Or maybe at least list them on the wiki.
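
To make 1) and 2) above more concrete, this is roughly the shape I have in mind for the per-test_case
data inside run.json (the field names and example values are tentative, not the final schema):

    {
        "test_sets": [
            {
                "name": "default",
                "status": "PASS",
                "test_cases": [
                    {
                        "name": "write_throughput",
                        "status": "PASS",
                        "measurements": [
                            {
                                "name": "rate",
                                "value": 53.2,
                                "units": "MB/s",
                                "duration": 12.5,
                                "expected": "greater than 50"
                            }
                        ]
                    }
                ]
            }
        ]
    }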

Thanks,
Daniel

[1] https://github.com/markrcote/flot-axislabels



