[Fuego] Discussion about Fuego unified results format

Bird, Timothy Tim.Bird at sony.com
Wed Apr 26 22:51:41 UTC 2017



> -----Original Message-----
> From: Kevin on Thursday, April 20, 2017 11:13 AM
> "Bird, Timothy" <Tim.Bird at sony.com> writes:
> 
> > Based on discussion in our call yesterday, I have a few notes I'd like to
> make.
> > First - for those not involved in the call, here is some introduction.  We are
> > working, for the 1.2 release, on unifying the results format produced by
> Fuego
> > tests, to make report generation easier, and support (ultimately) multiple
> > report output formats (e.g. HTML, XML, PDF, Excel).  The intent is to
> support
> > all current AGL-JTA reports, previous Fuego reports, and new ones
> envisioned
> > for the system (in the 1.2 release).
> >
> > Notes from the call and about this work are at:
> > http://bird.org/fuego/Unified_Results_Format_Project
> 
> Has anyone looked at the JSON schema we designed for kernelci.org[1]? 
 I looked at it before starting some of the work on our run.json file,
and before I did some of the functional results prototyping work that
was based on Song Cai's recommendations.

However I did not study it in depth.  I planned to revisit it before
finalizing our json formats - in particular to review the rationale
for any differences between Fuego and kernelci schemas.
So far we've been kind of throwing mud at the wall
and seeing what sticks.  My run.json file is mostly based on fields
from Jenkins build.xml files, with some extra stuff I thought was needed.
However, I haven't formally studied the different extant schemas yet.
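To make the discussion concrete, here is a rough sketch of the kind of run.json I have in mind, with fields loosely derived from Jenkins build.xml data. All field names and values here are illustrative guesses on my part, not a finalized format:

```python
import json

# Hypothetical sketch of a Fuego run.json record.
# Every field name below is a guess for illustration, NOT the final schema.
run = {
    "test_name": "Benchmark.hackbench",  # hypothetical test identifier
    "board": "myboard",                  # placeholder target board name
    "build_number": 12,                  # from Jenkins build metadata
    "start_time": 1493246101,            # epoch timestamp of the run
    "duration_ms": 4520,                 # elapsed time reported by Jenkins
    "status": "SUCCESS",                 # overall Jenkins-style result
}
print(json.dumps(run, indent=2))
```

The point is just that the top level is flat, per-run metadata; it says nothing yet about how individual test results inside the run are grouped.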

> I
> wasn't involved in the meetings or call, but from what I can see, one
> important thing missing from your current proposal is how to group
> related tests together.  In the schema we did for kCI, you can have test
> cases grouped into test sets, which in turn can be grouped into a test
> suite.  IMO, this is crucial since most tests come as part of a larger
> suite.
How are these groupings done?  Are they defined statically, outside of any
particular test execution (run), or are they produced dynamically by a
particular run?

Or is this more of a "view" of the database of test results, which are kept
in their own files?

Does an end-user (tester) define these groupings or does the test creator?

There appear to be three levels of groups:
 * test suites
 * test sets
 * test cases

Does a test case represent an individual sub-unit of a test (for functional
tests), or a whole set of value results?  For example, where would LTP's
syscalls kill10 test fit?  I presume LTP would be the suite, syscalls the
set, and kill10 the test case?
(I just have kill10 on my mind since it's been giving me problems.)
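If my guess about the three levels is right, a single result in that hierarchy might look something like the following. To be clear, the field names here are my own guesses for illustration, not the actual kernelci schema:

```python
import json

# Hypothetical three-level grouping: suite -> set -> case.
# Field names are guesses for illustration, NOT the real kernelci schema.
result = {
    "test_suite": "LTP",                 # top-level suite
    "test_sets": [
        {
            "name": "syscalls",          # grouping within the suite
            "test_cases": [
                # an individual functional sub-test and its pass/fail status
                {"name": "kill10", "status": "FAIL"},
            ],
        },
    ],
}
print(json.dumps(result, indent=2))
```

Whether the grouping lives inside the result file like this, or is a separate "view" over flat per-case records, is exactly the question above.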

> 
> The kernelci.org project has prototyped this for several testsuites
> (kselftest, hackbench, lmbench, LTP, etc.) and were pushing JSON results
> using this /test API to our backend for awhile.  But nobody got around
> to writing the parsing, reporting stuff yet.

Do you have any example result files in JSON format you could share?
I'm particularly interested in how the different results from something
simple like hackbench would compare with the complex results of an LTP run.

> 
> All of that to say the kCI JSON schema has been thought through quite a
> bit already, and actually used for several test suites, so I think it
> would be a good idea to start with that and extend it, so combined
> efforts on this JSON test schema could benefit Fuego as well as other
> projects.

I think it would be great to be able to interchange data, as well as tools,
between the projects.  So we'll definitely take a look.
 -- Tim

> [1] https://api.kernelci.org/schema-test.html
>     https://api.kernelci.org/collection-test.html
