[Fuego] Status of LTP parser - populating run.json

Daniel Sangorrin daniel.sangorrin at toshiba.co.jp
Thu Aug 10 01:41:52 UTC 2017



> -----Original Message-----
> From: Bird, Timothy [mailto:Tim.Bird at sony.com]
> Sent: Thursday, August 10, 2017 10:11 AM
> To: Daniel Sangorrin; fuego at lists.linuxfoundation.org
> Subject: RE: Status of LTP parser - populating run.json
> 
> 
> 
> > -----Original Message-----
> > From: Daniel Sangorrin on Wednesday, August 09, 2017 5:49 PM
> > > -----Original Message-----
> > > From: Bird, Timothy [mailto:Tim.Bird at sony.com]
> > > I've been doing some testing with LTP and the new parser. I'm only seeing
> > > one result show up in the run.json file.  This is due to a bug in the
> > > process_data routine.  When I made it backwards compatible with the other
> > > functional tests, I broke it for LTP.
> > >
> >
> > Don't worry, it was already broken by my changes and was waiting for an
> > upgrade.
> >
> > > I think it would be better to move to the LTP parser.py program calling
> > > plib.process(), rather than process_data.  Currently, only old-style
> > > functional tests call that routine, with a single value provided by
> > > generic-parser.py, and old-style benchmark tests, with values that can
> > > be made into measurements.
> >
> > Yes, definitely.
> 
> It's fixed in my 'next' branch - at least LTP and other tests work for me
> (populating run.json with testcase status).
> 
> ...
> 
> >
> > Thanks for the fixes.
> > Now that the parser seems to work for many tests, it would be good to
> > upgrade the LTP test to support the new parser natively. The script already
> > uses "test_category", which will become "test_set", and "test_case", which
> > has a direct equivalent. I hope it will not be too much work.
> 
> I didn't change the variable names in LTP's parser.py, but I think it
> does what we want now.  Actually, looking at it just now, I'm fine leaving
> it as "test_category".  I'm not sure exactly what the LTP nomenclature
> for the thing is  - I've seen the wording "test scenario" used.
> 
> LTP is a bit weird: it has multiple tests defined in runtests, which can call
> lots of sub-tests.  I found many different test scenarios (if that's the right phrase)
> that call syscall-related test programs.  So these things can be mixed and matched.
> 
> I thought about adding the sub-test results as "measures" under the test cases,
> but I'm holding off until I make more progress on the report generation.

I think they are fine as test_cases. You are probably looking for nested results;
I already implemented a prototype of that, and it was too complicated. The best solution
I found was to collapse the hierarchy into the test_set name through a naming convention.

For example: results["category1.subcategory2.subsubcategory3"] = "PASS"
This can be represented with a test_set called "category1.subcategory2" and a test_case called "subsubcategory3".
Just make sure that, by convention, the individual category and test_case names do not
contain dots themselves, so the split is unambiguous.
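
A minimal sketch of how a parser could apply that convention, assuming a results
dictionary like the one above (the split_result helper is just illustrative, not
the actual Fuego parser API):

    # Illustrative only: split a dotted result key into (test_set, test_case)
    # following the convention above; not actual Fuego parser code.
    def split_result(key):
        test_set, _, test_case = key.rpartition('.')
        return test_set or 'default', test_case

    results = {"category1.subcategory2.subsubcategory3": "PASS"}
    for key, status in results.items():
        test_set, test_case = split_result(key)
        # -> test_set="category1.subcategory2", test_case="subsubcategory3"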

> 
> As I'm working through things, there are a few items I might like to propose
> for the schema, but overall things seem to be working well.
>  -- Tim

Oh I didn't notice the latest commits. Thanks!

By the way, there are no reference.json or criteria.json files yet.
Do you think we should add them?

- The reference.json is useful for knowing which tests were skipped, for example
because the result was CONF/BROK or because they were in the skiplist.
- The criteria.json would probably need to be customized for each board,
but a default one may be useful as well (see the rough sketch below).
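
To make the criteria.json idea concrete, here is a rough sketch of what a default
file could contain, written as the Python that would generate it (the tguid,
max_fail and min_pass field names are made up for illustration, not a proposed schema):

    # Hypothetical sketch only: generate a default criteria.json.
    # The field names (tguid, max_fail, min_pass) are illustrative, not a schema.
    import json

    criteria = {
        "criteria": [
            {"tguid": "LTP.syscalls", "max_fail": 0},
            {"tguid": "LTP.math", "min_pass": 10},
        ]
    }

    with open("criteria.json", "w") as f:
        json.dump(criteria, f, indent=4)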

Other than that, I think there should be a way in the schema to add arbitrary new
data to an entry. With that, we could also add test-dependent information, such as the
LTP errtype value or the messages generated by test cases that ended up as
INFO/WARN/CONF/BROK, for example. In the current schema we have "attachments",
but this is not enough.
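
For instance, a free-form field on a test_case entry could carry that kind of
information (the "extra_data" name and layout below are just a strawman, not part
of the current schema):

    # Strawman only: an arbitrary per-test_case field for test-dependent data.
    # Neither "extra_data" nor its layout is part of the current run.json schema.
    test_case_entry = {
        "name": "creat05",
        "status": "SKIP",
        "extra_data": {
            "errtype": "TCONF",
            "messages": "<raw CONF/BROK output from the LTP log>",
        },
    }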

Thanks,
Daniel
