[Fuego] Integrating Linaro test-definitions

Tim.Bird at sony.com Tim.Bird at sony.com
Wed Feb 13 06:16:37 UTC 2019



> -----Original Message-----
> From: daniel.sangorrin at toshiba.co.jp
> 
> Hello Fuegonians,
> 
> I am planning to add support for the Linaro test-definitions in Fuego.
> https://github.com/Linaro/test-definitions/

Very nice.  Thanks.

> 
> Currently, I am thinking of a very simple approach that just calls Linaro's test-
> runner.py on the host and accesses the target over ssh (no serial right now
> without LAVA, but it could be added with serio).
> 
> The arguments to test-runner.py would be something like this:
> 	-o use the run log folder to write the test output
> 	-t forward the timeout set by Fuego here (or don't use it)
> 	-g root@ssh (get this from the board file)
> 	-s (skip installing dependencies if the target is not Debian/CentOS;
> this will require a check on the target)
> 	-e (skip collecting information about the environment; this is already
> done by Fuego, although we don't save the list of packages on the target)
> 	-d the test to run
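
For concreteness, an invocation along those lines might look something like
this (paths, flags and board values are purely illustrative, and I'm citing
test-runner.py's location in the repository from memory):

    # illustrative only: run one Linaro test over ssh from the Fuego host
    ./automated/utils/test-runner.py -s -e -t 600 \
        -o "$LOGDIR" -g root@192.168.1.10 -d ./automated/linux/smoke/
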
> 
> Once we have the results in our log folder, we just need to parse them. That
> is very easy because all Linaro test results are written in a very simple,
> easy-to-parse format. For that reason, I can create a single parser for all tests.
> [Note] Linaro does parsing on the target, but the dependencies aren't huge
> (grep, awk, tee).
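
If I remember the format right, each test leaves a result.txt with one
"test-case-id result [measurement [units]]" line per test case, so a single
generic parser really can be tiny.  A sketch, assuming that layout holds:

    # sketch: collect (test-case, result) pairs from every result.txt
    # under the run log folder (file layout assumed, not verified)
    for f in "$LOGDIR"/*/result.txt; do
        awk '{ print $1, $2 }' "$f"
    done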

Tim Orling gave a presentation at ELC Europe about upstream patches for
a few board-side tests or test frameworks, which added command-line options
to have them emit LAVA-friendly output.  If Fuego supports such output
with our parser, then his work automatically benefits us.

I wanted to circle back to Tim and talk to him about maybe making this
support generic, once we decided on a standardized output.  But that
really doesn't matter, as long as we have a converter from LAVA-friendly
output to some standard.
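
For example, if board-side tests emit LAVA test-case signals (I'm writing
the signal syntax from memory, so treat it as illustrative), such a
converter could be little more than:

    # sketch: reduce LAVA test-case signal lines to "name result" pairs;
    # $logfile is a placeholder for the board-side log
    sed -n \
        's/.*<LAVA_SIGNAL_TESTCASE TEST_CASE_ID=\([^ ]*\) RESULT=\([^>]*\)>.*/\1 \2/p' \
        "$logfile"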

> 
> Tim: I see that your presentation "Harmonizing Open Source Definitions" at
> Linaro Connect has been accepted. I wonder if you already have some plans
> for it. If you have no objections, I will prepare a patch. It should be quite
> simple; Linaro test-definitions work great and are very simple to understand.
> https://connect.linaro.org/schedule/

No - I haven't finished my analysis yet.  It's on my short list to get ready for
the event.  I already have a few ideas, but didn't expect there to be an
implementation or prototype yet.  It will be very helpful to see how this
"bridge" works out in practice, and to find out what the issues are for sharing
tests between the two frameworks.

> One cool thing about Linaro test-definitions is that they work locally (just run
> the script), remotely (over ssh), and on LAVA (without requiring test
> developers to have any LAVA knowledge).
> 
> There are also some caveats. Some tests are hard to use on Yocto-like
> filesystems because they use git clone or require a native compiler on
> the target. But this should be easy to detect in Fuego by doing some
> prechecks on the board and aborting that test.

Yes.  LAVA makes a very different set of assumptions about what the
board is capable of, or what can be installed.  In Fuego we mostly
use dependencies to abort the test. Other systems use dependencies
to indicate what to install at test execution time.
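
For a Linaro test that does a git clone or compiles on the target, the
Fuego-side precheck could be as simple as this sketch (test_pre_check is
the usual Fuego phase function; the programs checked are just examples):

    # sketch: abort the job early if the target lacks what the test needs
    function test_pre_check {
        assert_has_program git
        assert_has_program gcc
    }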

> Also, it would be nice if the
> tests were divided into predefined functions (e.g.: parse) so that we can skip
> them when we prefer to do that processing on the host side. I guess this will
> need discussion with Linaro test developers.
> 
> Now doing the opposite, that is reusing Fuego tests on a Linaro setup, would
> take a bit more effort. Something like this:
> - add Fuego functions to Linaro's sh-test-lib but modify them to run in a local
> environment or to reuse functions existing in sh-test-lib
> 	- report
> 	- log_compare
> 	- is_on_target
> 	- assert_define
> 	- assert_has_program <-- could be changed to install_deps
> 	- put
> 	- get
> 	- etc..
> - create a run.sh that concatenates various scripts
> 	- export Linaro parameters as Fuego environment variables
> 	- source fuego_test.sh <-- the Fuego test with functions for each
> phase
> 	- call test_run, test_processing.. (like main.sh does)
> 	- add python dependencies for running the parser on the target
> - parser.py
> 	- deploy it together with the parser library (plib)
> 	- get the results as pass/fail/skip and a metric (Linaro only supports
> one metric)
> [Note] an alternative is to substitute grep/awk for the parser, as in the
> current Linaro test-definitions

It would be good to harmonize this with Li Xiaming's work to package up a
Fuego test as a LAVA job.  I think he had some of the same ideas, and
has prototyped some of this already.
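
For reference, here is a rough sketch of that run.sh glue, with every name
and path illustrative rather than settled:

    #!/bin/sh
    # sketch: run a Fuego test from a Linaro-style wrapper script
    . ../../lib/sh-test-lib           # Linaro helpers (report_pass, etc.)
    export BOARD_TESTDIR=/tmp/tests   # Linaro parameters mapped to Fuego names
    . ./fuego_test.sh                 # the Fuego test, one function per phase
    test_run                          # call the phases the way main.sh does
    test_processing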

> 
> I think that modularizing Fuego would make the above easier, and it would
> also help with reusing Fuego components in other frameworks. Alternatively, we
> could just contribute to Linaro test-definitions and use them directly.

The test standards work is hopefully leading us to unite around a format
that will work on multiple platforms.  I hate to say "universal", because there
really are different core assumptions made by the different frameworks.
But I do think that with nudges here and there we can at least share some
tests between the different frameworks.

So I very much welcome this work and look forward to seeing it.
 -- Tim


