[Fuego] Integrating Linaro test-definitions

daniel.sangorrin at toshiba.co.jp
Fri Feb 8 08:08:52 UTC 2019


Hello Fuegonians,

I am planning to add support for the Linaro test-definitions in Fuego.
https://github.com/Linaro/test-definitions/

Currently, I am thinking of a very simple approach that just calls Linaro's test-runner.py on the host and accesses the target over ssh (no serial support for now without LAVA, but it could be added with serio).

The arguments to test-runner.py would be something like this (a sketch of the full invocation follows below):
	-o write the test output into the run's log folder
	-t forward the timeout set by Fuego (or don't use it)
	-g root@<target> (get this from the board file)
	-s (skip installing dependencies when the target is not Debian/CentOS; this will require a check on the target)
	-e (skip collecting information about the environment; Fuego already does this, although we don't save the list of packages on the target)
	-d the test to run
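
For concreteness, here is a sketch of such an invocation. The board address, timeout value, log directory and choice of the smoke test are placeholders, not actual Fuego wiring:

	# Hypothetical invocation from the Fuego host, run from a
	# test-definitions checkout ($LOGDIR stands in for the run's log folder):
	#   -o: write the test output into the run's log folder
	#   -t: timeout forwarded from Fuego
	#   -g: ssh target, taken from the Fuego board file
	#   -s: skip installing dependencies on the target
	#   -e: skip collecting environment information
	#   -d: the test definition to run
	./automated/utils/test-runner.py \
		-o "$LOGDIR/linaro-results" \
		-t 600 \
		-g root@192.168.1.42 \
		-s -e \
		-d ./automated/linux/smoke/smoke.yaml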

Once we have the results in our log folder, we just need to parse them. That is easy because all Linaro test results are written in a simple, uniform format, so I can create a single parser that covers all tests (a sketch follows below).
[Note] Linaro does the parsing on the target, but the dependencies aren't huge (grep, awk, tee).
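
As a rough illustration of such a parser (the result.txt location and the field layout are my assumptions from reading test-runner.py's output and sh-test-lib's report helpers):

	# Sketch of a host-side parser for Linaro result files, where each
	# line is "<test-id> <pass|fail|skip> [<measurement> [<units>]]".
	parse_linaro_results() {
		# $1: directory that test-runner.py wrote its output into
		awk 'NF >= 2 {
			line = $1 ": " $2
			if (NF >= 3) line = line " (" $3 (NF >= 4 ? " " $4 : "") ")"
			print line
		}' "$1/result.txt"
	}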

Tim: I see that your presentation "Harmonizing Open Source Definitions" at Linaro Connect has been accepted. I wonder whether you already have some plans for it. If you have no objections, I will prepare a patch. It should be quite simple; the Linaro test-definitions work great and are very simple to understand.
https://connect.linaro.org/schedule/
One cool thing about the Linaro test-definitions is that they work locally (just run the script), remotely (over ssh) and on LAVA (without requiring LAVA knowledge from the test developers).
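
For example (assuming the repository layout, where each test script sources sh-test-lib via a relative path, so local runs happen from the test's own directory):

	# Locally, on the device itself:
	cd automated/linux/smoke && ./smoke.sh
	# Remotely, driven from the host over ssh (address is a placeholder):
	./automated/utils/test-runner.py -g root@192.168.1.42 \
		-d ./automated/linux/smoke/smoke.yaml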

There are also some caveats. Some tests are hard to use on Yocto-like filesystems because they use git clone or require a native compiler on the target. But this should be easy to detect in Fuego by doing some pre-checks on the board and aborting the test (a sketch follows below). Also, it would be nice if the tests were divided into predefined functions (e.g.: parse) so that we can skip them when we prefer to do that processing on the host side. I guess this will need discussion with the Linaro test developers.
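
As a sketch of what such a pre-check could look like in a Fuego wrapper's fuego_test.sh (test_pre_check is Fuego's existing pre-check phase; the exact program list would depend on the Linaro test being wrapped):

	# Hypothetical pre-check: abort early on targets (e.g. Yocto
	# images) that lack tools the Linaro test expects.
	function test_pre_check {
		assert_has_program git    # some tests git-clone sources on the target
		assert_has_program gcc    # some tests build natively on the target
	}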

Now, doing the opposite, that is, reusing Fuego tests on a Linaro setup, would take a bit more effort. Something like this:
- add Fuego functions to Linaro's sh-test-lib, but modify them to run in a local environment or to reuse existing sh-test-lib functions
	- report
	- log_compare
	- is_on_target
	- assert_define
	- assert_has_program <-- could be changed to install_deps
	- put
	- get
	- etc..
- create a run.sh that concatenates various scripts (a skeleton is sketched after this list)
	- export Linaro parameters as Fuego environment variables
	- source fuego_test.sh <-- the Fuego test with functions for each phase
	- call test_run, test_processing, etc. (like main.sh does)
	- add the Python dependencies needed to run the parser on the target
- parser.py
	- deploy it together with the parser library (plib)
	- get the results as pass/fail/skip and a metric (Linaro only supports one metric)
[Note] an alternative is to replace the parser with grep/awk, as in the current Linaro test-definitions
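
To make the run.sh idea concrete, here is a rough skeleton (the variable mapping and paths are illustrative guesses, not an agreed interface):

	#!/bin/sh
	# Hypothetical run.sh wrapping a Fuego test as a Linaro test.
	# Assumes the Fuego helpers (report, log_compare, ...) were added
	# to sh-test-lib as proposed above.
	. ../../lib/sh-test-lib

	# 1) map Linaro parameters onto the variables the Fuego test expects
	export BOARD_TESTDIR="${OUTPUT:-$(pwd)/output}"    # illustrative mapping

	# 2) pull in the Fuego test phases
	. ./fuego_test.sh

	# 3) execute the phases the way Fuego's main.sh would
	test_run
	test_processing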

I think that modularizing Fuego would make the above easier, and it would also make it easier to reuse Fuego components in other frameworks. Alternatively, we could just contribute to the Linaro test-definitions and use them directly.


Thanks,
Daniel
