[Fuego] Integrating Linaro test-definitions

Tim.Bird at sony.com
Wed Feb 13 06:25:51 UTC 2019


> -----Original Message-----
> From: Milosz Wasilewski
> 
> On Fri, 8 Feb 2019 at 08:10, <daniel.sangorrin at toshiba.co.jp> wrote:
> >
> > Hello Fuegonians,
> >
> > I am planning to add support for the Linaro test-definitions in Fuego.
> > https://github.com/Linaro/test-definitions/
> >
> > Currently, I am thinking of a very simple approach that just calls
> > Linaro's test-runner.py on the host and accesses the target over ssh
> > (no serial console for now, without LAVA, but that could be added with
> > serio).
> >
> > The arguments to test-runner.py would be something like this:
> >         -o use the run log folder to write the test output
> >         -t forward the timeout set by Fuego here (or don't use it)
> >         -g the ssh login, e.g. root@<target-ip> (get this from the board file)
> >         -s (skip installing dependencies; if the target is not Debian/CentOS,
> >            this will require a check on the target)
> >         -e (skip collecting information about the environment; this is
> >            already done by Fuego, although we don't save the list of
> >            packages on the target)
> >         -d the test to run
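> >
> > Putting those options together, the call from Fuego's host side would
> > look roughly like this (a sketch only: the paths and the target address
> > are placeholders, and the exact flag spellings should be checked against
> > test-runner.py's help):
> >
> >     # run one Linaro test definition over ssh from the Fuego host;
> >     # $LOGDIR comes from Fuego's run context, the login from the board file
> >     cd /path/to/test-definitions
> >     ./automated/utils/test-runner.py \
> >         -o "$LOGDIR/linaro" \
> >         -t 600 \
> >         -g root@192.168.1.42 \
> >         -s -e \
> >         -d automated/linux/ltp/ltp.yaml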
> >
> > Once we have the results in our log folder, we just need to parse them.
> > That is very easy, because all Linaro test results are written in a very
> > simple, easy-to-parse format. For that reason, I can create a single
> > parser for all tests.
> > [Note] Linaro does the parsing on the target, but the dependencies
> > aren't huge (grep, awk, tee).
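> >
> > To give an idea of how simple it is: assuming the usual one-result-per-line
> > format ("test-case-id pass|fail|skip [measurement units]") in the result
> > files, a throwaway version of that parser fits in one awk call (the result
> > file locations here are from memory, so treat this as a sketch):
> >
> >     # count pass/fail/skip across all result files of a run;
> >     # field 2 is the result even when a measurement follows it
> >     awk '{ count[$2]++ } END { for (r in count) print r ": " count[r] }' \
> >         "$LOGDIR"/linaro/*/result.txt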
> >
> > Tim: I see that your presentation "Harmonizing Open Source Definitions"
> > at Linaro Connect has been accepted. I wonder whether you already have
> > some plans for it. If you have no objections, I will prepare a patch. It
> > should be quite simple: Linaro test-definitions work great and are very
> > simple to understand.
> > https://connect.linaro.org/schedule/
> > One cool thing about Linaro test-definitions is that they work locally
> > (just run the script), remotely (over ssh), and on LAVA (without
> > requiring LAVA knowledge from the test developers).
> >
> > There are also some caveats. Some tests are hard to use on Yocto-like
> > filesystems, because they use git clone or require a native compiler on
> > the target. But this should be easy to detect in Fuego by doing some
> > pre-checks on the board and aborting the test (see the sketch below).
> > Also, it would be nice if the tests were divided into predefined
> > functions (e.g. parse), so that we can skip them when we prefer to do
> > that processing on the host side. I guess this will need discussion with
> > the Linaro test developers.
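> >
> > For example, something like this in fuego_test.sh (a sketch; it assumes
> > Fuego's test_pre_check phase and the assert_has_program helper, and the
> > two programs checked are just illustrations):
> >
> >     # abort the test early when the target lacks what the
> >     # Linaro test definition needs
> >     function test_pre_check {
> >         assert_has_program git   # some definitions git-clone on the target
> >         assert_has_program gcc   # some need a native compiler
> >     }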
> 
> Some tests already have steps separated as shell script functions (for
> example LTP:
> https://github.com/Linaro/test-definitions/blob/master/automated/linux/ltp/ltp.sh).
> It's not done for most of the test scripts, though. I'm open to
> suggestions on how we can improve this aspect to make the tests easier
> to share.

I think that once we see what the issues are with tests that are not
easily usable or convertible between the two systems, we will better
understand what could be done to make them easier to share (adding to a
helper library, refactoring steps, etc.).
> 
> >
> > Now doing the opposite, that is, reusing Fuego tests on a Linaro setup,
> > would take a bit more effort. Something like this:
> > - add Fuego functions to Linaro's sh-test-lib, but modify them to run in
> >   a local environment or to reuse functions already in sh-test-lib:
> >         - report
> >         - log_compare
> >         - is_on_target
> >         - assert_define
> >         - assert_has_program <-- could be changed to install_deps
> >         - put
> >         - get
> >         - etc.
> > - create a run.sh that concatenates the various scripts (see the sketch
> >   below):
> >         - export Linaro parameters as Fuego environment variables
> >         - source fuego_test.sh <-- the Fuego test with functions for each phase
> >         - call test_run, test_processing, etc. (like main.sh does)
> >         - add the python dependencies for running the parser on the target
> > - parser.py
> >         - deploy it together with the parser library (plib)
> >         - get the results as pass/fail/skip and a metric (Linaro only
> >           supports one metric)
> > [Note] an alternative is to replace the parser with grep/awk, as in the
> > current Linaro test definitions
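> >
> > To make the run.sh idea concrete, here is a rough sketch (untested; the
> > shim file name and the exported variables are made up for illustration,
> > and main.sh's real phase sequence has more steps than this):
> >
> >     #!/bin/sh
> >     # minimal Linaro-side wrapper around an unmodified fuego_test.sh
> >     . ./sh-test-lib               # Linaro helpers (report_pass, etc.)
> >     . ./fuego-shims.sh            # Fuego functions adapted as above
> >
> >     # map Linaro test parameters to the variables fuego_test.sh expects
> >     export BOARD_TESTDIR=/tmp/tests
> >     export LOGDIR=${OUTPUT:-/tmp/fuego-log}
> >
> >     . ./fuego_test.sh             # defines test_run, test_processing, ...
> >     test_run                      # same phase order as Fuego's main.sh
> >     test_processing
> >
> > And one of the shims, to show the flavor (assuming sh-test-lib provides
> > an error_msg helper):
> >
> >     # Fuego's assert_define aborts when a required variable is unset;
> >     # locally it can be a one-liner
> >     assert_define() {
> >         eval "[ -n \"\$$1\" ]" || error_msg "required variable $1 is not set"
> >     }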
> 
> This sounds a lot like what LAVA does when running a test. Maybe it
> would make more sense to separate these two steps in Fuego, so that
> Fuego/LAVA 'executors' can work on either set of test scripts?

Just a note here.  Fuego already has each test broken down into discrete
steps.  ftc allows you to pick and choose the test operations (we call
them phases) to run.  The "run" and "parse" operations are already two
different phases.
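
For example, something like the following would execute only the "run"
phase of a test, leaving parsing for later (I'm writing the phase option
from memory here, so check 'ftc run-test help' for the exact spelling):

    $ ftc run-test -b myboard -t Functional.hello_world -p run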

However, the ability to run only a subset of phases is not widely used
at the moment.  I'm pretty sure we'll discover issues as we use it to
accommodate different test definitions.  But that will be a good way to
shake out problems with the modularity.

> 
> >
> > I think that modularizing Fuego would make the above easier, and it
> > would also help with reusing Fuego components in other frameworks.
> > Alternatively, we could just contribute to the Linaro test definitions
> > and use them directly.
> 
> I will have no problem accepting patches with changes to sh-test-lib
> if that makes your life easier :)

Thanks.  Likewise, I'm amenable to changes in the Fuego core that make
re-use of LAVA tests easier.
 -- Tim

