[Fuego] LPC Increase Test Coverage in a Linux-based OS
daniel.sangorrin at toshiba.co.jp
Thu Nov 10 03:09:42 UTC 2016
> -----Original Message-----
> From: fuego-bounces at lists.linuxfoundation.org [mailto:fuego-bounces at lists.linuxfoundation.org] On Behalf Of Victor Rodriguez
> Sent: Sunday, November 06, 2016 2:15 AM
> To: fuego at lists.linuxfoundation.org; Guillermo Adrian Ponce Castañeda
> Subject: [Fuego] LPC Increase Test Coverage in a Linux-based OS
> Hi Fuego team.
> This week I presented a case study of the problem of the lack of
> standardization in test log output across the majority of packages
> used to build current Linux distributions. This was presented as a BOF
> ( https://www.linuxplumbersconf.org/2016/ocw/proposals/3555) during
> the Linux Plumbers Conference.
> It was a productive discussion that let us share the problem we face
> in the projects we use every day to build a distribution (whether an
> embedded or a cloud-based distribution).
> The open source projects don't follow a standard log format for
> reporting the passing and failing tests that they run at packaging
> time ( "make test" or "make check" )
Sorry, I couldn't download your slides because of proxy issues, but
I think you are talking about the tests that are inside packages (e.g. .deb, .rpm files).
For example, autopkgtest for Debian. Is that correct?
I'm not an expert on them, but I believe these tests can also be executed
decoupled from the build process in a flexible way (e.g. locally, on qemu,
remotely through ssh, or in an lxc/schroot environment).
Being able to leverage all these tests in Fuego for testing package-based
embedded systems would be great.
For non-package-based embedded systems, I think those tests 
could be ported and made cross-compilable. In particular, Yocto/OpenEmbedded's ptest
framework decouples the compiling phase from the testing phase and
produces "a consistent output format".
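As a rough illustration (this is my own sketch, not part of ptest itself), ptest's consistent output is one result per line, "PASS: name" / "FAIL: name" / "SKIP: name", which makes it trivial to turn a log into structured, per-test results:

```python
import re

# Minimal sketch of parsing ptest-style output. Each result is printed
# on its own line as "PASS: name", "FAIL: name" or "SKIP: name".
PTEST_LINE = re.compile(r"^(PASS|FAIL|SKIP): (\S+)")

def parse_ptest_log(text):
    """Return a dict mapping test name -> result string."""
    results = {}
    for line in text.splitlines():
        m = PTEST_LINE.match(line)
        if m:
            results[m.group(2)] = m.group(1)
    return results

log = """PASS: glib/gvariant
FAIL: glib/gdatetime
SKIP: glib/gdbus
"""
print(parse_ptest_log(log))
```

Because the format carries the test name, not just a count, the results from two runs can be compared directly.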
> The Clear Linux project is using a simple Perl script that helps them
> to count the number of passing and failing tests (which should be
> trivial if could have a single standard output among all the projects,
> but we don’t):
I think that counting is good but we also need to know specifically which test/subtest
in particular failed and what the error log was like.
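To make the idea concrete, here is a hypothetical sketch (not the Clear Linux count.pl script) that recognizes two of the many "make check" dialects, Automake-style "PASS:/FAIL:" lines and Perl TAP "ok"/"not ok" lines, and records the test names rather than just the totals, so a failure can later be traced to a specific test:

```python
import re

# Hypothetical parser covering two common "make check" output dialects.
# Each pattern captures a status and a test name.
PATTERNS = [
    re.compile(r"^(?P<status>PASS|FAIL): (?P<name>\S+)"),                # Automake test-suite
    re.compile(r"^(?P<status>not ok|ok) (?P<num>\d+) - (?P<name>.+)$"),  # Perl TAP
]

def parse_build_log(text):
    """Map test name -> "PASS" or "FAIL" across the known dialects."""
    results = {}
    for line in text.splitlines():
        for pat in PATTERNS:
            m = pat.match(line)
            if m:
                ok = m.group("status") in ("PASS", "ok")
                results[m.group("name").strip()] = "PASS" if ok else "FAIL"
                break
    return results

log = """PASS: test-hash
FAIL: test-list
ok 1 - opens the file
not ok 2 - parses the header
"""
print(parse_build_log(log))
```

Each new "standard" output a real-world script encounters would become another entry in PATTERNS; the structured result is what enables regression detection.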
IoT Technology Center
Toshiba Corp. Industrial ICT Solutions
> # perl count.pl <build.log>
> Examples of real package build logs:
> So far that simple (and not well engineered) parser has found 26
> "standard" outputs (and counting). The script has the flaw that it
> does not recognize the names of the tests, so it cannot detect
> regressions: one test may have been passing in the previous release
> and be failing in the new one, while the number of failing tests
> remains the same.
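This is exactly why per-test names matter. A hypothetical sketch of the missing regression check: given name -> status maps for two releases (however they were parsed), a test that went from PASS to FAIL is a regression even when the total failure count is unchanged:

```python
def find_regressions(old, new):
    """old/new map test name -> "PASS" or "FAIL"; return regressed names."""
    return sorted(name for name, status in new.items()
                  if status == "FAIL" and old.get(name) == "PASS")

# Same number of failures in both releases, yet "a" regressed.
old = {"a": "PASS", "b": "FAIL"}
new = {"a": "FAIL", "b": "PASS"}
print(find_regressions(old, new))  # prints ['a']
```

A count-only comparison of old and new would report no change here, which is the failure mode Victor describes.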
> To be honest, before presenting at LPC I was very confident that this
> script (or a much smarter version of it) could be the beginning of
> the solution to the problem we have. However, during the discussion
> at LPC I came to understand that solving the nightmare we already
> have might be a huge effort, if not a bigger one.
> Tim Bird participated in the BOF and recommended that I send a mail
> to the Fuego project team to look for more input and ideas about
> this topic.
> I really believe in the importance of attacking this problem before
> it becomes a bigger one.
> All feedback is more than welcome
> Victor Rodriguez
> [presentation slides] :
> [BOF notes] : https://drive.google.com/open?id=1lOPXQcrhL4AoOBSDnwUlJAKIXsReU8OqP82usZn-DCo
> Fuego mailing list
> Fuego at lists.linuxfoundation.org