[Fuego] LPC Increase Test Coverage in a Linux-based OS

Victor Rodriguez vm.rod25 at gmail.com
Thu Nov 10 13:30:12 UTC 2016

On Wed, Nov 9, 2016 at 10:09 PM, Daniel Sangorrin
<daniel.sangorrin at toshiba.co.jp> wrote:
> Hi Victor,
>> -----Original Message-----
>> From: fuego-bounces at lists.linuxfoundation.org [mailto:fuego-bounces at lists.linuxfoundation.org] On Behalf Of Victor Rodriguez
>> Sent: Sunday, November 06, 2016 2:15 AM
>> To: fuego at lists.linuxfoundation.org; Guillermo Adrian Ponce Castañeda
>> Subject: [Fuego] LPC Increase Test Coverage in a Linux-based OS
>> Hi Fuego team.
>> This week I presented a case study on the lack of test log output
>> standardization in the majority of packages that are used to build
>> current Linux distributions. This was presented as a BOF
>> ( https://www.linuxplumbersconf.org/2016/ocw/proposals/3555 ) during
>> the Linux Plumbers Conference.
>> It was a productive discussion that let us share the problem we have
>> in the projects that we use every day to build a distribution
>> (whether an embedded or a cloud-based distribution). The open source
>> projects don't follow a standard output log format to print the
>> passing and failing tests that they run during packaging time
>> ("make test" or "make check").
> Sorry I couldn't download your slides because of proxy issues, but
> I think you are talking about the tests that are inside packages (e.g. .deb, .rpm files).
> For example, autopkgtest for Debian. Is that correct?


> I'm not an expert on them, but I believe these tests can also be executed
> decoupled from the build process in a flexible way (e.g. locally, on qemu,
> remotely through ssh, or in an lxc/schroot environment).

Yes, with a little extra work in the tool path. For example, some of
the tests point to the binary they build instead of the one in
/usr/bin. But with a little extra work all these tests can be
decoupled.

> Being able to leverage all these tests in Fuego for testing package-based
> embedded systems would be great.

Yes !!!

> For non-package-based embedded systems, I think those tests [2]
> could be ported and made cross-compilable. In particular, Yocto/OpenEmbedded's ptest
> framework decouples the compiling phase from the testing phase and
> produces "a consistent output format".
> [1] https://packages.debian.org/sid/autopkgtest
> [2] https://wiki.yoctoproject.org/wiki/Ptest

I knew I was not wrong when I mentioned Ptest during the conference.

Let me take a look and see how they work.
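For context on that "consistent output format": ptest results are reported one per line as `PASS: <name>`, `FAIL: <name>`, or `SKIP: <name>`. A minimal Python sketch of a parser for that convention (the sample log lines below are invented for illustration, not taken from a real ptest run):

```python
import re

# ptest prints one result per line: "PASS: name", "FAIL: name" or "SKIP: name"
PTEST_LINE = re.compile(r"^(PASS|FAIL|SKIP): (\S+)")

def parse_ptest(log_text):
    """Return a dict mapping test name -> result for a ptest-style log."""
    results = {}
    for line in log_text.splitlines():
        m = PTEST_LINE.match(line.strip())
        if m:
            results[m.group(2)] = m.group(1)
    return results

# Invented sample log
sample = """\
PASS: glib/timer.test
FAIL: glib/gdatetime.test
SKIP: glib/win32.test
"""
print(parse_ptest(sample))
```

Because the format carries the test name on every line, a parser like this keeps per-test results rather than just totals, which is exactly what a bare counter loses.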

>> The Clear Linux project is using a simple Perl script that helps them
>> count the number of passing and failing tests (which would be trivial
>> if we had a single standard output format among all the projects, but
>> we don't):
> I think that counting is good but we also need to know specifically which test/subtest
> in particular failed and what the error log was like.

Great, how do you push this to Jenkins?

What do you think about TAP?

If you could share a CSV example, that would be great.
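For reference on the TAP question: TAP (Test Anything Protocol) emits a `1..N` plan line plus one `ok` / `not ok` line per test, each carrying a test number and description. A minimal sketch of counting passes and failures from TAP output (the sample output is invented for illustration):

```python
import re

# A TAP result line: "ok 1 - description" or "not ok 2 - description"
TAP_LINE = re.compile(r"^(ok|not ok)\s+(\d+)\s*-?\s*(.*)")

def count_tap(tap_text):
    """Count passing and failing tests in TAP output."""
    passed = failed = 0
    for line in tap_text.splitlines():
        m = TAP_LINE.match(line)
        if not m:
            continue  # skip the plan line (1..N) and diagnostic lines (# ...)
        if m.group(1) == "ok":
            passed += 1
        else:
            failed += 1
    return passed, failed

# Invented sample TAP stream
sample = """\
1..3
ok 1 - opens file
not ok 2 - parses header
ok 3 - closes file
"""
print(count_tap(sample))  # (2, 1)
```

Since TAP lines also carry a description, the same regex could feed a per-test-name report rather than just totals; Jenkins can consume TAP directly via its TAP plugin.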

Thanks Daniel

> Best regards
> Daniel
> --
> IoT Technology center
> Toshiba Corp. Industrial ICT solutions,
>> https://github.com/clearlinux/autospec/blob/master/autospec/count.pl
>> # perl count.pl <build.log>
>> Examples of real packages build logs:
>> https://kojipkgs.fedoraproject.org//packages/gcc/6.2.1/2.fc25/data/logs/x86_64/build.log
>> https://kojipkgs.fedoraproject.org//packages/acl/2.2.52/11.fc24/data/logs/x86_64/build.log
>> So far that simple (and not well engineered) parser has found 26
>> "standard" outputs (and counting). The script has the flaw that it
>> does not recognize the names of the tests, so it cannot detect
>> regressions: a test may have been passing in the previous release and
>> be failing in the new one, while the number of failing tests stays
>> the same.
>> To be honest, before presenting at LPC I was very confident that this
>> script (or a smarter version of it) could be the beginning of the
>> solution to the problem we have. However, during the discussion at
>> LPC I understood that solving the nightmare we already have might
>> take a huge effort (if not a bigger one).
>> Tim Bird participated in the BOF and recommended that I send a mail
>> to the Fuego project team to look for more input and ideas about
>> this topic.
>> I really believe in the importance of attacking this problem before
>> it becomes a bigger one.
>> All feedback is more than welcome
>> Regards
>> Victor Rodriguez
>> [presentation slides] :
>> https://drive.google.com/open?id=0B7iKrGdVkDhIcVpncUdGTGhEQTQ
>> [BOF notes] : https://drive.google.com/open?id=1lOPXQcrhL4AoOBSDnwUlJAKIXsReU8OqP82usZn-DCo
>> _______________________________________________
>> Fuego mailing list
>> Fuego at lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/fuego
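On the regression problem described above (a test flips from pass to fail while the total fail count stays the same): once a parser records results per test name instead of bare counts, spotting regressions is a simple comparison between two runs. A minimal sketch, using invented result dicts of the shape the ptest/TAP parsers discussed in this thread could produce:

```python
def find_regressions(previous, current):
    """Return tests that passed in the previous run but fail in the current one."""
    return sorted(
        name for name, result in current.items()
        if result == "FAIL" and previous.get(name) == "PASS"
    )

# Invented example: one test flips FAIL -> PASS and another PASS -> FAIL,
# so the raw fail count is unchanged (1 before, 1 after), yet there is a
# real regression that a count-only parser would miss.
prev = {"test_read": "PASS", "test_write": "FAIL", "test_seek": "PASS"}
curr = {"test_read": "PASS", "test_write": "PASS", "test_seek": "FAIL"}
print(find_regressions(prev, curr))  # ['test_seek']
```

This is the gap in count.pl that the mail points out: the 26 recognized output formats would each need a name-extracting rule, not just a counting rule, for this comparison to work.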
