[Fuego] Fuego test philosophy (was RE: [PATCH] Add test cases for commands of busybox as follows: ash bunzip2 free wget)

Tim.Bird at sony.com
Wed Jun 6 19:07:53 UTC 2018

> -----Original Message-----
> From: Hoang Van Tuyen
> Dear Wang Mingyu and all,
> I am confused about the method for adding tests for busybox.
> Test code for many busybox commands already exists in the testsuite
> folder of the busybox source code. You can see it at:
> https://github.com/mirror/busybox/tree/master/testsuite
> We can build the tests from the upstream source, then install and run
> the tests on our target board.
> The tests are written in great detail, and I think we can reuse them
> instead of writing them from scratch ourselves.
> Could you let me know what you think?

I'll give my opinion here.  Sorry for the long response, but I'm going
to describe a bit of my philosophy along with the response to this specific
question.
There is indeed a large existing test suite, inside the busybox source repository,
for testing busybox features.   When Wang proposed his busybox test suite,
I took a look at the existing suite.  I did not see an easy way to use it with Fuego,
because several of the tests check the busybox configuration in order to do
their own "dependency checking".  I am unwilling to impose a requirement on
Fuego that the configuration file for busybox be available in order to run tests
against busybox.  Part of Fuego's model is that you can run it independently
of the development environment for the Linux distribution you are testing.
I think it is important to preserve this separation between the test environment
and the distribution build environment.  Many QA engineers, and most
end users, will not have easy access to the build artifacts for the distribution
of the device they would like to test.
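To make the problem concrete, here is a minimal sketch of the dependency-check pattern described above. The upstream busybox testsuite consults the build-time configuration to decide which tests apply; the path `./busybox.config` and the option name `FEATURE_WGET_STATUSBAR` are illustrative assumptions, not the exact upstream mechanism. The point is that without access to that config file, a harness like Fuego cannot make this decision.

```shell
# Hypothetical location of the busybox build configuration.
# A test harness running apart from the distro build tree
# generally does not have this file.
CONFIG_FILE="./busybox.config"

# Return success if the named CONFIG_ option is enabled (=y).
have_option() {
    grep -q "^CONFIG_${1}=y" "$CONFIG_FILE" 2>/dev/null
}

# Skip the test, rather than fail it, when the feature is not built in.
if have_option FEATURE_WGET_STATUSBAR; then
    echo "running wget statusbar test"
else
    echo "SKIPPED: wget statusbar (option not enabled)"
fi
```

With no config file present, every such check falls through to SKIPPED, which is why this style of dependency checking only works inside the build environment.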

There were also a few other issues with the tests in the existing
busybox test suite.  In some places they assumed attributes of
the testing environment that are not applicable to Fuego.

So, to use that suite of busybox tests would require some work to
integrate it with Fuego. It may be a small or a large amount of work.
It's hard to say without further investigation.  Unfortunately, I do
not have time at the moment for this investigation myself.

My attitude with respect to Wang's tests is that they are not nearly as
comprehensive as what is already in the busybox test suite, but they
are available now, and Wang is willing to perform the work needed to
integrate them into Fuego.

I don't think it is necessary to restrict ourselves to only one test of
a given functionality or feature.  For instance, Fuego has many file system
tests, and users of Fuego can select which one meets their testing needs.
If someone wants to contribute a Fuego test for the existing busybox
test suite I would be very happy to help them integrate it, and include
it in Fuego.  But I don't feel like I should discourage Wang from
contributing Fujitsu's work in the meantime.

There are pros and cons to this approach.  Wang's tests are not maintained
by an external, upstream party; in general, upstream maintenance is preferred
over writing and maintaining our own set of tests.  Also, tests written by the busybox
developers are more likely to reflect regression testing of issues that those
developers saw.  However, this must be weighed against the fact that there
may not really be any upstream maintainer for the busybox test suite.
Some of the tests in the upstream busybox test suite have not been
maintained in many years.

There is a great danger, for tests that exercise very basic
functionality, that they will create more noise (in the way of false failure
reports) than real, useful bug reports.  This is because the process of
generalizing a test (making it run in a cross environment, on multiple machines,
in a wide variety of conditions and setups) often reveals more bugs in the
test itself than it does in the system being tested.  This is especially true
for systems like the core of the Linux kernel, which gets very extensive
end-user testing.  It is unlikely that Fuego (or any other test system) is
going to find a major bug in the "read" system call, before it is encountered
and fixed by the kernel user and developer community.

In general, Fuego is striving for a test meritocracy system, where tests
can be rated based on their value (how many bugs they find, ease of
use, understanding of results, etc.)  We are not at this point yet, but
part of the vision of Fuego is to create a "test store", where users can
upload and download tests, comment on tests and rate them, and
use that for developers to judge which tests to use for their projects.
Part of that is allowing people to (sorry for the English saying) "throw
mud at the wall and see what sticks".  It is in this spirit that I allow
people to contribute tests to Fuego.

Now, there is also a danger that the presence of a test, even if it
is sub-standard (I'm not saying that's true of Wang's test), will
deter people from working on a better version of a test.  But the
open source principle is that people have the option to work on
whatever they think will be most beneficial to themselves, with
the side effect that it ends up benefiting others as well.  Fuego's
tests are decoupled sufficiently from each other that I don't see
Wang's busybox test as interfering with the development
of some other busybox test based on the upstream code.
I would be happy to see both.

Now, finally, my own view is that a "core utilities" test would be
more valuable than a busybox-specific test in the first place.
The POSIX test suite covers some of this.  But what you are testing
with a busybox test usually boils down to a test of the expected
behavior of a set of core Linux command line utilities, usually
with a goal of POSIX conformance.  I see no
need to restrict this to testing busybox.  It could just as easily apply
to toybox, or some other set of utilities (e.g. android toolbox,
although that's on its way out).  The way Wang's test is written
now, it hardcodes a bunch of references to 'busybox'.  That is
probably to make sure that the actual busybox command is tested,
rather than the "big" version of the command, which might also
be available on a system.  I don't recall whether this is also true
of the upstream busybox test suite or not.  But IMHO it would be
better if it avoided these hardcoded busybox references.
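One way to avoid those hardcoded references is sketched below: let the command prefix come from an environment variable, so the same test can target busybox, toybox, or the full "big" utilities. The variable name `CMD_PREFIX` and the helper `run_utility` are my own illustrative inventions, not part of Wang's test or any existing Fuego convention.

```shell
# Prefix for the command under test, e.g. "busybox" or "toybox".
# Left empty, the test exercises whatever utility is first in PATH.
CMD_PREFIX="${CMD_PREFIX:-}"

# Run a utility through the configured multiplexer prefix.
# $1 is the applet name; remaining args pass through unchanged.
run_utility() {
    ${CMD_PREFIX} "$@"
}

run_utility echo "hello"
```

Setting `CMD_PREFIX=busybox` guarantees the multiplexer's applet is exercised rather than a coreutils binary of the same name, while keeping the test itself free of busybox-specific references.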

Ok - sorry for the long message.  The TL;DR is that if someone wants to
contribute a Fuego test for the upstream busybox test suite, I would
be happy to see it, and very willing to assist them in integrating it
into Fuego.
 -- Tim
