[lsb-discuss] unofficial LSB conference call minutes for 23 Apr 2014, post Collab 'New LSB' wiki point in time drop

Mats Wichmann mats at wichmann.us
Wed Apr 23 19:27:19 UTC 2014


On 04/23/14 11:28, chrubis at suse.cz wrote:
> Hi!
>>    Thus, the top-heavy TET-based workflow we've used up to now 
>> will be considered legacy. New work will focus on a very 
>> streamlined test API, that looks like the standard C/Unix API, 
>> with optional helper libraries that can mimic many of the 
>> features of the sophisticated test suites.  On top of this, we 
>> will develop frameworks that will be focused on making working 
>> with our tests easy: both on the test development side, and on 
>> the framework side. This will allow more effective selection 
>> of subsets of tests that are useful for more purposes than 
>> just a full certification, allowing integration of subsets of our 
>> tests with other people's workflows.
> 
> Maybe there is no need to reinvent the wheel. LTP has quite minimalistic
> test library that has:
> 
> * printf-like API for reporting test failures/errors/information
> 
> * tst_tmpdir() and tst_rmdir() to create/recursively remove
>   unique temporary test directory
> 
> * runtime kernel version detection
> 
> * tst_mkfs() function for formatting a device with a filesystem
> 
> * safe macros that simplify error handling
> 
> * established cleanup callback convention
> 
> * FIFO-based parent/child synchronization
> 
> * filesystem type detection function
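
(For readers who haven't used it: sketching from the function names above,
plus the usual legacy-LTP conventions of TCID/TST_TOTAL globals, tst_resm()
for results and tst_exit() to finish, a minimal test looks roughly like
this. Exact signatures may differ between LTP releases.)

    #include <fcntl.h>
    #include "test.h"          /* legacy LTP test library */
    #include "safe_macros.h"   /* SAFE_* error-handling wrappers */

    char *TCID = "example01";  /* test case identifier */
    int TST_TOTAL = 1;         /* number of results this test reports */

    static void cleanup(void)
    {
            tst_rmdir();       /* recursively remove the temp dir */
    }

    int main(void)
    {
            int fd;

            tst_tmpdir();      /* create and chdir into a unique temp dir */

            /* the SAFE_* macros report TBROK and call cleanup() on error */
            fd = SAFE_OPEN(cleanup, "testfile", O_CREAT | O_RDWR, 0644);
            SAFE_CLOSE(cleanup, fd);

            tst_resm(TPASS, "file created in temporary directory");

            cleanup();
            tst_exit();
    }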

All of those features, of course, TET has also, without actually being
particularly top-heavy. Examples: you report failure (or success) by
giving a single value to tet_result(), and emit messages by calling
tet_printf() which has syntax just like printf, not exactly a heavy
burden. It also has many useful macros, startup/cleanup callbacks,
child/parent sync (including the very straightforward tet_fork() and
tet_wait()), and so on. I could just as easily ask why LTP didn't adopt
TET :)  Everybody has slightly different needs, and so there are many
test frameworks in the world.
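
A skeletal TET (TETware C API) test case, again sketched from memory so the
exact names should be treated as approximate, looks something like:

    #include <tet_api.h>

    static void startup(void);
    static void cleanup(void);
    static void tp1(void);

    /* TET finds the test purposes through these well-known globals */
    void (*tet_startup)(void) = startup;
    void (*tet_cleanup)(void) = cleanup;
    struct tet_testlist tet_testlist[] = {
            { tp1, 1 },        /* test purpose 1 */
            { NULL, 0 }
    };

    static void startup(void) { tet_infoline("setting up"); }
    static void cleanup(void) { tet_infoline("cleaning up"); }

    static void tp1(void)
    {
            /* printf-style diagnostics, then a single result value */
            tet_printf("checking something: %d", 42);
            tet_result(TET_PASS);   /* or TET_FAIL, TET_UNRESOLVED, ... */
    }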

For me (unable to attend the summit and so missing plenty of discussion
I suspect), the problem has always been that ease of use when running
tens of thousands of tests demands a framework so you can collect
information and report on it in a way that doesn't drive you insane.
And then, when something fails, and you need to ask a specialist in that
area, they want a standalone testcase they can not just run, but also
quickly tweak and recompile, without the fuss of setting up that
framework or having to deal with dependencies on headers/libraries that
belong to it.  So you end up having to strip out all the stuff that
takes advantage of the framework.  And that costs /somebody/ time.  I'm
sure that's not vastly different for LTP either (which is a great
project I've followed for years, there is zero criticism here).

To deal with the problem that good tests just take a long time to
develop, some of the work done for LSB used very clever technologies
from the forefront of research that essentially allow generating tests
from specification text with limited human involvement. Which is
wonderful, but it led to test source code that was even less
readable for someone trying to debug an actual problem (see, for
example, the OLVER-Core test suite).

I don't know that there's a simple solution to the problem of different
audiences.  LSB has not been setting up testing that is part of one
specific project, but rather trying to perform after-the-fact testing -
two layers down. That is, upstream projects develop code, distros adopt
the code (and possibly adapt it, hopefully not at all or very little),
and then, looking across the whole, we see some commonality that could be
codified.  How
do we know that the distros are all doing something compatible with
those upstreams that does not cause trouble for app developers? Testing.
But we can't really convince the upstreams to all develop LSB-ready
tests, because their goals are different, two levels down the stack, and
there's no criticism there either.

The proposed future-LSB seems more focused on what you could call a
regression test model - identify a problem, try to get agreement on a
common solution, and write test cases that confirm it. Not trying to
find ways to test a whole set of functionality that an application
developer would target. If that's accurate, it's a considerably lighter
testing burden. And I don't think test frameworks end up being a big
issue in that model as a result.

