[lsb-discuss] RFC: new LSB development infrastructure

Ian Murdock imurdock at imurdock.com
Thu Jul 6 19:55:05 PDT 2006


A big focus of the FSG over the next 6-9 months is putting the
proper infrastructure in place that will allow the LSB workgroup
to effectively track the continually evolving open source
ecosystem as well as to coordinate the activities of a diverse
group of participants, including distribution vendors, upstream
component developers, and ISVs (independent software vendors).

The first priority of the infrastructure project is improving test suite
coverage. After all, the LSB is an interface standard, and an interface
standard is only as good as its test suites. As a result, we are
exploring a partnership with the Institute for System Programming of the
Russian Academy of Sciences (ISP RAS), which has significant
expertise in systems testing and has been working in the area of LSB
Core 3.1 testing for some time (see http://linuxtesting.org/).

Beyond testing, we also anticipate adding analysis facilities that
will make it easier to decide what we can add to the LSB without
outpacing any of the major distributions and thereby making it
difficult for one or more of them to be compliant.
For example, it would be useful to be able to answer questions
such as, "What version(s) of glibc include the function
inotify, and how do those version(s) map to the major distros?"

(Note: This is an actual question that came up at the LSB f2f. We've
had several requests to add inotify to the LSB, but inotify was only
added in glibc 2.4, which will ship in both RHEL 4 and SLES 10. However,
Debian and Debian derivatives, including Ubuntu, ship with glibc
2.3. Thus, inotify was deferred to LSB 4.0. Had we added inotify
to LSB 3.2, it would not have been possible for the
current generation of Debian and Ubuntu to be LSB 3.2 compliant.)

The general idea is to augment the existing LSB database with
additional information, thereby explicitly linking the interfaces
in the LSB with that information. One goal of this linkage is to
systematize various aspects of LSB development (e.g., to make
it harder to add an interface without tests, or at least
make it easier to identify which interfaces don't have tests).

Another goal is to make it easy to follow various threads throughout the
system for analysis or visualization purposes--for example, mapping an
interface with a certain ABI to a specific version (or range of
versions) of the upstream component from which it came, which in turn
can be mapped to the distributions that include that component (e.g.,
inotify -> glibc 2.4 -> RHEL 4 and SLES 10 but not Debian or Ubuntu).

As it turns out, the ISP RAS system works in much the same way--in
other words, the specification and tests are explicitly linked.
Each assertion in the specification (an "atomic requirement,"
in their terminology) is given an id, which is used to generate
a test template covering that assertion; a test engineer then
fills in the template in a variant of C specifically designed
for writing test cases.

Because each test is linked not only to the interface it tests but to
the specific assertion in the specification that it tests, it is
possible to easily determine the coverage of a particular test suite.
Results are presented as an overlay to the specification text in
a very intuitive manner--assertions in the specification that
are tested are in green, and assertions that are not are in red.

They've also built a system for coverage testing of upstream
components, e.g., glibc. Again, results are web-based--the source
code of the component is displayed with the tested code highlighted.
This system has already found at least one glibc bug.

It's pretty easy to imagine how the specification/test linkage could
be extended further. For example, rather than reporting test failures
with arcane messages like "/tset/LSB.os/genuts/nftw/T.nftw 9
FAIL", each failed test could display the assertion that failed
directly from the specification, with the text hyperlinked to
the appropriate place in the specification itself.

It's also pretty easy to imagine how a similar mechanism could be
used to link the specification with other information as described
earlier, such as which version of which package supplies the interface,
and which distributions ship which versions of those packages, and so on.

All in all, I'm very impressed with the system, though I do have a
concern or two. The first concern centers on the generation of
the test templates--this appears to be a one-off process, so it's
not clear how easy it is to change the tests to match a change
in an interface. (Then again, this may not be a common occurrence.)

Second, it's not clear how easy it is to write tests for the
ISP RAS system. Ideally, it should be something that anyone
familiar with the LSB should be able to learn in a few hours.

Third, it's not clear how the new framework would fit in
with our existing tests. Ideally, we'd be able to
import the existing tests into the new framework somehow.

A related question is what the path is from point A to point B: in
other words, how do we get from where we are today (the LSB
database in its current form with separately maintained tests) to an LSB
database with linked tests and other important bits of information
as well as a robust set of analysis/visualization
tools, and how do we do it in the least disruptive manner possible?

Thoughts? Any feedback on the ISP RAS framework from those who have
looked at it? Any thoughts on what kinds of information we should be
adding to the LSB database for analysis purposes? Your feedback
will be used to draft a development plan for the new system.
I'll be weighing in with some potential use cases
tomorrow, but for now, I'd like to get the discussion going.

-ian
-- 
Ian Murdock
317-863-2590
http://ianmurdock.com/

"Don't look back--something might be gaining on you." --Satchel Paige