[lsb-discuss] overview of LSB infrastructure proposal (LSB conference call minutes, Tuesday, August 22, 11am ET)

Ian Murdock imurdock at imurdock.com
Tue Aug 29 07:55:38 PDT 2006


Attendees: Rajesh Banginwar, Darren Davis, Todd Fujinaka, Marvin Heffler,
Andrew Josey, Jeff Licquia, Ian Murdock, Steve Schafer, Aaron Seigo, Kay
Tate, Mats Wichmann

The bulk of the call was spent discussing the LSB infrastructure proposal.
Here is an overview for those who did not attend the call:

* The FSG is engaging with the Institute for System Programming
at the Russian Academy of Sciences (ISP RAS) to develop
the next generation of the LSB test and database infrastructure.

* The new system will be an extension of the current system, namely
the existing LSB database, the existing LSB test suites, and the
existing LSB test harness/test result file format (TET). It is an
explicit goal to allow for easy import and export of tests. In other
words, we want to be able to easily add upstream tests to the framework
with minimal effort (perhaps adding calls to provide feedback to the test
harness, etc., but little else), and we want to be able to easily extract
tests if an upstream is interested in maintaining them. It is also
an explicit goal to make it easy for external parties to help maintain
the database (distros, upstream developers, etc.), and to
provide enough value to give them an incentive to do so (regression testing, etc.).

* The main goal is to improve the coverage of the LSB test suites.
This one is pretty self-explanatory. The LSB is an interface
standard, and an interface standard is only as good as its tests.

* Another goal is to make it easier to run the tests and ultimately
get certified. It's too difficult to do that today.

* The final goal is to put a better infrastructure in place for
evolving the LSB going forward. In short, how do we track the
continually evolving open source world (independently evolving
distros, upstream components, etc.)? How do we coordinate
the diverse group of LSB participants, keeping
in mind the differing needs of distros, ISVs, and upstreams?

As an example, let's look at a situation that came up recently: a
request to add inotify to the LSB. Our first task was to determine
whether inotify is "best practice", which roughly means that it ships in
all major Linux distributions. inotify was added to glibc 2.4. Current
generation RHEL and SLES (RHEL 5 and SLES 10) are based on glibc 2.4,
but current generation Debian and Ubuntu (etch and dapper) are still
based on glibc 2.3. So, inotify is not yet shipping in the current
generation of all the major Linux distributions, and thus is not a
candidate for inclusion in LSB 3.2, which tracks the current generation.
However, it is reasonable to assume that the next generation
will be based on glibc >= 2.4. Thus, inotify *is* a candidate for LSB 4.
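
To make the process concrete, here is a rough sketch (in Python, purely
for illustration; the distro/glibc pairings are just the ones mentioned
above, and none of this is an existing LSB tool) of the check involved:

    # Sketch: decide whether an interface introduced in a given glibc version
    # is "best practice", i.e. shipped by the current generation of all major
    # distros. The mapping below is illustrative, taken from the text above.

    CURRENT_GENERATION_GLIBC = {
        "RHEL 5": (2, 4),
        "SLES 10": (2, 4),
        "Debian etch": (2, 3),
        "Ubuntu dapper": (2, 3),
    }

    def is_best_practice(introduced_in, distros=CURRENT_GENERATION_GLIBC):
        """True only if every distro ships a glibc >= introduced_in."""
        return all(version >= introduced_in for version in distros.values())

    # inotify went into glibc 2.4, so it is not yet best practice here:
    print(is_best_practice((2, 4)))   # False -> not a candidate for LSB 3.2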

We currently go through a similar process every time any request is
made to add an interface to the LSB. Our current process is largely
ad hoc and thus error-prone. Ideally, our systems would allow us to
easily start with an interface and, e.g.,
determine whether it is present in all the major Linux distributions.
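
For illustration only, here is the kind of query we would like to be able
to run; the table name and columns are hypothetical, not the actual LSB
database schema:

    # Sketch: an in-memory stand-in for the interface/distro linkage we want
    # the LSB database to answer. Table and column names are hypothetical.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE distro_interface (distro TEXT, interface TEXT);
        INSERT INTO distro_interface VALUES
            ('RHEL 5', 'inotify_init'), ('SLES 10', 'inotify_init');
    """)

    def distros_providing(interface):
        rows = db.execute(
            "SELECT distro FROM distro_interface WHERE interface = ?",
            (interface,))
        return [distro for (distro,) in rows]

    print(distros_providing("inotify_init"))  # ['RHEL 5', 'SLES 10']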

Once we've determined the "best practice" status of a particular
interface, there are associated project management issues: How do we
record that status in a way that is easy to act on? What are the
standard facilities for adding the interface to the LSB, i.e., what will
the specification assert about the interface, how do we ensure that those
assertions have adequate coverage in the test suites, and how do
we keep the assertions and the tests in sync over time? In short, you
can think of this as providing a standard mechanism that workgroup
developers can use to build specifications and test suites. The other
FSG workgroups will use these same mechanisms to ensure the
standards they produce fit seamlessly into the overall LSB framework.
Note that the Futures tracker gets incorporated here as
well, and we will no doubt want integration with the LSB bugzilla too.

A related issue is how to communicate the decisions we make--i.e.,
what happens when the next person wonders about the status of inotify?
Ideally, there would be some easy way for developers to determine
whether an interface is 1. currently included; 2. planned for inclusion
(with an expected timeframe); or 3. deprecated. This should be tightly
integrated with the LSB validation tools and LSB Developer Network, as
it's a natural part of a developer determining whether an application is
compliant and which interfaces are truly portable across distributions.
Imagine a version of lsb-appchk that looks into the LSB database to
provide more guidance on the status of unsupported interfaces
(perhaps suggesting alternatives), or imagine typing a function name into
the search box at http://www.freestandards.org/en/Developers and getting
not just the man page for the interface but also its current status
in the LSB, and maybe even in RHEL, SLES, Debian, etc.
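
To sketch the idea (the catalog contents and field names below are
invented examples, not actual LSB data), the tools and the website could
share a lookup along these lines:

    # Sketch: a shared status lookup that lsb-appchk and the developer site
    # could both draw on. The catalog entries here are illustrative only.

    STATUS_CATALOG = {
        "fopen":        {"status": "included", "since": "LSB 1.0"},
        "inotify_init": {"status": "planned", "target": "LSB 4.0"},
    }

    def interface_status(name):
        entry = STATUS_CATALOG.get(name)
        if entry is None:
            return "not tracked; likely not portable across distributions"
        return ", ".join(f"{key}: {value}" for key, value in entry.items())

    print(interface_status("inotify_init"))
    # status: planned, target: LSB 4.0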

* The key technique will be to link all major LSB artifacts (a rough
data-model sketch follows this list):

- specification, which defines the requirements of LSB interfaces
(at assertion granularity, e.g., "interface X must do Y" and "interface
X must not do Z" are two different assertions related to the same
interface);

- upstream components (e.g., glibc), which implement the LSB interfaces;

- distributions, which are collections of upstream components
at various points in time (and which define which
interfaces are candidates for inclusion in the specification);

- tests, which check whether the assertions in the specification are
true in a particular implementation;

- documentation of the interfaces for developers;

- certification and regression testing, to keep all LSB implementations
aligned.
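
A very rough sketch of what those links might look like as records
follows; every name and field is invented for illustration, and the real
schema is a phase 1 deliverable:

    # Sketch: the artifact linkage as plain records. All names and fields are
    # illustrative; the real schema is part of the phase 1 database work.
    from dataclasses import dataclass

    @dataclass
    class Assertion:        # one statement from the specification
        interface: str
        text: str

    @dataclass
    class Test:             # checks one or more assertions in an implementation
        name: str
        covers: list        # list of Assertion

    @dataclass
    class Component:        # upstream component implementing interfaces
        name: str
        version: str
        interfaces: list

    @dataclass
    class Distro:           # a collection of components at a point in time
        name: str
        components: list

    a1 = Assertion("inotify_init", "returns a valid file descriptor on success")
    t1 = Test("inotify_init-basic", covers=[a1])
    glibc = Component("glibc", "2.4", interfaces=["inotify_init"])
    rhel5 = Distro("RHEL 5", components=[glibc])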

* To develop the new system, we are taking a three-phase approach:

Phase 1 involves extending the existing LSB database to allow
assertions to be marked in the specification; interfaces to be
linked to upstream components; upstream components (and
versions of those components) to be linked to the distros that
provide them, etc.; and also building a user-friendly interface
for browsing and searching the database, with visualization and analysis
capabilities. Phase 1 also includes work to improve the test execution
framework. This can be thought of as the "building the foundation" phase.

Phase 2 involves importing the existing LSB tests. Essentially,
this involves going through the existing pile of tests,
extracting the assertions tested, and linking them to the
specification using the framework built in phase 1.
The proposal calls this "building the assertions catalog".
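
As a sketch of what "building the assertions catalog" might produce (the
annotation mechanism and assertion identifiers here are invented for
illustration; the actual markup is part of the phase 1/2 work), an
imported test could simply carry the identifiers of the assertions it
covers:

    # Sketch: tagging an existing test with the specification assertions it
    # checks. The decorator and assertion identifiers are invented examples.

    ASSERTIONS_COVERED = {}

    def covers(*assertion_ids):
        """Record which specification assertions a test exercises."""
        def wrap(test_func):
            ASSERTIONS_COVERED[test_func.__name__] = assertion_ids
            return test_func
        return wrap

    @covers("LSB.core.fopen.1", "LSB.core.fopen.2")
    def test_fopen_basic():
        f = open(__file__)      # stand-in for the real C-level check
        assert f.readable()
        f.close()

    print(ASSERTIONS_COVERED)
    # {'test_fopen_basic': ('LSB.core.fopen.1', 'LSB.core.fopen.2')}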

Phase 1 and phase 2 will be complete by mid-2007. After phase 1,
all new specifications/tests will be developed in the new framework.
After phase 2, all existing tests will have been imported into the new
framework as well. We intend for the new LSB infrastructure to be fully
operational and in production use by the release of LSB 3.2. Prototypes
should begin to come online by the end of 2006, and the required
markup/APIs for annotating assertions in new interfaces added to LSB 3.2
will be available earlier than that, so the LSB workgroup can begin
putting the information in the new framework as quickly
as possible, though the framework may not be fully operational yet.

Phase 3 runs from mid-2007 to mid-2008, or roughly to the release of
LSB 4.0. Phase 3 involves using the new infrastructure to improve test
suite coverage. Here, results scale linearly with the number of people
working on tests. We plan to employ a two-pronged approach: going broad
(which does little aside from testing that an interface is present and
that it doesn't fail catastrophically given proper input); and going
deep (which tests certain key/widely used interfaces as thoroughly as
possible). Again, the final results will depend primarily on the number
of people we have working on the tests during phase 3, but our
goal given current manpower estimates is 75% broad coverage and
10-15% deep coverage by LSB 4.0. The more resources we
have during phase 3, the better our coverage will be by LSB 4.0.
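
To give a sense of how shallow a "broad" test can be, here is an
illustration in Python using ctypes; the real LSB suites are C/TET, so
this only shows the depth of checking involved, not the mechanism:

    # Sketch: a "broad" test -- the interface exists in libc and does not fail
    # catastrophically on proper input. Purely illustrative.
    import ctypes, ctypes.util

    libc = ctypes.CDLL(ctypes.util.find_library("c"))

    def broad_test_strlen():
        assert hasattr(libc, "strlen"), "interface not present"
        libc.strlen.restype = ctypes.c_size_t
        assert libc.strlen(b"hello") == 5   # sane behaviour on proper input

    broad_test_strlen()
    print("strlen: present and sane")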

Questions:

* How do we incorporate interfaces provided by reference? I.e.,
some of the interfaces are not described in the LSB proper, but
rather are described in external specifications the LSB simply
refers to. In this case, how do we attach test cases to those
interfaces, given that the interfaces aren't in the LSB
database? Rajesh will follow up with a message to lsb-discuss.

* How do we handle functional testing? I.e., there are modules
in the LSB where interface testing is insufficient, and where
we have to test for functionality instead. For example, libpng
is tested by processing example images, then comparing the output with
the expected result. How will the new infrastructure handle this?
Rajesh will follow up with a message to lsb-discuss here as well.
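
Purely to frame the question, one possible shape for such a test is a
golden-file comparison; the file names and decode function below are
invented for illustration, not a proposed answer:

    # Sketch: a functional, golden-file style test -- run the component on
    # known input and compare the output byte for byte with a stored
    # reference. File names and the decode function are hypothetical.

    def functional_test(produce_output, input_path, expected_path):
        actual = produce_output(input_path)
        with open(expected_path, "rb") as f:
            expected = f.read()
        return actual == expected

    # Hypothetical usage: decode_png would run example.png through libpng and
    # return the raw pixel data, compared against a stored example.raw.
    # ok = functional_test(decode_png, "example.png", "example.raw")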

-ian
-- 
Ian Murdock
317-863-2590
http://ianmurdock.com/

"Don't look back--something might be gaining on you." --Satchel Paige



