[lsb-discuss] RFC: new LSB development infrastructure

Ian Murdock imurdock at imurdock.com
Fri Jul 14 18:45:20 PDT 2006


Initial thoughts on the first draft of the ISP RAS proposal:

Looks like a great start.

One thing we need to understand is the migration plan. Assuming the
database/infrastructure work begins in September and takes 6-9 months,
this work will overlap with most if not all of the LSB 3.2 development
cycle. So, we'll need to plan accordingly. Presumably, we'll want to
develop from a copy of the database and work up a procedure for either
reimporting the database when it goes live or updating the database
periodically to include any changes made in the live version (the
latter would be useful for ongoing testing against a real data set).
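
To make the second option a little more concrete, here's the kind of
refresh job I'm imagining (entirely hypothetical; it assumes the spec
database is MySQL and that we can pull a dump of the live copy, and
the host/database names below are made up):

    #!/usr/bin/env python
    # Hypothetical refresh job: rebuild the development copy of the
    # spec database from a dump of the live one.  Host and database
    # names are made up; assumes the database is MySQL.
    import subprocess

    LIVE_HOST = "db.example.org"     # wherever the live copy ends up
    LIVE_DB   = "lsbspec"            # assumed database name
    DEV_DB    = "lsbspec_dev"        # our working copy

    def refresh_dev_copy():
        # Stream a dump of the live database...
        dump = subprocess.Popen(["mysqldump", "-h", LIVE_HOST, LIVE_DB],
                                stdout=subprocess.PIPE)
        # ...straight into the development copy.
        load = subprocess.Popen(["mysql", DEV_DB], stdin=dump.stdout)
        dump.stdout.close()
        load.communicate()

    if __name__ == "__main__":
        refresh_dev_copy()

Run that from cron and the development copy never drifts far from the
real data, and the reimport-at-cutover case is just the same job run
once.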

It may be a good idea to schedule the cutover to the new system for
after LSB 3.2, to minimize disruption during crunch time. If we do
this, however, we'll need to make sure we understand the testing
strategy for 3.2. One of my goals is greatly improved test coverage
by 3.2, so I'd hate not to see the benefits of this work until LSB 4.0.

I think I've got a pretty good grasp of how tests get written in the ISP
RAS framework, but I'm less clear on the test execution environment. One
thing we might consider here is keeping TET. It seems to do a fine job
at what it does, people mostly already know how to use it, and it would
allow us to easily incorporate our existing test suites (the question
there is how we "link" them to the new infrastructure). (Note that
I'd like to spend a lot of time over the next few months thinking about
the user experience of testing, but that's a different discussion.)
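
Coming back to the "link" question for a second: since each existing
suite is already driven by tcc from a tet_scen scenario, the glue to
the new infrastructure could be as thin as a wrapper that runs tcc and
then hands the journal off to whatever results store the new system
ends up with. A rough sketch (the suite name, scenario, and upload
hook are all made up, and the exact tcc options may differ on our
version):

    #!/usr/bin/env python
    # Hypothetical glue: run an existing TET-based suite with tcc and
    # hand the journal to the new infrastructure.  The suite name,
    # scenario, and upload_journal() hook are made up.
    import subprocess

    TET_SUITE = "lsb-runtime-test"   # assumed suite under $TET_ROOT
    SCENARIO  = "all"                # the usual "run everything" scenario
    JOURNAL   = "/tmp/" + TET_SUITE + ".journal"

    def run_suite():
        # tcc -e executes the suite; -j names the journal file (check
        # the options against whichever tcc we actually ship).
        return subprocess.call(["tcc", "-e", "-j", JOURNAL,
                                TET_SUITE, SCENARIO])

    def upload_journal(path):
        # Placeholder: push the journal into the new results store.
        print("would upload %s" % path)

    if __name__ == "__main__":
        run_suite()
        upload_journal(JOURNAL)

The point being that the suites themselves wouldn't have to change at
all; only the reporting end would be new.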

I'm interested in coming up with as many use cases (e.g., the inotify
example from my email last week) as possible so we can start thinking
about the UI. I think that's going to be important. Ideally, everything
is on the web and fully integrated across all FSG systems
(certification system, Developer Network, build and test machines, etc.).

For example, I'd like to see a system where, once a distro is
certified, it automatically gets tested on a regular basis, and we
give distro vendors a way to test development versions of the
distro in that same framework. This is obviously a useful service
both for the end user (who can see how various distros relate to
various versions of the LSB) and for the distro vendor (who can
test on an ongoing basis and find problems early), and it's also
a goldmine of data for the information system.

Speaking of the information system, there's the issue of workflow. How
do the linkages between the interfaces, upstream libraries, and distros
get in there? Can we import some of it automatically (e.g., by
periodically scanning certain "tracked" distros, analyzing the
libraries, crosslinking everything, and tracking deltas over
time)? Can we get the
distros to help us keep the system up to date, perhaps by linking it to
the certification system and giving them some value in doing so? Or do
we only import things manually when they hit our radar (say, the
equivalent of the current Futures tracker)? Obviously, the
more information we have at our fingertips the better, but there's
a cost to getting it there, and taken to its natural conclusion,
such an information system could be extremely detailed and complex.
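
For what it's worth, the mechanical part of the "scan tracked distros"
idea doesn't look hard: walk the library directories, pull the dynamic
symbol table out of each shared library, and diff against the previous
snapshot. Something along these lines (the paths are examples, the
crosslinking/delta step is left out, and it assumes GNU nm is available
on the scanning host):

    #!/usr/bin/env python
    # Hypothetical scanner: walk a tracked distro's library directories,
    # record the exported (defined, dynamic) symbols of each shared
    # library, and print a snapshot we could diff against the last scan.
    # Paths are examples; assumes GNU nm is available on the scan host.
    import os
    import subprocess

    LIB_DIRS = ["/lib", "/usr/lib"]

    def exported_symbols(lib):
        # nm -D reads the dynamic symbol table; --defined-only keeps
        # only the symbols the library provides (not what it imports).
        out = subprocess.Popen(["nm", "-D", "--defined-only", lib],
                               stdout=subprocess.PIPE).communicate()[0]
        syms = set()
        for line in out.splitlines():
            fields = line.split()
            if fields:
                syms.add(fields[-1])
        return syms

    def snapshot():
        snap = {}
        for d in LIB_DIRS:
            for name in sorted(os.listdir(d)):
                path = os.path.join(d, name)
                if ".so" in name and os.path.isfile(path):
                    snap[name] = exported_symbols(path)
        return snap

    if __name__ == "__main__":
        for lib, syms in sorted(snapshot().items()):
            print("%s: %d exported symbols" % (lib, len(syms)))

The hard part, of course, is the crosslinking and keeping the data
model sane over time, not the scanning itself.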

That's it for now. More later.

-ian
-- 
Ian Murdock
317-863-2590
http://ianmurdock.com/

"Don't look back--something might be gaining on you." --Satchel Paige



