[lsb-discuss] PulseAudio and LSB 4.0 I think it should be pulled completely.

Peter Dolding oiaohm at gmail.com
Thu Jul 24 23:04:32 PDT 2008


On Fri, Jul 25, 2008 at 2:11 PM, Jeff Licquia <jeff at licquia.org> wrote:
> Peter Dolding wrote:
>>
>> Also they are not on road map.   Bad logic has entered the road map
>> design.
>
> Let me say more directly what I think Ted is trying to say more
> diplomatically.  You say our roadmap has bad logic, so give us the better
> logic!
>
> The road map can be updated to reflect better information, but we have to
> have that information first.
There are basically a few classes of things here.

Phonon from KDE is a wrapper that sits over GStreamer, Helix and
others.  It is an upper-level wrapper over the media processing engines.

Then you have stuff like GStreamer: codec processing engines that also
act as wrappers over the audio subsystems under them.  So they too let
you code once.
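
A rough illustration of that code-once property, assuming GStreamer
0.10 and a file path that is only an example: playbin decodes the
stream and picks whatever audio sink the platform provides, so the
application never names ALSA, OSS or a sound server.

#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    /* playbin builds the whole decode chain and selects an audio sink
       (alsasink, osssink, pulsesink, ...) on its own. */
    GstElement *play = gst_element_factory_make("playbin", "play");
    g_object_set(G_OBJECT(play), "uri", "file:///tmp/example.ogg", NULL);
    gst_element_set_state(play, GST_STATE_PLAYING);

    /* block until the stream finishes or errors out */
    GstBus *bus = gst_element_get_bus(play);
    GstMessage *msg = gst_bus_poll(bus, GST_MESSAGE_EOS | GST_MESSAGE_ERROR, -1);
    if (msg)
        gst_message_unref(msg);

    gst_element_set_state(play, GST_STATE_NULL);
    gst_object_unref(bus);
    gst_object_unref(play);
    return 0;
}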

Now the bit that somehow got missed off the map of the Linux audio
world, and it is the important bit: the wrappers over the raw output
systems.  They simplify the audio system and normally make the
application platform independent.  They are the better thing for
developers to be using, and they can normally be embedded into their
applications without issue.  http://www.xiph.org/ao/ is one of the
broader ones.  You also find these systems embedded in game engines and
the like.  Because of their embeddable nature we don't really even have
to require distributions to provide them as files.  Of course it would
be kind of the LSB to keep a list of the recommended and nicest ones
out there for third-party developers.  A common configuration would be
nicer still, but it is not really required.  This class is what
developers truly need if they don't like the ALSA API.  Also note that
most of the stuff above here started life as a wrapper; GStreamer just
grew more complex over time.
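
To ground that, here is a minimal libao sketch; the tone, sample format
and default-driver choice are only examples.  The point is that the
application codes to one API and libao picks the output backend.

#include <ao/ao.h>
#include <math.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    ao_initialize();

    /* 16-bit stereo at 44.1 kHz; libao maps this onto whatever
       output backend it was built with. */
    ao_sample_format fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.bits = 16;
    fmt.channels = 2;
    fmt.rate = 44100;
    fmt.byte_format = AO_FMT_LITTLE;

    ao_device *dev = ao_open_live(ao_default_driver_id(), &fmt, NULL);
    if (dev == NULL)
        return 1;

    /* one second of a 440 Hz tone, interleaved left/right */
    int bytes = fmt.bits / 8 * fmt.channels * fmt.rate;
    char *buf = calloc(bytes, 1);
    int i;
    for (i = 0; i < fmt.rate; i++) {
        short s = (short)(0.5 * 32767.0 * sin(2.0 * M_PI * 440.0 * i / fmt.rate));
        buf[4 * i]     = s & 0xff;
        buf[4 * i + 1] = (s >> 8) & 0xff;
        buf[4 * i + 2] = s & 0xff;
        buf[4 * i + 3] = (s >> 8) & 0xff;
    }
    ao_play(dev, buf, bytes);

    free(buf);
    ao_close(dev);
    ao_shutdown();
    return 0;
}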

Nothing listed so far gives a rat's whether everything uses its
interface or not.  They also don't run services; they are just
libraries.  They don't redirect sound traffic into themselves; it is
all one way, transferring out to the output or in from the input.  And
if anything above here crashes or malfunctions, it normally only
affects a single application at a time, or at worst a segment of
applications at a time (i.e. the ones sharing a common configuration
file).

Then you move into the evils of sound servers.  The history of sound
servers started because early OSS could only play sound from one
application at a time.  That has been fixed for a long time, yet the
idea of a sound server being key keeps living on.  Stuff like
PulseAudio wants to control all sound output from the computer.  At
this level we have lots of fragmentation between competing sound
servers.  This level also comes with a price: when you have a sound
server running you must have a sound server process, a mixer that does
do special things and does cost CPU and RAM.  And since everything is
now going through that one process, if that process locks up or dies,
sound output is stuffed.
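
For what it is worth, the one-stream-at-a-time limitation that sound
servers were born to work around is handled inside ALSA these days by
the dmix plugin, with no extra daemon.  A rough ~/.asoundrc sketch; the
card number, ipc_key and buffer sizes here are assumptions to adjust
for the local hardware:

pcm.!default {
    type plug
    slave.pcm "dmixed"
}

pcm.dmixed {
    type dmix
    ipc_key 1024              # any key unique on this system
    slave {
        pcm "hw:0,0"          # first card, first device
        period_size 1024
        buffer_size 8192
        rate 44100
    }
}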

It is simple stability.  We have one mixing path in ALSA already; add
another sound server process on top and you have just added an extra
point of failure that can cripple the complete sound output of the
computer.  Every time you add a sound server that demands everything
come through it, you are destroying stability.

How adding PulseAudio or any extra sound server adds complexity to
development without gain is quite simple.  I build my application for
ALSA.  PulseAudio redirects ALSA, and for some reason the feature I
used does not work in PulseAudio.  I contact PulseAudio and get told I
should write an output driver using PulseAudio instead of going direct
to ALSA.  Now I have to take care of two audio interfaces for a single
platform.  Then someone needs my application on Windows, without
PulseAudio running: now three drivers.  Then someone wants it through
the NAS sound server instead of Pulse: now four drivers, and the list
just keeps on growing.  Soon, as an application developer, you have
built your own version of a wrapper like libao.  You might as well skip
the hell and go straight to the wrapper level, as the short sketch
below shows.
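
Concretely, with a wrapper like libao the backend choice collapses to a
single call.  Continuing the earlier sketch; which driver names
actually exist depends entirely on how the local libao was built, so
treat them as examples only:

/* same application code, different backend */
int driver = ao_driver_id("pulse");    /* or "alsa", "nas", "oss", ... */
if (driver < 0)
    driver = ao_default_driver_id();   /* fall back to the system default */
ao_device *dev = ao_open_live(driver, &fmt, NULL);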

The LSB is not about destroying stability and adding complexity, or at least it should not be.

As the LSB we need to be trying to get rid of all these different APIs
at the sound server level and reducing the complexity the wrappers have
to cover.  Approving PulseAudio is nothing more than giving a green
light to others to try to push their sound servers into the LSB,
adding another API and making life harder and harder and harder for
developers.

Worst bit: every extra process that sound has to pass through between
my application and the sound card adds lag.  There is no such thing as
a 100 percent lag-free sound server.

Good logic says stay out of the sound server area.  It is not a section
of the Linux audio framework that should be left alive with its own
APIs.  Either sound servers plug correctly into ALSA if they want all
audio from the system, or they plug correctly into Phonon if, like NMM,
they just want a section of it, or at a bare minimum they plug into the
LSB-approved wrapper.  That way application developers don't have to
give a single stuff about sound servers, and we are not demanding that
distribution makers ship anything that can make their systems unstable,
beyond the bare minimum at least.
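
As one example of a sound server plugging in under ALSA rather than on
top of it: the alsa-plugins package ships a pulse PCM/CTL plugin, so a
distribution that wants PulseAudio can route the ALSA default device
into it and plain ALSA applications never know the difference.  A rough
/etc/asound.conf sketch, assuming that plugin is installed:

# Send the ALSA "default" device through PulseAudio via the
# alsa-plugins pulse module; applications keep talking plain ALSA.
pcm.!default {
    type pulse
}
ctl.!default {
    type pulse
}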

Focus needs to move to the more critical wrapper levels, like the
full-blown media frameworks: GStreamer, the xine backend and Phonon.

Peter Dolding

PS I skipped the driver level.  Really it is something you cannot
avoid, so I am not bothering to explain it.


