[agl-discussions] First push for Audio Agent code

Fulup Ar Foll fulup.arfoll at iot.bzh
Sat Mar 11 01:10:31 UTC 2017


Hi,

As promised during the last Audio/Graphic call, I pushed my ongoing 
work on the AGL audio agent to https://github.com/iotbzh/audio-bindings

The Audio Agent relies on a three-layer architecture:

* The lower-level layer interfaces with ALSA. This layer is pretty 
straightforward to create: even if poorly documented, the ALSA API is 
well known and relatively simple to understand. I mostly reused an 
AlsaJson gateway that I wrote in a previous life to support 
professional music sound cards 
(http://breizhme.net/alsajson/mixers/ajg#/scarlett?card=hw:18i8). AGL 
uses the same JSON/REST mapping, so reusing my old code was not too 
difficult. Nevertheless I had to write some new code to support 
asynchronous event notification and UCM (Use Case Manager), neither 
of which AlsaJsonGateway supported. I still have a few days of work 
left on this layer. Unfortunately next week I travel to Germany, so 
you may have to wait one more week before I publish a fully working 
version.
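
For those who want to see what the asynchronous notification part 
boils down to at the ALSA level, here is a minimal standalone sketch 
(illustrative only, not the binding code itself): it subscribes to 
control events on one card and prints the numid of whatever changes. 
The card name "hw:0" is just an example.

    /* build: gcc alsa-events.c -o alsa-events -lasound */
    #include <stdio.h>
    #include <alsa/asoundlib.h>

    int main(void)
    {
        snd_ctl_t *ctl;
        snd_ctl_event_t *event;

        /* open the control interface of one card */
        if (snd_ctl_open(&ctl, "hw:0", SND_CTL_READONLY) < 0)
            return 1;

        /* ask the kernel to notify us of element changes */
        if (snd_ctl_subscribe_events(ctl, 1) < 0)
            return 1;

        snd_ctl_event_alloca(&event);
        for (;;) {
            /* blocks until a control element changes */
            if (snd_ctl_read(ctl, event) < 0) break;
            if (snd_ctl_event_get_type(event) == SND_CTL_EVENT_ELEM)
                printf("numid=%u changed (mask=0x%x)\n",
                       snd_ctl_event_elem_get_numid(event),
                       snd_ctl_event_elem_get_mask(event));
        }
        snd_ctl_close(ctl);
        return 0;
    }

In the agent the same events are of course forwarded through the 
binder instead of being printed.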

* HAL (Hardware Abstraction Layer): as of today, four controls and 
IntelHDA (my desktop sound card) are provided as a sample. This part 
is not too complex, even if it requires sound card introspection and 
maps from ALSA numids to the equivalent HAL controls. Note that some 
boards (especially expensive ones) may have very specific behaviours 
that are hard to abstract: ALSA does not provide any real 
normalisation, and drivers can more or less do whatever they want. 
For the HAL abstraction I used the same declarative model that we 
already have within bindings for APIs (see the sketch just below). 
For the vast majority of ALSA sound cards the HAL should remain very 
simple to write, even if in-vehicle devices may expose some 
non-standard ALSA capabilities (e.g. volume up/down, fader, 
routing, ...). For non-ALSA boards or custom controls (e.g. audio 
on/off, routing, ...) I reserve a per-control slot for dedicated 
callbacks. Note that in order to make those callbacks simpler to 
write, I added helpers that enable synchronous API sub-calls (by 
default AGL binders are asynchronous).
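
To make the "declarative model" more concrete, here is a hypothetical 
sketch of what a HAL description conceptually looks like. Names, 
types and numid values are invented for illustration; the real 
structures live in the code on GitHub.

    /* each logical control maps either to an ALSA numid
       or to a dedicated callback (names are illustrative) */
    #include <stddef.h>
    #include <json-c/json.h>

    typedef struct halCtl halCtl;
    typedef int (*halCallbackT)(halCtl *ctl, json_object *queryJ);

    struct halCtl {
        const char  *label;    /* logical name for upper layers  */
        int          numid;    /* ALSA control numid, -1 if none */
        halCallbackT callback; /* per-control slot for callbacks */
    };

    /* custom handler for a control ALSA does not know about */
    static int masterOnOffCB(halCtl *ctl, json_object *queryJ)
    {
        /* board-specific logic (gpio, i2c, ...) goes here */
        return 0;
    }

    /* declarative map for one (imaginary) sound card */
    static halCtl sampleHalMap[] = {
        {.label = "Master_Playback_Volume", .numid = 16, .callback = NULL},
        {.label = "PCM_Playback_Volume",    .numid = 27, .callback = NULL},
        {.label = "Master_OnOff",           .numid = -1, .callback = masterOnOffCB},
        {.label = NULL} /* terminator */
    };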

* AudioLogic provides a high-level business API for applications; 
this layer should provide application portability. As of today the 
AudioLogic is just a skeleton. While some high-level controls are 
obvious (e.g. volume up/down), I did not find any existing API I 
could copy or at least take inspiration from: there are no 
application audio APIs on the Genivi wiki, and the W3C audio API 
focuses almost exclusively on mapping a low-level C library to 
JavaScript, which is out of scope. If someone has something they 
would like to contribute, please let me know.
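
To make the discussion concrete, here is the kind of surface I have 
in mind, limited to the controls already mentioned above (volume, 
fader, routing). This is purely hypothetical: nothing of it is 
implemented, and every name below is invented.

    /* hypothetical application-facing AudioLogic API */
    #include <stdbool.h>

    int audioLogicVolumeStep(int steps);    /* relative, e.g. +1/-1 */
    int audioLogicVolumeSet (int percent);  /* absolute, 0..100     */
    int audioLogicFaderSet  (int frontRear);/* -100 .. 100          */
    int audioLogicRoute     (const char *source,
                             const char *zone,
                             bool enable);  /* logical routing      */

Whether this should be exposed as C functions or only as JSON/REST 
verbs on the binder is part of the open API question further below.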

=== WORK STATUS: While there are still significant holes in the 
current implementation, the general logic of the audio agent works. 
In particular, all the asynchronous signalling coming from ALSA is 
now reported as standard AGL signals. The code should compile without 
any trouble on any recent Linux distribution (I personally use 
openSUSE 42.2), and partial functionality works.

-- Easy Ongoing Work (should be ready before the next F2F) --

* Add SET+UCM control to the ALSA layer (see the UCM sketch after 
this list)

* Complete the AudioLogic layer to a level where anyone can play with it

* Interface a few other ALSA boards (at least my laptop and Renesas 
Gen3) to make sure the HAL model is flexible/versatile enough

* Do the testing to make sure that all implemented functions work as 
expected.
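
Regarding the SET+UCM item above, the alsa-lib side of UCM is roughly 
the following minimal sketch. The verb and value names ("HiFi", 
"PlaybackPCM") depend entirely on the UCM configuration files shipped 
for the card, so take them as examples only.

    /* build: gcc ucm-set.c -o ucm-set -lasound */
    #include <stdio.h>
    #include <stdlib.h>
    #include <alsa/asoundlib.h>
    #include <alsa/use-case.h>

    int main(void)
    {
        snd_use_case_mgr_t *mgr;
        const char *dev;

        /* open the UCM profile of a card (name is an example) */
        if (snd_use_case_mgr_open(&mgr, "hw:0") < 0) {
            fprintf(stderr, "no UCM profile for this card\n");
            return 1;
        }

        /* select a use case (verb) ... */
        snd_use_case_set(mgr, "_verb", "HiFi");

        /* ... then query the playback device it defines */
        if (snd_use_case_get(mgr, "PlaybackPCM", &dev) == 0) {
            printf("playback device: %s\n", dev);
            free((void *)dev);
        }

        snd_use_case_mgr_close(mgr);
        return 0;
    }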

-- Not-so-Easy Ongoing Work --

* Do the integration with the MOST audio network. The Microchip 
driver + Unicens daemon are moving to a new version with significant 
changes, and I'm reluctant to spend time porting the old version. 
While it is clear that MOST can be integrated into this model, doing 
so requires special skills and knowledge about MOST/Unicens that I do 
not have.

* Integrate Pulse. This should not be too complex: the Pulse API is 
cleaner than ALSA's and somewhat easier to understand. The question 
is more: how far do we want to go with Pulse integration?

* Take a decision on "what should the AGL audio API look like?" For 
the CAN agent it was somewhat simpler because we had both the OpenXC 
and Volkswagen APIs; unfortunately we do not have that chance with 
audio. While we do not need to carve the audio API in stone, we 
should at least define it in such a way that we can implement the 
user experience we want in our next generation of demos.

Hopefully the README provides enough information to compile the code 
directly on any recent Linux platform. At least it should allow you 
to play with the low-level API and the HAL.

Fulup


