[Openais] ipc rewrite take 2

Steven Dake sdake at redhat.com
Wed Apr 26 13:32:20 PDT 2006


On Wed, 2006-04-26 at 13:23 -0700, Mark Haverkamp wrote:
> On Wed, 2006-04-26 at 12:39 -0700, Steven Dake wrote:
> [...]
> > 
> > Mark
> > I found a bug in the way lib_exit_fn is called (or, more accurately,
> > not called, sigh), but I'm not sure how this causes the problem.
> 
> I may have seen this.  If aisexec ran long enough, I'd start seeing 
> "ERROR: Could not accept Library connection: Too many open files"
> 
> > 
> > I have a question about the event service.  It appears that the event
> > service queues messages and sends a "MESSAGE_RES_EVT_AVAILABLE" to the
> > library caller.  Why not just send the full event in this condition?
> 
> There used to be (and may still be) issues where, if the pipe between
> aisexec and the application was full, bad things would happen.  I think
> it may have killed the connection to the application, but I can't
> remember for sure.  
> 

Yes, this still exists.

> Another reason is that I needed to know when to send a lost event
> message to the application.  If the pipe was full, I couldn't do that.  
> 

We can add a callback for this notification.
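
Something along these lines, perhaps (the names below are invented
for illustration; nothing like this exists in the tree yet):

/*
 * Hypothetical per-connection hook the IPC layer would call when it
 * has to drop a queued message, so the event service could synthesize
 * its "lost event" message to the application itself.
 */
typedef void (*openais_msg_dropped_fn) (
    void *conn,
    int priority);    /* priority of the message that was dropped */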

> Also, I needed to be able to take back undelivered messages when a
> change in subscriptions and/or filters occurred.  
> 

We can add an IPC iterator for prioritized messages so that you can get
a list of every queued message and delete any you want.
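
Roughly like this (again just a sketch, these names are not an
existing API):

struct openais_ipc_iter;

/*
 * Begin iterating a connection's outbound queue, highest priority
 * messages first.
 */
struct openais_ipc_iter *openais_ipc_iter_init (void *conn);

/*
 * Return the next undelivered message, or NULL when the queue is
 * exhausted.
 */
void *openais_ipc_iter_next (struct openais_ipc_iter *iter);

/*
 * Unlink the message the iterator currently points at, e.g. to pull
 * back events after a subscription or filter change, or to make room
 * for a higher priority event when the queue is full.
 */
void openais_ipc_iter_remove (struct openais_ipc_iter *iter);

void openais_ipc_iter_finalize (struct openais_ipc_iter *iter);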

> Also, I needed to be able to put later queued higher priority messages
> ahead of other messages when a delivery request came in. 
> 

This shouldn't be a problem.

> Also, when the queue was full, I needed access to the undelivered
> messages so that I could remove lower priority messages and replace them
> with higher priority ones.
> 

The IPC iterator should be able to address this issue as well.


Wow, quite a list of requirements there :)  Thanks for spelling them
out...  I think we can address all of these in the generic IPC
queueing code, but it will take some (a lot of) work on my part.

I've attached a new patch that fixes the evt service so that it runs
in my environment.

Could you try it?


> > 
> > One problem I see is that events can sit in the event queue until a "new
> > event" is sent, which triggers a flush of the queued events.  If this is
> > correct, the code should instead flush events whenever the output queue
> > to the library is available for writing.  This keeps us from blocking.
> > Here is a scenario; could you tell me if it happens?
> > 
> > An application writes 10000 events, which are all queued up.  The last
> > event queued then triggers one read of an event from the dispatch
> > routine.  Then several hundred events sit around waiting for another
> > event publish to cause a flush.
> > 
> > This code looks wrong:
> > inline void notify_event(void *conn)
> > {
> >     struct libevt_pd *esip;
> > 
> >     esip = (struct libevt_pd *)openais_conn_private_data_get(conn);
> > 
> >     /*
> >      * Give the library a kick if there aren't already
> >      * events queued for delivery.
> >      */
> >     if (esip->esi_nevents++ == 0) {
> >         __notify_event(conn);
> >     }
> > }
> > 
> > It would appear to notify only when esi_nevents was zero before the
> > increment.  Hence there is one notification to the API, but there
> > could be many events waiting to be read...
> 
> That notification primes the pipe when there are no others.  If others
> are added to the queue before the application requests an event,
> lib_evt_event_data_get will send another notification. 
> 
> > 
> > I'd prefer to rework all of this so that the IPC layer does all the
> > queueing operations.  We could easily add priorities to the IPC layer.
> > We could add a callback to the service handler.  If it is defined, it
> > will be called when the queue is nearly full (at which point a dropped
> > event should be sent and future library events should be dropped by
> > exec/evt.c) or when the queue becomes available again because it has
> > flushed out a bit.  If the callback were left NULL, the library
> > connection would be dropped entirely, as happens now with the other
> > services.  We could also make the size of the queue for each service
> > dynamically settable (so some services like evt, evs, and cpg could
> > have larger queues, and other services like amf and ckpt could have
> > smaller queues to match their needs).
> > 
> > Then the event service could write all events to the dispatch queue if a
> > subscription is requested.  This would reduce IPC communication and
> > context switches as well.  All flow control and overflow conditions
> > would be handled by the IPC layer in ipc.c.
> > 
> > What do you think about such a method?  This would simplify the event
> > service quite a bit and put all of the ipc in one place.
> 
> That sounds great as long as I can still maintain the event service
> protocol as it is specified to work. 
> 
> 

OK, I remember having this conversation before :)  Again, this isn't
critical for now, but it's something to keep in mind for later work...
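
For the record, here is roughly what I have in mind for the service
handler additions (field names invented for illustration; nothing
below is in ipc.c today):

struct openais_service_handler {
    /* ... existing fields ... */

    /*
     * Called when a connection's dispatch queue is nearly full
     * (full = 1) or has drained enough to accept events again
     * (full = 0).  A service like evt would send a "lost event"
     * marker and start dropping events itself.  If left NULL, the
     * IPC layer drops the library connection, as the other services
     * do today.
     */
    void (*flow_control_fn) (void *conn, int full);

    /*
     * Per-service dispatch queue depth, so evt, evs, and cpg can ask
     * for larger queues than amf or ckpt.
     */
    int dispatch_queue_entries;
};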

> > 
> > Regards
> > -steve
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ipc-rewrite-2.patch
Type: text/x-patch
Size: 75165 bytes
Desc: not available
Url : http://lists.linux-foundation.org/pipermail/openais/attachments/20060426/929a823e/ipc-rewrite-2-0001.bin

