[Openais] [LCK] exec/lck.c lock_algorithm question
Pascal Bouchareine
pascal at gandi.net
Fri Sep 14 06:06:54 PDT 2007
Steven,
Thanks for your reply.
Attached is the "test_lock" bit of code, which acquires
an exclusive lock and sleeps for a while. I'll synchronize
time on the "vertical axis" :)
The following is the current behaviour:
  - client A -              - client B -
  sh# /var/tmp/test_lck
  ResourceOpen 1            sh# /var/tmp/test_lck
  lock status is 1          ResourceOpen 1
  (.. sleeps ..)            (.. locks forever ..)
  ResourceUnlock 1
  ResourceClose 1
  LckFinalize 1
I was expecting something like:
  - client A -              - client B -
  # /var/tmp/test_lck
  ResourceOpen 1            # /var/tmp/test_lck
  lock status is 1          ResourceOpen 1
  (.. sleeps ..)            (.. waits ..)
  ResourceUnlock 1          lock status is 1
  ResourceClose 1           (.. sleeps ..)
  LckFinalize 1             ResourceUnlock 1
                            ResourceClose 1
                            LckFinalize 1
This is what I get with the said patch, though I'm not
sure I'm not breaking something else.
I have another question, by the way: the client-side locking code
in lib allocates a new fd (dummy_fd/lock_fd) for lock requests
and then closes that fd -- this leads to a POLLHUP and lck_exit_fn
being called, and possibly orphaned locks?
On Thu, Sep 13, 2007 at 03:26:40PM -0700, Steven Dake wrote:
> Pascal,
>
> The lock service is very experimental at the moment and just meant to
> use in a development mode and for developers to continue its evolution.
>
> If you can send me a short test case which demonstrates the issue along
> with your desired behavior, I can a) tell you if the lock service is
> broken in this regard and fix it and b) tell you if your expected
> behavior matches the SA Forum specifications.
>
> Also I would suggest using trunk since 0.81 has some broken
> functionality wrt the lock service.
>
> Regards
> -steve
>
> On Fri, 2007-09-14 at 00:03 +0200, Pascal Bouchareine wrote:
> > Hi all,
> >
> > I'm new to this and try to read the whole stuff right now.
> >
> > I'm having problems using the LCK service and got confused
> > by the lock_algorithm of exec/lck.c:
> >
> > if ex lock granted
> > if ex pending list has locks
> > send waiter notification to ex lock granted
> > else
> > (..)
> >
> > And the following :
> > {
> > ..
> > if (resource->ex_granted) {
> > /*
> > * Exclusive lock granted
> > */
> > if (resource_lock->lock_mode == SA_LCK_PR_LOCK_MODE) {
> > lock_queue (resource, resource_lock);
> > }
> > } else {
> > ..
> > }
> > }
> >
> > When one process holds an exclusive lock on the resource,
> > the next one to request an EX_LOCK is never notified, since it's
> > not added to the wait list, and hangs.
> >
> > Replacing the ex_granted case with an unconditional lock_queue,
> > disregarding the requested lock_mode, I get a better behaviour,
> > but is this the correct one?
> >
> > Thanks,
> > Pascal
> >
--
\o/ Pascal Bouchareine - Gandi
g 0170393757 15, place de la Nation - 75011 Paris