[Ksummit-discuss] [TECH TOPIC] asm-generic implementations of low-level synchronisation constructs

Peter Zijlstra peterz at infradead.org
Thu May 8 14:27:34 UTC 2014


On Thu, May 08, 2014 at 11:13:12AM +0200, Peter Zijlstra wrote:
> ATOMIC_RET(ptr, __ret, stmt)
> ({
> 	typeof(*ptr) __new, __val;
> 
> 	smp_mb__before_llsc();
> 
> 	do {
> 		__val = load_locked(ptr);
> 		stmt;
> 	} while (!store_conditional(ptr, __new));
> 
> 	smp_mb__after_llsc();
> 
> 	__ret;
> })

So the most common constraint (which you've confirmed holds for ARM as
well) is that we must not have memory accesses between the LL and the SC.
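
For concreteness, a rough sketch (purely illustrative, not what any arch
header actually provides) of how the load_locked()/store_conditional()
helpers assumed above could map onto ARMv7 ldrex/strex for a 32-bit word:

static inline int load_locked(int *ptr)
{
	int val;

	/* LL: load the value and set the exclusive monitor for ptr */
	asm volatile("ldrex	%0, [%1]"
		     : "=&r" (val) : "r" (ptr) : "memory");
	return val;
}

static inline int store_conditional(int *ptr, int newval)
{
	int fail;

	/* SC: store only if the monitor is still set; strex gives 0 on success */
	asm volatile("strex	%0, %2, [%1]"
		     : "=&r" (fail) : "r" (ptr), "r" (newval) : "memory");
	return !fail;
}

An intervening load or store inside that window can, on some
implementations, clear the reservation, so the SC keeps failing and the
loop never makes progress.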

Making sure GCC doesn't emit any is tricky; the best I can come up with
is tagging all the variables with the register qualifier, like:

ATOMIC_RET(ptr, __ret, stmt)
({
	/* register hint: try to keep the temporaries out of memory */
	register typeof(*ptr) __new, __val;

	smp_mb__before_llsc();		/* ordering before the LL/SC section */

	do {
		__val = load_locked(ptr);
		stmt;			/* computes __new, may break out early */
	} while (!store_conditional(ptr, __new));

	smp_mb__after_llsc();		/* ordering after the LL/SC section */

	__ret;				/* value of the statement expression */
})

Now, I'm not at all sure whether register still means anything to GCC,
but in the faint hope that it still treats it as a hint, this might just
work.
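
Spelled out as an actual macro, a minimal sketch (assuming the same
load_locked()/store_conditional() and smp_mb__{before,after}_llsc()
helpers as above) would be something like:

#define ATOMIC_RET(ptr, __ret, stmt)				\
({								\
	register typeof(*(ptr)) __new, __val;			\
								\
	smp_mb__before_llsc();					\
								\
	do {							\
		__val = load_locked(ptr);			\
		stmt;						\
	} while (!store_conditional(ptr, __new));		\
								\
	smp_mb__after_llsc();					\
								\
	__ret;							\
})

The statement expression makes __ret the value of the whole thing, which
is what lets the callers below simply return it.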

> static inline int atomic_add_unless(atomic_t *v, int a, int u)
> {
> 	return ATOMIC_RET(&v->counter, __old,
> 		if (unlikely(__val == u))
> 			break;
> 		__new = __val + a;
> 	);
> }

And that would then become:

static inline
int atomic_add_unless(register atomic_t *v, register int a, register int u)
{
	return ATOMIC_RET(&v->counter, __val,
		if (unlikely(__val == u))
			break;
		__new = __val + a;
	);
}
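
And just for illustration, the value-returning primitives fall out of the
same pattern; atomic_add_return() would come out as something like:

static inline
int atomic_add_return(register int a, register atomic_t *v)
{
	return ATOMIC_RET(&v->counter, __new,
		__new = __val + a;
	);
}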