[Accessibility] Re: Localized braille (Was: Gnopernicus and ISO-Latin2 characters)

Samuel Thibault samuel.thibault at ens-lyon.org
Fri Aug 26 17:52:36 PDT 2005


Hi,

To cut down on the cross-posting noise, I am sending this only to
`accessibility'. We really need a dedicated mailing list for this topic.

Bill Haneman, on Fri 26 Aug 2005 11:42:43 +0100, wrote:
> In order to do reasonable i18n work, one needs to be able to do
> string-to-character offset conversions,

The mb*towc*() functions (mbtowc(), mbstowcs(), mbrtowc(), ...) are
there for that.
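
For instance, here is a minimal sketch (assuming a UTF-8 locale) of how
byte offsets can be mapped to character positions with nothing but
mbrtowc():

  #include <locale.h>
  #include <stdio.h>
  #include <string.h>
  #include <wchar.h>

  int main(void)
  {
      setlocale(LC_CTYPE, "");            /* assumes a UTF-8 locale */
      const char *s = "re\xc3\xa7u";      /* "reçu" encoded in UTF-8 */
      mbstate_t st;
      size_t len = strlen(s), off = 0;
      memset(&st, 0, sizeof st);
      while (off < len) {
          wchar_t wc;
          size_t n = mbrtowc(&wc, s + off, len - off, &st);
          if (n == (size_t)-1 || n == (size_t)-2)
              break;                      /* invalid or truncated sequence */
          printf("character U+%04lX starts at byte %zu\n",
                 (unsigned long)wc, off);
          off += n;
      }
      return 0;
  }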

> determine the type of a unicode character,

There are iswupper(), iswspace(), ...
What is indeed missing, for instance, is a function for determining
whether a character is a combining accent (UCData [1] may provide that).
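
A minimal sketch of the classification side (assuming the locale has
been set up):

  #include <locale.h>
  #include <stdio.h>
  #include <wctype.h>

  int main(void)
  {
      setlocale(LC_CTYPE, "");           /* assumes a UTF-8 locale */
      wchar_t wc = L'\u00c9';            /* LATIN CAPITAL LETTER E WITH ACUTE */
      printf("upper=%d space=%d alpha=%d\n",
             iswupper(wc) != 0, iswspace(wc) != 0, iswalpha(wc) != 0);
      /* There is no isw*() test for "combining mark", though: that is
       * the kind of gap a small library like UCData could fill. */
      return 0;
  }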

> perform collation,

wcscoll().
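
A sketch, assuming a French UTF-8 locale is installed:

  #include <locale.h>
  #include <stdio.h>
  #include <wchar.h>

  int main(void)
  {
      setlocale(LC_COLLATE, "fr_FR.UTF-8");  /* assumes this locale exists */
      const wchar_t *a = L"c\u00f4te";       /* "côte" */
      const wchar_t *b = L"cote";
      /* wcscoll() orders according to the locale's collation rules,
       * whereas wcscmp() just compares raw code points. */
      printf("wcscoll=%d wcscmp=%d\n", wcscoll(a, b), wcscmp(a, b));
      return 0;
  }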

> Unless the encoding and output libraries are designed to communicate 
> this information to one another, you can't remap the output cell 
> positions to the original input string properly.

Isn't the `offsets' parameter of braille_encoder_translate_string
sufficient for that?
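
I don't have the exact prototype in front of me, but the idea, as I
understand it, is something like the following hypothetical sketch
(names and signature made up for illustration, not the actual
gnome-braille API):

  #include <stddef.h>
  #include <wchar.h>

  /* Hypothetical: fill offsets[i] with the index, in the input string,
   * of the character that produced output cell i.  The caller can then
   * map any cell on the braille display back to the source text. */
  size_t translate_with_offsets(const wchar_t *in, wchar_t *out,
                                size_t out_max, size_t *offsets)
  {
      size_t n_out = 0;
      for (size_t i = 0; in[i] != L'\0' && n_out < out_max; i++) {
          out[n_out] = in[i];   /* a real table would emit braille cells here */
          offsets[n_out] = i;
          n_out++;
      }
      return n_out;
  }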

> If C has all we need, then why do these other libraries exist? (i.e. 
> ICU, Apache apr, Qt's unicode methods, glib's g_unichar/g_utf8 code, etc.).

Standards are always slow to move, while third-party libraries are
easy to extend. Now C provides quite a lot of functions. Of course it
won't ever provide _every_thing that people would want for working
with unicode. But I had a quick look at gnome-braille and the only
thing that I noticed to be missing from libc/iconv is the canonical
decomposition function. Glib-2.0 is about 500KB; keeping only the
canonical decomposition function (and all its dependencies within glib)
still leaves around 250KB. That is still huge compared to the size of
the UCData library for instance, which provides a decomposition function
for something like 16KB (and works on any C system).
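
For what it's worth, using UCData's decomposition looks roughly like
this (a sketch from memory of its interface, so take the exact names
with a grain of salt):

  #include <stdio.h>
  #include <ucdata.h>   /* UCData, see [1] */

  int main(void)
  {
      unsigned long num, *dcp;

      ucdata_load(".", UCDATA_DECOMP);      /* load the decomposition table */
      if (ucdecomp(0x00e9, &num, &dcp)) {   /* U+00E9 LATIN SMALL LETTER E WITH ACUTE */
          unsigned long i;
          for (i = 0; i < num; i++)
              printf("U+%04lX ", dcp[i]);
          printf("\n");                     /* expected: U+0065 U+0301 */
      }
      ucdata_unload(UCDATA_DECOMP);
      return 0;
  }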

Well, if glib were able to isolate its unicode functions better, it
would be a different matter.

> Also please note that glib is ported to many platforms including Windows 
> and GPE.

Yes, but it will never be as portable as plain C99/POSIX functions.

> In any case the GObject dependency is something that 
> could be eliminated if the benefits were significant enough;

GObject is about 200KB... That seems like quite an overkill to me,
while plain C function pointers might not be that hard to use instead.
Locale management is an example of really hard stuff, and GNU libc
doesn't use GObject for that...
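
A plain C table of function pointers would give the same pluggability;
a hypothetical sketch (these names are made up, not taken from
gnome-braille):

  #include <stddef.h>
  #include <wchar.h>

  /* Hypothetical translator "class" done with plain function pointers. */
  typedef struct braille_translator braille_translator;

  struct braille_translator {
      size_t (*translate)(braille_translator *self,
                          const wchar_t *in,
                          wchar_t *out, size_t out_max,
                          size_t *offsets);
      void   (*destroy)(braille_translator *self);
      void   *priv;     /* per-implementation private data */
  };

  /* Usage: t->translate(t, in, out, sizeof out / sizeof *out, offsets); */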

> and we clearly must avoid implementing our own unicode processing
> library in this braille code.  This is not just because of the
> undesirability of code duplication in general, but more importantly
> the need for timely one-spot bugfixes; no unicode library is bugfree,
> and the unicode standard continues to evolve, so getting it right is
> not a trivial task.

Of course, but standard C functions plus a small library like UCData
may well be sufficient (and they are probably among the most bug-free
implementations around).

Well, to sum up: I am not personally against GLib/GObject, but my fear
is that people may refuse to use this library merely because of the
size of dependencies that could potentially be avoided. Dave, Marco?

Regards,
Samuel
[1] http://crl.nmsu.edu/~mleisher/ucdata.html



