[Accessibility] Accessibility Conference Wednesday 1/26/05 I/O
jgoldthwaite at yahoo.com
Wed Feb 16 14:51:33 PST 2005
Accessibility Conference January 26, 2005 I/O session
Have a range of issues for I/O:
- Braille display: would like to use it with all apps, including
- Same for speech: how do we use multiple languages?
- What are the requirements that we need to
- Want to be able to use multiple audio devices: media player plus synthesizer
Last: we will have a breakout tomorrow to see what we
want to say about the kernel and I/O. We need prompts at
boot-up. Had an Alpha64 that sent prompts to the
serial port, which made it possible to handle boot-up. We
should look at the stack and see what needs to be added to
increase accessibility: GRUB, the bootloader.
Genericize it; the kernel has always taken input from a
serial port, which gives you a robust system.
I don't think we need to say that everybody needs to
use a particular synthesizer or screen reader. Users
should be able to use the tools they want. We need to resolve
conflicts between applications and AT. Users need
compatibility. Want to be able to move between
Braille displays are very expensive, as are hardware
synthesizers; we want them to work with everything.
What works well for your application? What breaks?
We want to get into those today and in the breakout.
- candidates for ways to handle speech
Peter- since we have breakout sessions to go into
detail, we should use this time to set requirements.
I suggest that we support heterogeneous environments: GUI
and console. Users want to do things from their
console and from Windows with sc
Should be able to use the same TTS engine to do all
Bill- clarification: are you talking about a console in
the presence of a multi-user environment vs. a console and
single users? I think there are some requirements of
Peter- a TTS engine that requires a GUI to run is a
Bill- there are situations where you don't want to use
multiple instances of the TTS and want to share one.
Peter- there are some hard limits on what you can
support; Bill's case is an example. How do you use a GUI-based
TTS at boot-up with the console?
Bill- it may not be feasible for two users to share
the I/O device
Peter- are you thinking of a terminal server, is that it?
Bill- if I have a console open as root and a console
open as a user, both are trying to use the same TTS server. Do we
want to support multiple users?
Hynek- we are mixing two issues: one is how to get
multiple applications speaking at once; for that we need a
dispatcher app like tts-dispatcher. The other is the question of
audio: what do we do with the waveforms we get? We have many
questions about multimedia frameworks.
Bill- do we
Janina- I think we want multiple applications to use
media. The details are implementation questions; we
should be trying to solve the problem here. Another
piece: do you only give speech to the application
that has focus? There are times you would want
to hear announcements from other applications.
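[Editorial illustration] The dispatcher idea raised here can be sketched roughly: multiple applications queue speech through one manager, and alert-priority messages are spoken ahead of ordinary queued text. The class name and priority scheme below are hypothetical, not any real speech server's API.

```python
import heapq
import itertools

# Hypothetical priorities: alerts jump ahead of ordinary queued text.
PRIORITY_IMPORTANT = 0
PRIORITY_TEXT = 1

class SpeechDispatcher:
    """Minimal sketch of a dispatcher serializing speech from many apps."""

    def __init__(self):
        self._queue = []
        self._counter = itertools.count()  # tie-breaker keeps FIFO order

    def say(self, client, text, priority=PRIORITY_TEXT):
        # Lower priority value = spoken sooner; equal priorities stay FIFO.
        heapq.heappush(self._queue, (priority, next(self._counter), client, text))

    def next_utterance(self):
        """Pop the next message the synthesizer should speak, or None."""
        if not self._queue:
            return None
        _, _, client, text = heapq.heappop(self._queue)
        return (client, text)
```

For example, a clock announcement queued as important would be spoken before text an editor had already queued, which matches the "announcements from other applications" case above.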
Frank- there are some people that want to use a phone
Kirk Reiser- if we're talking about switching between X
windows and the console, you are going to be using the same
Janina- it wi
Kirk- you are going to be running the X server on your
local machine. Don't think multi-user is an issue.
Frank- when something pops up on tty7 while I'm on tty1,
you want to be able to get audio from it. You need
Janina- let's clarify what we mean by multi-user. I
often log in multiple times. If you are talking about
a terminal server, I don't see how you can provide
audio to multiple people.
Will- there are some things you want to be able to
override: a clock,
Frank- the multimedia problem is different because of
how we use audio cues. There isn't an environment that
allows us to get all the information that we want in a
2D or 3D space. If we use 2D space we can deal with
many more stimuli.
Will- I think it is still a multimedia problem.
Hynek- the question of whether the info should be
sequential or simultaneous depends on the user's
preferences. It is something that should be handled
by a media server. I agree with Mark: what we
face with multimedia servers is that the requirements of
access are very high. The latency of multimedia
services matters to the servers. They are
thinking of playing some radio stream, which is much different
from using speech for feedback on all your work. How do we
integrate multiple frameworks together? I
Al- we can give names to all the issues, but the
problems are not easily anticipated by media
providers. The access community has requirements for
multimedia that need specs defined.
Janina- we need to look into this. We need to look to the
future and see what is needed.
Milan- when developing advanced accessibility
solutions, we find problems with how to handle
messages. How do you handle . Consider the general
solution for how to handle media.
Bill- a comment on Kirk's point about multi-user sharing of
hardware. There is the scenario of applications that
have processes running as root while you are working
as a user. Another: in the current X Window desktop
model, the shared space is the X desktop. It's not
unusual to have applications that are remote, or remote
and owned by another user. These share the video space
but not the audio services. The external user may not
be authorized to send things to your serial port.
Larry Weiss- we are all saying that, within multimedia,
text-to-speech is a special class. We want
to have multiple sessions of talking applications that
will be treated differently from media players. In
many cases, you'd want your speech stream to be serial,
except in some situations for alerts. These may come as
sounds or as a second TTS stream.
The shared characteristic: you don't want to change the speech
rate for each, and you don't want to set the volume unless you
choose to have it different for each.
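[Editorial illustration] Larry's "shared characteristic" point suggests a settings model where rate and volume are global defaults unless a user explicitly overrides them for one stream. A minimal sketch, with hypothetical names and default values:

```python
class SpeechSettings:
    """Sketch of shared speech settings with optional per-client overrides:
    rate/volume apply globally unless a user chooses otherwise per stream."""

    def __init__(self, rate=180, volume=0.8):
        self._defaults = {"rate": rate, "volume": volume}
        self._overrides = {}  # client name -> partial settings dict

    def set_default(self, key, value):
        # Changing a default affects every stream that has not overridden it.
        self._defaults[key] = value

    def override(self, client, key, value):
        # An explicit per-stream choice, e.g. louder volume for alerts only.
        self._overrides.setdefault(client, {})[key] = value

    def effective(self, client):
        # Per-client values win; everything else falls back to the defaults.
        return {**self._defaults, **self._overrides.get(client, {})}
```

So speeding up the global rate changes every talking application at once, while an alerts stream that opted into a louder volume keeps only that one difference.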
Janina- let's look at Braille briefly. What kinds of
things have people run into?
Peter- just wanted to say that the problem of Braille
is simpler. It seems like we are coming to a solution
with a Braille driver at a low level. It is easier
in that only the AT will be trying to write to the Braille
device, so we don't have a contention problem.
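[Editorial illustration] Peter's single-writer model could look roughly like this: one client (normally the AT/screen reader) claims the device, and any other writer is refused, so contention never arises. The names are hypothetical, not an actual driver API.

```python
import threading

class BrailleDevice:
    """Sketch of a low-level Braille driver with a single-writer rule."""

    def __init__(self, cells=40):
        self.cells = cells          # width of the physical display line
        self._owner = None
        self._lock = threading.Lock()
        self.display = ""           # what the hardware would currently show

    def claim(self, client):
        # Only one client may own the display at a time.
        with self._lock:
            if self._owner is None:
                self._owner = client
                return True
            return False

    def write(self, client, text):
        if client != self._owner:
            raise PermissionError("device is owned by another client")
        self.display = text[: self.cells]  # truncate to the physical line

    def release(self, client):
        with self._lock:
            if self._owner == client:
                self._owner = None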
Frank- I think this server should encompass Braille and
speech together. You need to be able to determine
where you want things displayed. (Groans)
Bill- Might it be sufficient to synchronize Braille
Peter- The requirement can only be really good
Peter- Gnopernicus does those three things at the
application level. We are going to find AT that wants to
do things in different combinations.
Al- well said. This is a war I've been having with the
CSS group. In web architecture, CSS thinks it owns the
presentation space. They have a technology for
sniffing out needs and have separate canvases for video and
audio. The user needs to be able to control their environment
by profiling to the system.
Peter- one of the strong arguments for a strong
manager for speech: we have many things getting in
there and preempting. In the AT world we have a
smaller group wanting to use the resources.
Frank- I didn't say it was going to be easy. It may take a
while, but we need to develop standards that will allow
us to be efficient in all areas of life. To be as
efficient as sighted people we need this stuff. When
my synth says something, I need to be able to hit a
key and have it displayed in Braille. These things are
important if we are going to be as efficient as
someone who is sighted.
Janina- we will continue in more detail tomorrow.
Peter- you said something: you want to hit a key and
put it on the Braille display. If speech comes from some other
window, is that something you want to have displayed?
Your screenreader should have control
Frank- possibly, but you never know what is going to
happen. Talking about the keyboard layer.
Peter- if want
. To meet your user requirement, x has
Frank- you need to override the focus. Your girlfriend is on the
computer, some problem pops up; it should be able to pop
up on the Braille display.
Peter- trying to get to the user requirement that
creates the need for audio and Braille to be managed
together. What is
Frank- the requirement is that they not step on each other.