[Accessibility] 03/30 Open A11y meeting minutes

Peter Korn peter.korn at oracle.com
Mon Apr 12 16:07:56 PDT 2010


Whether or not TDDs are being phased out (or simply falling into 
disuse), the key point is that there are all these new, non-analog 
technologies with which TDDs cannot interoperate.  So some other 
mechanism is needed for real-time-text communication in those mediums.
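
The distinction at issue can be sketched in a few lines of Python 
(a hypothetical in-memory "transport"; real RTT deployments use 
protocols such as ITU-T T.140 carried per RFC 4103, not this toy):

```python
# Contrast send-on-<CR> messaging with real-time text (RTT), where each
# keystroke is transmitted the moment it is entered.

def buffered_send(keystrokes):
    """Classic messaging: nothing leaves until the SEND key (<CR>) is hit."""
    packets, buffer = [], []
    for key in keystrokes:
        if key == "\n":                 # SEND-key equivalent
            packets.append("".join(buffer))
            buffer = []
        else:
            buffer.append(key)
    return packets                      # text still in `buffer` never gets sent

def realtime_send(keystrokes):
    """Real-time text: every keystroke is its own transmission."""
    return [key for key in keystrokes if key != "\n"]

typed = "help\n"
print(buffered_send(typed))   # one packet, only after <CR>: ['help']
print(realtime_send(typed))   # one packet per character: ['h', 'e', 'l', 'p']
```

The E911 point in the quoted discussion falls out of `buffered_send`: 
if the caller is cut off before <CR>, everything in `buffer` is lost, 
whereas with `realtime_send` every character already went out.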


Peter Korn
Accessibility Principal

> Thanks Peter.  TDDs are real-time devices.  Are TDDs being phased 
> out?  That would make sense with the availability of mobile devices, 
> iPads, netbooks, and laptops.
> Peter Korn wrote:
>> Pete,
>> Real-time-text does not refer to human-speech-to-ASCII/UNICODE-text.  
>> Rather, it refers to two people communicating via text, each 
>> typically entering text from some flavor of keyboard, with their 
>> keystrokes being transmitted "in real time" as they are entered, 
>> rather than only after pressing a SEND-key equivalent (e.g. <CR>).  
>> This is seen as particularly important for E911 services, where 
>> someone might not be in a position to type everything and then 
>> press SEND.
>> You could certainly hook up a dictation system to this, but that 
>> would be an additional layer on top of the ANPRM RTT requirement, not 
>> a part of it.  At least as I understand the ANPRM.
>> Regards,
>> Peter Korn
>> Accessibility Principal
>> Oracle
>>> Regarding the following exchange in the minutes:
>>> PK: ... The refresh adds a bunch of new rules. One of the notable 
>>> additions is real-time text for the deaf and a much more significant 
>>> effort on the deaf and on communication technologies
>>> PB: On the speech reco angle, did they talk about whether it was 
>>> good enough for what we want to do?
>>> PK: I don't remember any mention of speech recognition. The ANPRM 
>>> says at a high level that someone without hands needs to be able to 
>>> use your app. It doesn't specifically mention speech recognition. Apps 
>>> that support audio and video chat must also support real-time text.
>>> This is an accurate representation of the exchange during the 
>>> meeting, but what I really was asking is: in the case where speech 
>>> reco might be used to transcribe speech, will the current (or 
>>> near-term) state of the art of speech reco technology be able to 
>>> provide acceptable real-time text, considering the challenge of large 
>>> vocabularies, speaker independence, the variety of speakers, and 
>>> conversational speech?  Or do the requirements allow for the use of 
>>> real-time human transcribers (either local or remote)?
>>> -- 
>>> *Pete Brunet*
>>> a11ysoft - Accessibility Architecture and Development
>>> (512) 238-6967 (work), (512) 689-4155 (cell)
>>> Skype: pete.brunet
>>> IM: ptbrunet (AOL, Google), ptbrunet at live.com (MSN)
>>> http://www.a11ysoft.com/about/
>>> Ionosphere: WS4G
>>> ------------------------------------------------------------------------
>>> _______________________________________________
>>> Accessibility mailing list
>>> Accessibility at lists.linux-foundation.org
>>> https://lists.linux-foundation.org/mailman/listinfo/accessibility
