[Accessibility-ia2] media a11y

Pete Brunet pete at a11ysoft.com
Tue Jun 21 20:55:47 PDT 2011



On 6/21/2011 10:24 PM, Silvia Pfeiffer wrote:
> On Wed, Jun 22, 2011 at 12:04 PM, Pete Brunet <pete at a11ysoft.com> wrote:
>>> 2.)	There's a "User Requirements" document that we created in our
>>> W3C work on the accessibility of HTML 5 media that people should know
>>> about. If I may be so bold, you may want to bookmark:
>>>
>>> 	http://www.w3.org/WAI/PF/HTML/wiki/Media_Accessibility_Requirements
>>>
>>> 		We intend this document as an introduction to the full
>>> 		range of user requirements for people of all kinds of
>>> 		disabilities. I think we're pretty close to covering
>>> 		that landscape, and we will try to add to this document
>>> 		as remaining issues are clarified. It is intended that this
>>> 		document will become a non-normative W3C publication,
>>> 		probably as a "W3C Note" published by the Protocols and
>>> 		Formats Working Group (PF) of the W3C's Web Accessibility
>>> 		Initiative (WAI).
>>>
>> This is a very good document.
>>
>> There is a sentence that seems at odds with something Silvia said, i.e. "The
>> current solution is audio descriptions and they are much harder to produce
>> than text descriptions."  The document says, "The technology needed to
>> deliver and render basic video descriptions is in fact relatively
>> straightforward, being an extension of common audio-processing solutions."
> Not sure what is at odds here, but maybe this part: I was talking
> about how hard it is to author audio descriptions in comparison to
> text descriptions, while the document is talking about how easy it is
> to deliver video descriptions. I don't really see a contradiction
> there.
>
>> But, nonetheless, I can see some advantages to using VTD (video text
>> description):
>> - no need to find (and pay) a talented (pleasing to listen to) speaker
>> - no need to find a speaker whose voice is a good match for the audio track
>> (easily distinguishable from the other speakers)
>> - ability for a screen reader user to adjust the playback speed, pitch, and
>> voice.
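To make that last point concrete, a rough sketch of handing a description's
text to speech synthesis with user-chosen rate, pitch, and voice.  The Web
Speech API here only stands in for whatever the screen reader actually uses,
and the settings shape is made up for illustration:

    interface SpeechSettings { rate: number; pitch: number; voiceName?: string; }

    function speakDescription(text: string, settings: SpeechSettings): void {
      const utterance = new SpeechSynthesisUtterance(text);
      utterance.rate = settings.rate;      // e.g. 2.0 for faster playback
      utterance.pitch = settings.pitch;    // 0..2, 1 is the default
      const voice = speechSynthesis.getVoices()
        .find(v => v.name === settings.voiceName);
      if (voice) utterance.voice = voice;  // a voice distinct from the program audio
      speechSynthesis.speak(utterance);
    }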
> Yes, I totally agree.
>
>> In the section on extended video it says, "Extended descriptions work by
>> pausing the video and program audio at key moments, playing a longer
>> description than would normally be permitted, and then resuming playback
>> when the description is finished playing."  There must have been some
>> thought about how this would be done, i.e. what mechanisms are proposed for
>> this?  The AT user could use a context menu via standard GUI accessibility,
>> or, failing that, the AT could provide access via IAccessibleAction (or ATK's
>> equivalent) on whatever control is provided for this.  (This same issue
>> is covered in Enhanced Captions/Subtitles, especially requirements ECC-3 and
>> ECC-5.)
>>
>> That document points to this blog entry:
>>
>> http://www.webmonkey.com/2010/08/mozillas-popcorn-project-adds-extra-flavor-to-web-video/
>> where it says, "...subtitles attached to the video can be sent to an online
>> translation tool and converted to whatever language you want on the fly.
>> JavaScript handles the syncing."  It would be helpful to understand the
>> syncing mechanism.
> The syncing used there is for captions and subtitles rather than text
> descriptions. Captions and subtitles are synced with the video's
> timeline. That's relatively easy, because it doesn't need an extension
> of the timeline which is what text descriptions need.
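Right, and for captions that syncing really can be that simple; here is a
rough sketch of letting the browser fire cue events off the video's timeline
via the HTML5 text track API (the element IDs are made up, and I'm not
claiming this is what Popcorn.js itself does):

    const video = document.getElementById("player") as HTMLVideoElement;
    const captionDisplay = document.getElementById("captions") as HTMLElement;
    const track = video.textTracks[0];   // the captions/subtitles track
    track.mode = "hidden";               // we render the cue text ourselves

    track.addEventListener("cuechange", () => {
      const cue = track.activeCues?.[0] as VTTCue | undefined;
      // cuechange fires at each cue's start and end time, so the displayed
      // text simply follows video.currentTime; no timeline extension needed.
      captionDisplay.textContent = cue ? cue.text : "";
    });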
The discussion of extended video descriptions and enhanced
captions/subtitles describes a means of pausing and resuming the
video/audio.  Is there nothing in that mechanism that is useful for
solving the issue of the AT having to pause/resume the video/audio?
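For example, I could imagine the description track itself driving the
pause/resume, something like the sketch below (the names are illustrative,
not a proposed API, and speech synthesis stands in for the AT):

    function attachExtendedDescriptions(video: HTMLVideoElement,
                                        descriptions: TextTrack): void {
      descriptions.mode = "hidden";                 // no visual rendering
      descriptions.addEventListener("cuechange", () => {
        const cue = descriptions.activeCues?.[0] as VTTCue | undefined;
        if (!cue) return;
        video.pause();                              // stop program audio and video
        const u = new SpeechSynthesisUtterance(cue.text);
        u.onend = () => { video.play(); };          // resume when the description ends
        speechSynthesis.speak(u);                   // or hand the text to the AT instead
      });
    }

If the AT rather than the page did the speaking, the same pause/resume hooks
are what it would need on whatever media control is exposed.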
>
>>> 3.)	Let's be sure to think in terms of rich text handling. Our media
>>> work at the W3C has forced us to recognize that the text we will be
>>> passing to a11y APIs will sometimes contain markup, and we'd like to see
>>> assistive technologies dealing with the markup appropriately. We're
>>> still working on how best to clarify this in the ARIA support
>>> documentation that is being produced by PF, but it's not too soon to put
>>> this consideration on the table here.
>> I think the UA rather than the AT should provide a rendering of the marked-up
>> text.  That rendering would be a simple text string plus text
>> attributes.  Please see the IA2 text attributes at:
>>
>> http://www.linuxfoundation.org/collaborate/workgroups/accessibility/iaccessible2/textattributes
>> This would include support for portions of text that are in different
>> languages.
> Can you help me understand: How would that solve the issue of timeline
> extension for cues that take longer to listen to (or read in braille)
> than is available during the video's playback time?
I think Janina changed topics from cues to providing access to rich text.
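To illustrate what I mean by the UA flattening the markup: something along
these lines, where a cue's markup becomes one plain string plus attribute
runs, which is roughly what IAccessibleText text attributes expose.  The
AttributeRun shape here is only illustrative, not IA2 IDL:

    interface AttributeRun {
      start: number;       // inclusive character offset
      end: number;         // exclusive character offset
      attributes: string;  // e.g. "language:fr;font-style:italic;"
    }

    function flattenCueMarkup(root: Node): { text: string; runs: AttributeRun[] } {
      let text = "";
      const runs: AttributeRun[] = [];

      const walk = (node: Node, attrs: string): void => {
        if (node.nodeType === Node.TEXT_NODE) {
          const start = text.length;
          text += node.textContent ?? "";
          runs.push({ start, end: text.length, attributes: attrs });
          return;
        }
        if (node instanceof HTMLElement) {
          // Fold markup into IA2-style attributes; lang and <i> are just
          // examples of what cue markup can carry.
          const lang = node.getAttribute("lang");
          if (lang) attrs += "language:" + lang + ";";
          if (node.tagName === "I") attrs += "font-style:italic;";
        }
        node.childNodes.forEach(child => walk(child, attrs));
      };

      walk(root, "");
      return { text, runs };
    }

So, assuming the cue exposes its markup as a DOM fragment the way
VTTCue.getCueAsHTML() does, flattenCueMarkup(cue.getCueAsHTML()) would hand
the AT one string plus runs it can map onto IAccessibleText offsets,
language spans included.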
> Cheers,
> Silvia.
>


