[Accessibility-ia2] media a11y
silviapfeiffer1 at gmail.com
Tue Jun 21 20:24:28 PDT 2011
On Wed, Jun 22, 2011 at 12:04 PM, Pete Brunet <pete at a11ysoft.com> wrote:
>> 2.) There's a "User Requirements" document that we created in our
>> W3C work on the accessibility of HTML 5 media that people should know
>> about. If I may be so bold, you may want to bookmark:
>> We intend this document as an introduction to the full
>> range of user requirements for people of all kinds of
>> disabilities. I think we're pretty close to covering
>> that landscape, and we will try to add to this document
>> as remaining issues are clarified. It is intended this
>> document will become a non-normative W3C publication,
>> probably as a "W3C Note" published by the Protocols and
>> Formats Working Group (PF) of the W3C's Web Accessibility
>> Initiative (WAI).
> This is a very good document.
> There is a sentence that seems at odds with something Silvia said, i.e. "The
> current solution is audio descriptions and they are much harder to produce
> than text descriptions." The document says, "The technology needed to
> deliver and render basic video descriptions is in fact relatively
> straightforward, being an extension of common audio-processing solutions."
Not sure what is at odds here, but maybe this part: I was talking
about how hard it is to author audio descriptions in comparison to
text descriptions, while the document is talking about how easy it is
to deliver video descriptions. I don't really see a contradiction here.
> But nonetheless, I can see some advantages to using VTD (video text descriptions):
> - no need to find (and pay) a talented (pleasing to listen to) speaker
> - no need to find a speaker whose voice is a good match for the audio track
> (easily distinguishable from the other speakers)
> - ability for screen reader user to adjust the playback speed, pitch, and
Yes, I totally agree.
> In the section on extended video it says, "Extended descriptions work by
> pausing the video and program audio at key moments, playing a longer
> description than would normally be permitted, and then resuming playback
> when the description is finished playing." There must have been some
> thought about how this would be done, i.e. what mechanisms are proposed for
> this? The AT user could use a context menu using standard GUI accessibility
> or failing that the AT could provide access via IAccessibleAction (or ATK's
> equivalent) on whatever control will be provided for this. (This same issue
> is covered in Enhanced Captions/Subtitles, especially requirements ECC-3 and
> That document points to this blog entry:
> where it says, "...subtitles attached to the video can be sent to an online
> translation tool and converted to whatever language you want on the fly."
> syncing mechanism.
The syncing used there is for captions and subtitles rather than text
descriptions. Captions and subtitles are synced with the video's
timeline. That's relatively easy, because it doesn't need an extension
of the timeline which is what text descriptions need.
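The distinction can be illustrated with a small scheduling sketch. This is a simplified model with hypothetical cue timings, not anything from the actual spec: a caption is read silently alongside the audio, but a spoken text description must fit into the gap before the next program audio event, or playback has to pause for the remainder.

```python
# Sketch of why text descriptions may need to extend the timeline,
# while captions do not. All cue timings here are hypothetical.

def pause_needed(cue_start, next_event, speech_seconds):
    """Return how long playback must pause so a spoken text
    description finishes before the next program audio event.

    cue_start      -- media time (s) at which the description cue fires
    next_event     -- media time (s) of the next dialogue/audio event
    speech_seconds -- estimated time for TTS/braille to render the cue
    """
    available = next_event - cue_start
    # Captions are read silently in parallel with the audio, so the
    # available gap is irrelevant; a spoken text description competes
    # with the program audio and must fit the gap or pause the video.
    return max(0.0, speech_seconds - available)

# A 6-second description firing at t=12 with dialogue resuming at t=15
# leaves a 3-second gap, so playback must pause for 3 seconds:
print(pause_needed(12.0, 15.0, 6.0))  # 3.0
```

If the estimated speech time fits the gap (say, a 5-second description in a 10-second gap), no pause is needed, which is why plain caption-style syncing suffices for some cues but not all.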
>> 3.) Let's be sure to think in terms of rich text handling. Our media
>> work at the W3C has forced us to recognize that the text we will be
>> passing to a11y APIs will sometimes contain markup, and we'd like to see
>> assistive technologies dealing with the markup appropriately. We're
>> still working on how best to clarify this in the ARIA support
>> documentation that is being produced by PF, but it's not too soon to put
>> this consideration on the table here.
> I think the UA rather than the AT should provide a rendering of the marked
> up text. That rendering would be a simple text string plus text
> attributes. Please see the IA2 text attributes at:
> This would include support for portions of text that are in different
Can you help me understand: How would that solve the issue of timeline
extension for cues that take longer to listen to (or read in braille)
than is available during the video's playback time?
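The UA-side rendering Pete suggests, a plain text string plus attribute runs in the spirit of IA2's text attributes, can be sketched as follows. The span input format and the `flatten` helper are hypothetical, purely for illustration:

```python
# Sketch of flattening marked-up cue text into a plain string plus
# attribute runs, roughly as an IAccessibleText-style API would expose
# it. The (text, attrs) span format here is a made-up intermediate.

def flatten(spans):
    """spans: list of (text, {attr: value}) pairs in document order.
    Returns (plain_text, runs), where each run is a tuple
    (start_offset, end_offset, attrs) over the plain string."""
    text, runs, pos = "", [], 0
    for chunk, attrs in spans:
        text += chunk
        if attrs:  # only record runs that actually carry attributes
            runs.append((pos, pos + len(chunk), attrs))
        pos += len(chunk)
    return text, runs

# A cue with one italicised word, roughly "A <i>loud</i> crash":
plain, runs = flatten([("A ", {}),
                       ("loud", {"font-style": "italic"}),
                       (" crash", {})])
print(plain)  # A loud crash
print(runs)   # [(2, 6, {'font-style': 'italic'})]
```

The AT would then receive only the flat string and the runs, and could render or announce the attributed portions however it sees fit. Note, though, that this answers only the rendering question, not the timeline-extension question raised above.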
More information about the Accessibility-ia2 mailing list