[Accessibility] 6/21/06 FSG Accessibility meeting notes

John Goldthwaite jgoldthwaite at yahoo.com
Wed Jun 28 05:38:38 PDT 2006


This is the transcription of the first hour of the meeting.  I'll finish the remainder later in the week and post it again then.

6/21/06 FSG Accessibility meeting notes
Attending:
Gunnar Schmidt
Janina Sajka
John Goldthwaite
Bill Haneman
Aaron Leventhal
George Kraft
Ariel Rios
Cathy Laws
Larry Weiss
Doug Beattie

Janina- We got most of a recording for last week but I was not able to get it up until today. Are draft notes okay? No, we'll clean them up and post again. We have some additional audio archives that will be posted on the server tomorrow. I want to thank George and IBM for LSR and Willie for Orca, because I now have two screen readers for Linux that work. I used LSR and Orca on the plane to Montreal and listened to files without converting them to text and listening to them at the command line. I was able to read files in Open Office. Thanks to everyone who has been working on this. I think there is value for even ordinary readers to start using them. It is fun to work with and great to see happening. We have several areas of AT-SPI to continue from last week.

AT-SPI discussion
Bill- the topics I assume are most important are the ancestor collection and the validation topics. I think it was George that raised the issue of how we are going to move toward having an ABI. Could we have a quick idea of the urgency of this as context for the validation discussion: the need for an ABI that we can create a conformance test for.
George- how do we validate that a Linux distribution has the infrastructure for accessibility that we need? Do I have C bindings for minimum increment; if it was spec'd, is the distro providing it? Are they providing the interfaces, and am I using the correct interfaces? The LSB is asking our workgroup, at the ABI level: what can we import, what should we be looking at? They will pull the ABI and header file information into their database. They will create existence tests, they will roll that into the LSB product standard and branding, and the distros will start certifying against that. Now they are doing libatk, and we could go one step further and say here is the library for the AT side. We've had discussions the last couple of days about libcspi vs libspi: which is the normative interface?

Bill- you mentioned libatk; doing conformance tests against libatk will not be as useful as you'd like, because ATK itself doesn't provide any implementation. A pure ABI test doesn't require the ABI to actually be functional; it is legal to return useless values. Since ATK is just an interface definition and doesn't provide an implementation, that is legal. If the library is present, it doesn't guarantee there will be a non-null implementation of those interfaces.
George- we are trying to write a test case or two to demonstrate that. Having it use the full infrastructure, making the atk calls are going all the way through the AT/SPI layer and working to test that.  Our testers are trying to overcome some problems and then we’ll put out a tar file of the test cases.
Bill- it is going to be a challenge to make that test useful.
George- In the LSB frame of mind there are two different levels. An ABI existence test: is the library there, are all the ABI entry points there, are the ABI symbols there? You can test that, and the distros are all over the place as to what level, what least common denominator, of ABI is there.
Bill- We can certainly do an existence test of atk and that is not useless. It tells you a lot.
George- The LSB can do that with little or no effort; LSB has automated tools to do that. Might be useful to do that on the AT side. You are educating me on cspi vs spi and which is really the normative interface. I just sent out an email saying I had found seven things that used cspi.
Bill- I replied to that; of those seven things- [break in recording] -there aren't that many clients. Sicore and gok are the same. Branch and keyboard shouldn't be linking to those at all.
George- LSB is trying to do a 1-2 year outlook. They could do some technical work for us on the ABI existence tests.
Bill- if it is 2 years before the tests will be used that is one thing.  If they want to 
George- found library, symbols, just the behavior
[recording resumes]
Bill- you are saying, even the existence test 
George- The reason I wanted to bring that up is that there are Linux distributions that are shipping Gnome that don't have any assistive technology at all; they are stripping it all out.
Bill- that is important to keep in mind. We have a couple of tangents here: we don't want to ship anything that's half baked, we want to set the bar high and be ambitious. On the other hand, there is something to be said for doing something smaller in scope initially, to establish that distros are actually doing something. If we set the bar too high and make too great a standard, it is going to take us a long time, let alone the distros.
Janina- we should think about this incrementally. If we know the things we put in today are things we will stay with in the future, then as we get more specificity around this we allow distros to work into supporting accessibility, and turn this into a feature. The fact that we are not ready with the whole thing is going easier on them; allow them to work into it. Just be up front about it: this isn't everything, more will be coming. We will have some features now and add more as we get them ready.
Bill- what we want to avoid is putting out something and finding we started on the wrong foot.  We’ll probably go back to that topic again. That is why these fundamental questions keep coming up.
George- with LSB and Linux in general, if we are just looking at the ABI level, we're not getting into behavior; it's just: are the symbols there? If symbols change or the behavior changes, Linux libraries allow you to put in multiple symbols and have symbol versioning. We can have two calls with different symbols and version it out. If we do paint ourselves into a corner, we will have a way around it. Linux mimics the symbol versioning of Solaris(?).
Bill- that gives us some context for this. You are saying we have an opportunity to have some existence tests written for us, we just need to identify which headers and definitions for symbols that the automated system can use.
George- Yes, they have a program called LSB libcheck that has a giant list of the libraries and knows all the symbols in the libraries.  It runs across the system and checks them all..
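The kind of automated existence check George describes, where a tool walks a list of libraries and verifies that each expected symbol is present, can be sketched in a few lines. This is only an illustration, not the real libchk tool: the library probed here is glibc (as a stand-in, since libatk or libspi may not be installed), and the symbol lists are invented for the example.

```python
import ctypes
import ctypes.util

def check_symbols(libname, symbols):
    """Report, for each expected symbol, whether the named library exports it."""
    path = ctypes.util.find_library(libname) or libname
    try:
        lib = ctypes.CDLL(path)
    except OSError:
        return {sym: False for sym in symbols}  # the library itself is missing
    # hasattr() on a CDLL attempts a dynamic symbol lookup, so it answers
    # exactly the existence question: is this entry point in the library?
    return {sym: hasattr(lib, sym) for sym in symbols}

# Probe libc on a Linux system as a stand-in for an accessibility library.
report = check_symbols("libc.so.6", ["printf", "strlen", "not_a_real_symbol"])
```

As Bill points out next, this proves only that the entry points exist, not that they do anything useful when called.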
Bill- it would be interesting to investigate how the headers generated by the CORBA IDL compiler would feed into that process. We would want to avoid specifying things that were...
We don't want to overspecify by pulling in things that are incidental rather than things we intend to be normative. It would be worth doing some investigating to see if there-
George- scrub the header file a little bit
Bill- or put it through as it is and find out if there is anything that we should not be standardizing, that isn't something the OMG CORBA spec dictates or isn't something we thought we were dictating in the IDL. Do a test run so that we can identify and understand the dependencies.
Gunnar- or something that is too specific to the kind of binding approach used for the CORBA C bindings that will make it more difficult to use Dbus later.
Bill- that is an independent thing. What we would be specifying there would be a CORBA ABI; that doesn't mean that is the only one. Our plan of record is to have multiple possible validations. For clients on current distros, the first way to pass the test is to have that ABI layer in place.
Gunnar- if you have an ABI, not from the IDL point of view but from a linking point of view: if you have an ABI and you have several implementations of it, it doesn't matter if one uses CORBA and the other uses DBUS, as long as all the messaging-protocol-specific stuff is abstracted away.
Bill- I don’t think that is possible in C.
Gunnar- if we don’t have something abstracted away and if we push for LSB standardization of it, then we give up the many worlds approach.  
Bill- I don't agree. It means additional worlds will have to be dist(?) before we can validate them. We go on record as supporting multiple validations. The first draft of the spec will have a single validation and we will have a process for additional validations. That is the plan of record.
Gunnar- the aim of the LSB is to provide an ABI for third parties which is identical on every system. So assistive technologies shipped with the distribution would not be the important factor here. If there is a third party writing an AT, is there an ABI that is guaranteed to work now, and also if Dbus is used in the future? If we have cspi in the LSB, doesn't that force every application to use cspi? It is the only ABI that is guaranteed to exist no matter which messaging protocol you use.
Bill- yes, but that would mean that it was the only ABI that an application could use if it wanted to work on all LSB platforms. 
Gunnar- that’s right.
Bill- That rules out orca.
Gunnar- not necessarily.  Orca is open source.
Bill- if it means changing your source code radically, we need a binary-compatible replacement, or at least a source replacement, for pyorbit that uses Dbus in the back.
Gunnar- does that mean orca is tied to CORBA, or is it possible to use another messaging protocol in the future without re-writing?
Bill- I don't think orca is tied to CORBA. orca doesn't know anything about CORBA, and I don't think orca touches CORBA. There is a layer of separation between the orca client and the CORBA detail, just as cspi clients have a separation between them and the CORBA details. The separation layers are different in those two cases. If you look at the integrated stack, the only common ABI is the CORBA C binding. If you want to go even further, and I wouldn't stress this because no such clients exist, if you had a Java client even that ABI wouldn't be there; the only common ABI would be a wire protocol. I agree that standardizing on the wire protocol being CORBA is tricky. If we were to do that, we would want a very explicit plan for supporting additional wire protocols such as DBUS in the future. There is a downside to standardizing at the protocol level. The plan of record has always been that the ABI is the normative interface. What that means is that the IDL is normative, but you can't validate directly against the IDL. To validate the IDL, you have to validate it in a particular ABI environment. You have to define an IDL-to-protocol or IDL-to-binary transformation. The CORBA OMG spec is one such transformation; it is the best transformation we have at the moment. We have a stated desire to move away from it to another ABI, but that ABI is not ready yet. If we had an IDL compiler for Dbus, we would have a more straightforward path to create the ABI. I don't know if we ever will have an IDL compiler for DBUS; I wouldn't say that is an absolute requirement, but it would make the process cleaner and simplify the automation of ABI tests. The only ABIs we have, practically speaking, are cspi and the CORBA C bindings.
Gunnar- or cspi and the Python bindings.    
Bill- well, 
Gunnar- one way to do this is to say we guarantee that, when we manage to replace CORBA with Dbus, cspi and the Python bindings will continue to work.
Bill- yes, but that isn’t many worlds, that is just picking a layer that we can reimplement the backend of later. Although that is an interesting solution,  I’m not comfortable standardizing on cspi.  I would like to not have to support cspi in the future, I would like to see it go away.  
George- what about spi, the CORBA bindings?
Bill- I don’t have a problem with the CORBA bindings from a legacy point of view.  Keeping them up to date is just a matter of running an IDL compiler.  Making sure the symbols exist, etc. CORBA bindings you get for free if you have an implementation of the IDL.  I don’t have any problem supporting any automated stub or skeleton system because it is derivative from the IDL as long as we agree what the IDL should be.
Gunnar- if LSB standardizes on spi then we require that every LSB system must ship with AT-SPI on CORBA.
Bill- no, I hope not. The purpose of the many-worlds plan is to make sure that we don't lock anyone into a CORBA-only future. We could have a CORBA ABI that we can validate against while we work on a new ABI. Then at some point in the future we can add the new ABI, not to the list of required things but to the list of conformant things. It can be either-or: a conformant platform can provide either of the backends. The other alternative, as you say, might be to standardize on cspi and Python. I'm not too keen on cspi, and I wrote it. I don't think it does anything nice for you beyond an attempt to hide CORBA from the client. That is really the only advantage it has, and it does not do that as well as you'd like.
Gunnar- one approach might be that we standardize the IDL in the accessibility working group but don't ask the LSB to standardize an ABI at the current time. If there is a need for a third-party AT to have an ABI that is guaranteed to work...
Bill- it's not just third-party AT; orca is third party. If I am a user and I want to run orca, I want to know that a distribution that is LSB compliant will let me run orca or gok or whatever technology I need. If I am deploying a desktop for a workplace and I want to accommodate people with disabilities, and somebody says if you are not running Windows then you need to be able to run orca, then I would like LSB accessibility to give me some confidence it is going to work.
Gunnar- can we provide this ABI that is guaranteed to work if we are not going to standardize on CORBA?
Bill- not in the short term but yes in the long term.
George- in the short term there are no Dbus-based ATs
Bill- and there are not any Dbus based service providers
Gunnar- exactly, so maybe we don’t need an official standard at the current point in time.
Bill- the lack of the standard means that we can say nothing about whether orca will run.   If we wait to have alternatives, it means we will have to wait to get a standard.
Gunnar- No, I mean we could have a standard on the IDL but not make a standard on the ABI,
Bill- The problem is that if you have a standard that you can't validate against, you have no way-
Gunnar- We could provide validation within the accessibility group, but this is different from adding these interfaces to the LSB.
Janina-Let’s put this conversation off and move on to the next topic.  This is a discussion we could spend more time.
Bill- I'm a bit frustrated because we have covered this several times, reaching what I think is consensus, but it keeps coming up. This is ground that's been plowed many times. Let's move on to ancestor collection. There is a suggestion that we need an API that allows selection of the ancestry of a particular accessible object, a known node in the hierarchy. I question the use cases where that is important. I had two issues with it. Is it needed in the collection interface, where I thought it was a bit of a strange fit? The second question is how significant an addition it is, that is, how valuable it would be, so we can assign a priority.
Larry- the common use case is that your focus is based on your context and what your AT wants to report about the focus. As focus changes, the context changes, for example which panel you are in. To get those and to do the comparison, you need to make multiple calls up the chain to find out you are in a panel that is in a panel. As a result we would probably not report that, even though it might be useful.

Bill- as focus is changing, the client needs to maintain caches of the local environment around the object. The assistive technologies that I have seen so far that are clients of the API have either maintained a cache, so they haven't had to repeat a lot of API calls when focus changes except in unusual cases, or they took the opposite approach, caching nothing and grabbing everything on the fly. The interesting thing about the ones that grab everything on the fly is that they seem to have good performance.
Larry- you keep mentioning that ATs keep this cache of the locality around the current object; the question I have is, how do they get the info in the first place?
Bill- by walking as much of the tree as they need to get the information.  Or step wise moving up the hierarchy and applying heuristics as they go.
Larry- what is being requested is that rather than making five calls up the hierarchy you would make one, but you would be doing it on a regular basis. Unless you are also collecting the siblings at the same time, to know the parent hasn't changed you have to go at least that far.
Bill- remember that this is happening in user time. Focus is going to be changing in user time, the places where we run into performance problems is where we have to sift through a large piece of content looking for information for the user top down or doing screen review and we are exhaustively searching through the on screen content.
Aaron- I'd like to challenge that; in the work we are doing with Ajax, there are a lot of dynamic applications. You might have an application that has live statistics updating on the screen, and these are in containers that are marked as live regions, and these live regions have different properties on them. When there is a change in any part of the page you need to quickly move up the ancestor chain and find out which live region it is in to determine what to do with it.
Bill- I would argue that is information that you want to cache about the object, if you know an object is of dynamic interest even when you are not focused on it. Either you look at the focused object only, or you say that this object is of a type that I want to know what is happening to, even when it doesn't have the explicit focus.
Aaron- but that means when the page loads you will have to go through the entire page looking for all the live regions, or, with the collection, you have to cache all of the live regions and everything in them.
Bill- Or you have to wait for an event and you can see that it came from a live region.
Aaron- How do you know it came from a live region unless you go up the parent chain?
Bill- you don't want to do even a simple API call on every item that emits an event. That is a piece of information you have to do some homework to find out, but you only have to do it once: once you have identified it as a live region, you don't want to throw that away and keep asking for its ancestors. You only have to do that once for a given object.
Aaron- don't you think the caching will use too much memory, while ancestor collection is elegant and gets you what you want fairly quickly?
Bill- No to the first and yes to the second. If objects are live you are going to have to cache things anyway. It is much smarter to keep them around and cache their contents than to try to determine, as each event happens, whether it is coming from a live object or not. Ancestor collection is not going to be as efficient after the first call as just looking up in your local hash table or cache and saying: this is a live object and I need to do something with it, or this is an object that the user explicitly told me to watch. All those are possibilities, but I don't think it is a panacea. As for the actual performance cost of walking up the tree, you won't get 100x from this, maybe 5x for a very specific activity which you only need to do once per object. If performance is a problem, you only need to do this once and put the information in the cache. If you don't want to cache, our experience in other, less dynamic content is that walking up the tree to find ancestors doesn't seem to be a problem. If it is happening in user time as a result of user action, performance is not going to be a problem. If it is happening because dynamic content is changing, then yes, there are going to be performance implications. You don't want to do an ancestor call on every event; that is not going to solve your performance problem, you will need to do something faster.
Aaron- I hope you are right; I haven't had to get that deep into the code. I think it is useful, I think it's an elegant thing to have around. People say they want it, so it is risky to say that caching will solve everything.
Bill- how is that risky? What is the risk?
Aaron- we can add it later? Is that what you are saying?
Bill- yes, if we find that caching is the right solution, we could add it later if we have some data that performance is a problem.
Aaron- I don't know what it will take for orca, but if you have a screenreader developer that planned to do it one way and has to do a lot of caching, the risk is that they are going to have to write a lot of code that they are not going to be happy with.
Bill- that I don't think is a real risk. This is just an opinion, but an opinion colored with a lot of gnopernicus and gok, if not orca, experience: if it happens in user time, performance is not going to be a problem. In those situations where performance does become a problem, you rely on a cache. When I say cache, I don't mean a whole off-screen model. If the reason you are looking for ancestors is to find the context of an object, it makes more sense to flag the objects or containers you have already identified as interesting than to use some other kind of filtering. Instead of having to call for the ancestors, you can say: I have seen this object before and I have it on hand in the ancestor list. It is only a string of pointers, only a string of IORs; it doesn't take up much memory.
Aaron- but you would need one for every container to know it was in that container in the first place.
Bill- no, my point is that if you have an object reference, you can tell if it is an object that you've seen before. You can keep a cache of objects that you have seen before, which is why we have ManagesDescendants: we can identify containers that have too many children to do that with, where you don't want to keep a list. You are right, depending on how you implement the cache it can be difficult to tell between an object you have never seen before and one that you saw before, deemed uninteresting and discarded. There is a problem if you have written your code so that every time you get an object you treat it like you have never seen it before; but that is a problem whether you have ancestor collection or not. And getting an ancestor list, what does that tell you? Every item in the ancestor list has to be independently queried if you want to get any semantic information out of it.
Aaron- you only need to compare the new ancestors. Say you are on ABCDE and you move to ABCFG; you only need to look at F and G now.
Larry- and C, you are going to have to look at it to prove it is the same.
Aaron- As the user moves around, the change of context gets spoken. Say they move out of one group box to a radio button in a radio group that is in another group box. What you want to know is what part of the ancestor chain changed, so you can get the labels, descriptions and roles and say what has changed; but the document is still the same, the form they are in is still the same.
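The comparison Aaron and Larry are describing, finding where the old and new ancestor chains diverge so only the changed context gets spoken, is a common-prefix computation. Plain lists of labels stand in for accessible objects in this sketch:

```python
def changed_context(old_chain, new_chain):
    """Return the suffix of new_chain below the last common ancestor."""
    i = 0
    limit = min(len(old_chain), len(new_chain))
    while i < limit and old_chain[i] == new_chain[i]:
        i += 1
    return new_chain[i:]

# Moving from A>B>C>D>E to A>B>C>F>G: only F and G are new context to announce.
delta = changed_context(list("ABCDE"), list("ABCFG"))
```

As Larry notes, this assumes the shared prefix really is the same objects (identity, not merely equal labels), which is why the comparison has to reach at least as far as C.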
Bill- gnopernicus does this by walking up the tree every time focus changes; not an issue because it is happening in user time.
Aaron- You think that 5 or 6 getParent calls are not a big deal, instead of making it into one call?
Bill- It is not a big deal; it would be convenient, but it is not a big deal. Where you start to feel the pinch is where you have a big dynamic webpage or, god forbid, an animation, and every object in the document is moving around.
Aaron- a more realistic case is something like CBS Sportsline Game Tracker, which changes statistics on the screen in a table for each player. There are a lot of reorder, show and change events. For every one you are going to want to know: am I in a live region, and do I care about that live region? With the match rule you could ask for just the parents that are live regions and get only one back for each. Whenever there is a show event, you would only have to do one call. Otherwise, when I load a page I am going to have to ask for all the live regions and cache all of the descendants of each live region.
Bill- what are you talking about on doing just one call for a show event?
Aaron- because you could do the ancestor collection with a match rule only for live regions.
Bill- why do you want to do ancestor collection on an event? Why do you want a collection? You either want a vector of the ancestors or you don't; either way it is a vector.
Aaron- I only care about the change events that are in live regions.
Bill- You are going to have to filter those; you are not going to get only live events.
Aaron- I only want to process those in live regions
Bill- you need to look at the event and see if it is in your live region list.
Aaron- Say the event is a great-grandchild of the live region, so I get the accessible for that great-grandchild and do getParent, getParent, getParent, all the way to the top of the document, to prove to myself whether it is in a live region or not.
Bill- That is one possibility, but you only have to do that once. Once you determine whether it is or is not, you can cache that information, and you can retrieve from the cache much faster than you can do API calls to collection or getAncestors. It is still much faster just to ask whether it is in the cache.
Aaron- the second time it is fast; the first time will be slow if there are 20 calls.
Bill- it's going to be slow, maybe 20 microseconds. We get about 10,000 round-trip AT-SPI calls per second.
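Taking the quoted throughput at face value, the one-time cost of the walk is easy to bound; the numbers below are just the figures from the discussion:

```python
calls_per_second = 10_000   # quoted round-trip call throughput
walk_calls = 20             # getParent calls to reach the top of the document

# Cost of one full ancestor walk, in milliseconds.
cost_ms = walk_calls / calls_per_second * 1000
```

At that rate a 20-call walk works out to about 2 ms, slow compared with a hash lookup but, done once per object, comfortably inside user time.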
Aaron- okay, maybe it really is not a problem.
Bill- I don't think it is enough of a problem to change your programming model. [55] There is a risk in changing programming strategy, even if we reach consensus.
Aaron- Let's table it for now; come back to it if performance is a problem.
Bill- returning the ancestor list as a vector, rather than a series of getParent calls: I'd see that as at least convenient. I'd argue for not implementing it right away; it is down on the list. Larry, do you have any issues?
Larry- Aaron and I have a similar perspective; we are going with our gut feeling right now, and we need to measure the performance.
Bill- anything happening in user time, I'd discount the problem. If we can come up with a strategy for dealing with live regions within the current model, good.
Larry- Let me back up; could you define user time?
Bill- changes taking place in the user interface that are happening because of user action: the user presses a key or uses a pointer. Users can only move so fast; the trigger events for user actions are only going to happen at a low rate. The end user has to absorb the effect of their action. If I am listening to a screen reader talking, I am in control, as long as the software can output faster than I can interpret it. If you are looking at a dynamically updated screen, events could be coming from something very fast. Say you load a document; the AT might see a huge stream of insertion events, and the AT decides when the event is over. That is why we wanted wait/complete calls, to handle those cases. This is not user time in the kernel-hacker sense.
Larry- I agree that user response time is a lot slower than computer time. But I have a problem with deciding when screen updates are finished-
Bill- when the user does something, we throw away the environment. If events are coming from outside we don't stop the presses. That is when it needs to keep latency low. Active region is important. One thing we have done in screen reading is to allow the user to add a watch to things. If an object can be discovered in the object hierarchy, you can set a watch on it. It will let you know if it changes. You can also have it ignore all changes from an object.

Larry- the live region is likely a container for something that is changing.
Bill- a watch would likely cover all things in the live region.
Larry- if change =

Bill- there is a piece of the collection API that might be of use. You can ask if an object is in the collection list: instead of asking the object whether it is in the active region, ask the collection if the object is there.
Larry- get ancestor of
Bill- if we have 20 active regions, still a lot.
Larry- have 3 of them we are interested in.  We’ll have to see.
Bill- I could see this going into Collection, though it is in Accessible.
Larry- why isn't getDescendants in Accessible?
Bill- I assume those are things other than document. Useful for things that are containers with many children; otherwise it might not be very useful. If you put it in Accessible, you get many nulls back. The model is that user focus is either on the object with state focused or on its active descendant.
Larry-
Bill- any object that an AT would like to examine is one you'd like to do collection on.

Bill- if the active one isn’t visible, you are out of luck.
Larry- something
Bill- if active object is off screen you have a big problem.
Larry- initial focus is not active
Bill- I've seen apps where the initial focus is not on screen.
Larry- with tables... active descendant is being cried out for.
Bill- I’d prefer to leave it as a recommendation for the moment.
Larry- 
Bill- it is a recommendation to the person writing the accessibility support. Some things may be difficult to implement.

Bill- talking about implementers of atk and libgail, there are a bunch of things that are higher priority that haven't been done. ATK text is not implemented; it is not actually possible to get the information inside gail. As an implementer you commonly have things that the toolkit can't provide. Items that are left to the implementer: what is the size of the text being rendered?
Larry- for collection, there is no corresponding atk collection interface. How can an AT say that it would be nice if collection were there?
Bill- the AT would have to ask if collection is present. The bridge can't implement it; it's verboten, you are warned that it may be too big. For a spreadsheet there might be 300,000 children. There are a few methods where-, most are implied to be complete. Not expecting it to be-

Not saying it should be there, just that the bridge shouldn't do it. Those are the things where collection will be important. I'll have to think about getActiveDescendant.
Larry- go through
Bill- the bridge could do that. If the bridge wasn't able to find it, it would return null. We can't expect it to always work.
Bill- it could be a big problem; we might introduce more problems if we try to implement it in the bridge.
Larry
Bill- if collection becomes unreliable for clients or produces performance problems, we have done a disservice. It would be better for it to not be there than to be there but be unreliable.
Larry should be done on objects that don’t have 
Bill- I don't think we'd be able to do a good job of it in the bridge, for things like Firefox and Open Office. I'd rather focus on the places we can do a good job. We'll learn more as we get into it. There are certain kinds of bugs that would break it: heuristics that look good but don't work in practice.
Larry- getAccessibleAtPoint?
Bill- yes, mostly a problem with tables. getAccessibleAtPoint doesn't return the right table cell. You can't identify a range of-

Janina- next week?
Bill- we won't table the discussion, but we will defer ancestor collection until we have more consensus on the necessity. Still struggling with AT-SPI performance testing. LSB is asking for things to add to their conformance test.


Bill- we need to have a frank discussion about not locking in an ABI; we need to be able to handle complementary or alternate ABIs in the future.
Janina- sounds like we have a reason to invite them, but do we need to reach better consensus first? I won't be available for July 5th and 12th due to a blindness conference.
Bill
Janina- okay we’ll meet next week, topics to be determined.  No meeting on July 5th, resume meeting on July 12th.




John Goldthwaite
jgoldthwaite at yahoo.com
828 885-5304
 		