Panel Discussion on Methodologies for Accessibility Evaluation at the 20th WWW Conference, India

Posted on April 01, 2011

Nirmita Narasimhan

Senior Fellow and Program Director Asia-Pacific, G3ict


G3ict, W3C and CIS co-organised a panel on 30 March 2011 in the W3C track at the Twentieth International World Wide Web Conference in Hyderabad. The panel discussed Web accessibility evaluation methodologies, their challenges, and practical alternatives for technical survey methodologies. It was moderated by Nirmita Narasimhan (Centre for Internet and Society, Bangalore, India) and featured four speakers: Mr. Shadi Abou Zahra (W3C/WAI), Ms. Neeta Verma (Senior Technical Director, NIC), Mr. Srinivasu Chakravarthula (Accessibility Manager, Yahoo! India) and Ms. Glenda Sims (Senior Accessibility Consultant, Deque Systems).

Panel discussion at the W3C track of the 20th World Wide Web Conference, India

Panel discussion featuring Nirmita Narasimhan (CIS), Neeta Verma (NIC), Shadi Abou Zahra (W3C/WAI) and Srinivasu Chakravarthula (Yahoo! India)

The panel began with an introduction and background by Nirmita on the provisions of the UNCRPD relating to digital accessibility, the obligations of States Parties, and the need for clearly defined and credible evaluation methodologies to support effective policy formulation and implementation. Shadi Abou Zahra gave a brief overview of the WCAG 2.0 guidelines and discussed some of the important points to be borne in mind when evaluating websites on a large scale. He covered the selection of tools, the limitations of automated tools, the importance of selecting pages for manual testing, sampling techniques, qualitative versus quantitative analysis, different types of testing (such as expert and user testing), evaluation goals and scalability issues.

Ms. Neeta Verma discussed the guidelines for Indian government websites brought out by the National Informatics Centre in February 2010. She noted that only a small percentage of the checkpoints in the guidelines could be tested using automated tools, while others required expert and user testing. She presented one approach to evaluation adopted by the NIC, which is to certify the content management system (CMS) rather than individual pages, since certifying pages individually would be extremely difficult for websites with thousands of pages, as is the case with several government websites. She stressed the need for positive thinking, user involvement and an organised community of trained accessibility experts in India to whom the government could outsource testing work.

Srinivasu drew a distinction, in Yahoo!'s approach to accessibility, between existing websites and upcoming ones. For existing websites, the approach was to carry out an evaluation, prepare a report and prioritise the issues to be addressed; for new websites, the aim should be to keep accessibility in the loop right from the development stage. On evaluation itself, he said his methodology was first to run an automated tool quickly to check for errors, and then, depending on the number and kinds of errors, decide whether or not to follow up with a manual test. If the automated test turned up few or no errors, he would manually test a sample of pages. However, if there were many errors, and many of them were very basic ones such as missing alt attributes or missing headings, he might decide not to proceed with a manual test at all.
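As a rough illustration of this kind of triage (a minimal sketch only; the checks, threshold and libraries below are assumptions for the example, not the tooling Srinivasu described), a short script could count a few basic failures on a page and suggest whether a manual test is worth the effort:

```python
# Illustrative sketch: a quick automated pass that counts a few basic
# accessibility failures and suggests whether manual testing is worthwhile.
# The checks and the threshold are assumptions for this example.
import sys
import urllib.request

from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4


def basic_error_counts(html: str) -> dict:
    soup = BeautifulSoup(html, "html.parser")
    return {
        # Images with no alt attribute at all (presence only; appropriateness
        # of the text still needs a human reviewer).
        "images_missing_alt": sum(
            1 for img in soup.find_all("img") if img.get("alt") is None
        ),
        # Page has no heading elements at all.
        "no_headings": 0 if soup.find(["h1", "h2", "h3", "h4", "h5", "h6"]) else 1,
        # Crude proxy for layout tables: tables without any <th> cells.
        "tables_without_headers": sum(
            1 for t in soup.find_all("table") if t.find("th") is None
        ),
    }


if __name__ == "__main__":
    page = urllib.request.urlopen(sys.argv[1]).read().decode("utf-8", "replace")
    counts = basic_error_counts(page)
    print(counts)
    # Arbitrary threshold: many basic failures suggest fixing those first,
    # rather than spending effort on a full manual evaluation straight away.
    if sum(counts.values()) > 10:
        print("Many basic failures - fix these before a detailed manual test.")
    else:
        print("Few automated failures - proceed to manual testing of sample pages.")
```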

Shadi also pointed out that a website could be error free except for a single error, but if that one error made the pay button on a shopping site inaccessible, the website would still have to be evaluated as inaccessible, since that button determines whether the site is usable at all. He noted that automated testing is critical for large-scale evaluations, but that it only supports quantitative analysis when aggregating results; for a qualitative analysis one would still have to test manually with users and experts, and pay special attention to the kinds of pages selected for this type of test.

While highlighting the importance of manual testing, Srinivasu pointed out that although an automated tool could tell you whether or not an alt attribute was present, it could not determine whether the attribute's text was appropriate. Asked to share his impression of the common accessibility problems on the government websites he had been testing in large numbers over the past few weeks, he said he found many very basic errors: no headings, no alt attributes, table-based layouts, missing keyboard functionality for drop-down menus, and dynamic websites that used JavaScript and Ajax without ARIA.
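A minimal sketch of the limitation described here, using invented sample markup: an automated check can confirm that an alt attribute is present, but deciding whether its text is appropriate still falls to a human reviewer.

```python
# Sketch of the presence-versus-appropriateness gap: an automated check can
# verify that an alt attribute exists, but not whether its text is meaningful.
# The sample markup below is invented for illustration.
from bs4 import BeautifulSoup

SAMPLE = """
<img src="chart.png">
<img src="photo.jpg" alt="img_0042.jpg">
<img src="survey.png" alt="Bar chart of 2010 accessibility survey results">
"""

soup = BeautifulSoup(SAMPLE, "html.parser")
for img in soup.find_all("img"):
    alt = img.get("alt")
    if alt is None:
        print(f"{img.get('src')}: FAIL - no alt attribute")
    else:
        # A tool can only confirm the attribute is there; judging whether the
        # text actually describes the image still requires a human reviewer.
        print(f"{img.get('src')}: alt present ({alt!r}) - needs manual review")
```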

Glenda walked the audience through the methodology she used for evaluating a single client's website: how she used manual testing to establish a baseline accessibility survey of the University of Texas website, and then used different tools to test different things, for example the desktop tool FireEyes and page-by-page accessibility checkers. She also talked about the importance of testing authoring tools, producing an enterprise accessibility report, code validation, accessibility validators and testing with assistive technology. Glenda concurred with the other speakers that accessibility evaluation and monitoring should happen at every stage of a website's development life cycle: accessibility at the design stage, testing and remediating during development so that the site continues to remain accessible (many websites start out accessible but lose accessibility somewhere along the way), and finally ongoing monitoring of the live website.

Other issues discussed included the importance of the user's skill level and the choice of users for determining accessibility; having the evaluation methodology report even minor changes, so that progress can be monitored even when it is on a small scale; the need for testers to think from the perspective of every person and every device; component and template testing as a good way to check the accessibility of new websites; and the importance of aggregation and report writing. Overall, there was a consensus among the speakers that any effective and credible evaluation methodology, especially for large-scale evaluation, would involve a mix of automated testing and manual testing with users and experts, and would have to be carried out at every stage of a website's development and maintenance.

Mr. Srinivasu Chakravarthula's presentation: http://learnaccessibility.org/2011/04/methodologies-for-accessibility-eveluation/