This was the third consecutive year that I have served as a delegate to the annual meeting of the Internet Governance Forum (IGF), a United Nations multi-stakeholder forum for policy dialogue on Internet governance. The theme of this year’s forum was “Internet Governance for Sustainable Human, Economic and Social Development”. It was my first time attending as a representative of AvePoint, and a fitting one: my role at AvePoint is as a research scientist in applied engineering and standards, dealing specifically with governance, risk, and compliance. My fellow computer scientists, software engineers, and subject matter experts work collaboratively to advance our solutions and better protect our customers from a wide range of risk factors. Often these risks are not traditional ones, and in most cases they involve complex issues that must be dealt with promptly. We work daily to develop ways to identify, prioritize, and remediate issues related to standards compliance.
Working with our Labs group, AvePoint Labs, we have developed testing languages and technologies that peel back the layers of structured and unstructured data to give customers real information on how to set and address compliance priorities. However, neither Labs nor technology alone can solve the problem. We need more information and flexibility if we are to crack this nut, and for that we developed the AvePoint Testing Language (ATL). ATL provides a framework to test and classify content for compliance with standards or guidelines, including but not limited to security, privacy, accessibility, and content classification. Because it is open and modifiable, ATL allows coverage to match an organization’s specific needs, for example:
· The World Wide Web Consortium’s (W3C) Web Content Accessibility Guidelines (WCAG)
· US Section 508
· The W3C Accessible Rich Internet Applications (ARIA) suite
· mobileOK and the Mobile Web Best Practices
· Multiple variations of the above guidelines or standards
At this year’s IGF meeting, a main topic of discussion was access to information and the accessibility of that same information. I was part of many conversations about the WCAG 2.0 guidelines, and it came up that the European Union uses WCAG 2.0 as its baseline. For more information, see this page on the European Commission website.
In the discussions, everyone always came back to the same problem: the vast number of issues with electronic content makes it hard to set priorities. In one discussion, a delegate from Turkey even raised the question, “How do we as practitioners differentiate framework from content?” This is a huge question, and everyone in the room nodded or gestured in agreement. Let’s look at the question, and at WCAG 2.0 specifically. At its core, WCAG 2.0 was developed to achieve certain goals or, better put, to put forward a set of principles relating accessibility and content.
Basically, content needs to be:
· Perceivable
· Operable
· Understandable
· Robust
So, in their simplest form, these principles are somewhat sense dependent: they communicate that content must be in a form that can be understood regardless of physical ability. The guidelines also define success criteria that describe how to achieve these guiding principles for many different content types. These success criteria, and the individual techniques that accompany them, will help you fix your documents. However, the world is not as simple as a single document. As we covered in the discussions at the IGF, we also need to be concerned about content frameworks.
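To see how a success criterion becomes machine-testable, here is a minimal Python sketch (not part of any AvePoint product) of one well-known automatable check: WCAG 2.0 SC 1.1.1 requires non-text content to have a text alternative, and a machine can at least detect images that lack an alt attribute entirely.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flags <img> elements that lack an alt attribute (WCAG 2.0 SC 1.1.1)."""

    def __init__(self):
        super().__init__()
        self.errors = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            line, col = self.getpos()
            self.errors.append(f"img missing alt attribute at line {line}, column {col}")

checker = AltTextChecker()
checker.feed('<p>Intro</p><img src="chart.png"><img src="logo.png" alt="AvePoint logo">')
print(checker.errors)  # one error: the chart image carries no alt attribute
```

Note that this only decides presence, not quality: whether the alt text actually describes the image is a judgment the criterion leaves to a human reviewer.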
Content frameworks may include content management systems or custom applications. More often than not, content arrives as wrappers or multi-part documents rather than simple static files. A good example is a website template: the template may combine five files or objects into one combined presentation. While this helps authors deliver well-organized content without entering redundant information, it also means any error in the template propagates across all content. If we can test the framework separately and filter its errors out, we are left with just the unique content. When we do this, we get a clearer picture of the accessibility of our content and can address it without so many false positives. By false positive, we mean a reported accessibility error that does not exist in the content itself but in the framework.
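One way to perform that filtering can be sketched in a few lines of Python. The heuristic below is a simplified stand-in for what a real scanner would do (the error-record shape and the threshold are assumptions made for illustration): an error that recurs at the same location on nearly every page is attributed to the shared template, and everything else is treated as genuine content.

```python
from collections import Counter

def split_framework_errors(errors, page_count, threshold=0.9):
    """Separate errors that recur across (nearly) all pages -- likely the
    shared template -- from errors unique to individual pages' content.

    `errors` is a list of (page, locator, rule) tuples; an error whose
    (locator, rule) pair appears on at least `threshold` of all pages
    is attributed to the framework.
    """
    occurrences = Counter((locator, rule) for _, locator, rule in errors)

    framework, content = [], []
    for err in errors:
        key = (err[1], err[2])
        if occurrences[key] >= threshold * page_count:
            framework.append(err)
        else:
            content.append(err)
    return framework, content

errors = [
    ("home.html",  "header/img[1]", "missing-alt"),   # appears on every page
    ("about.html", "header/img[1]", "missing-alt"),   # same template error
    ("about.html", "main/table[2]", "no-headers"),    # unique content error
]
framework, content = split_framework_errors(errors, page_count=2)
```

Fixing the one template error here resolves two reported findings at once, which is exactly why separating framework from content yields a better-prioritized worklist.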
Once we have defined our content, we can continue with testing it for success or failure against the guideline. What is exciting is that WCAG 2.0 was designed with the tester in mind: it is designed to be testable through both automated testing and human evaluation, and both types of testing are possible via AvePoint Compliance Guardian. The human evaluation is the part of the process that can seem daunting to some testers. It is not always the testing itself that is hard; rather, it is finding the content that needs human evaluation. This is where automated testing can lend a hand, because a system can test for the existence of content and report back its location. With this information, testers can evaluate only the content that needs special review instead of looking at every piece of content!
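Here is a rough Python sketch of that idea (the page name and queue format are made up for illustration): an automated pass records the location of every image that does have alt text, since whether that text is meaningful is exactly the judgment only a human can make.

```python
from html.parser import HTMLParser

class ReviewQueueBuilder(HTMLParser):
    """Collects the location of every element whose accessibility can only
    be judged by a person -- here, images whose alt text exists but whose
    adequacy a machine cannot assess."""

    def __init__(self, page):
        super().__init__()
        self.page = page
        self.queue = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and attrs.get("alt"):
            line, _ = self.getpos()
            self.queue.append(
                (self.page, line, f'check that alt="{attrs["alt"]}" describes the image')
            )

builder = ReviewQueueBuilder("about.html")
builder.feed('<img src="team.jpg" alt="photo">\n<img src="spacer.gif" alt="">')
# builder.queue now lists only the first image; an empty alt marks decoration,
# which needs no human review
```

A tester handed this queue reviews a handful of flagged locations instead of re-reading every page, which is the division of labor the guideline makes possible.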
Different content may require alternative techniques, and individual companies may deploy their own approaches to comply with the guidelines. In this situation, an automated accessibility testing tool cannot be one-size-fits-all; it has to lend itself to customization. This is where the AvePoint Testing Language comes in: the individual test definition files (TDFs) can be set up to match your internal standard as derived from the WCAG 2.0 guidelines. In the end, it is not as difficult as it seems on the surface. There are a few steps:
· Set up your automated testing system to match your environment and standards
· Use the automated testing system to test your framework and then repair issues
· Use the automated tester to validate that your content complies with organizational standards
· Use the automated tester to identify all content that requires human evaluation, and distribute that list to your testers
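The steps above can be sketched as a small driver. Everything here is a hypothetical stand-in — the rule names, the page structure, and the helpers take the place of whatever your own tooling (or your TDFs) defines — but it shows the shape of the workflow: run configurable rules over every page, then split the findings into machine-decidable failures and a human-review queue.

```python
# Hypothetical rules standing in for test definitions: each maps a name to a
# predicate over a simplified element record (a dict with a "tag" key).
RULES = {
    "missing-alt": lambda el: el["tag"] == "img" and "alt" not in el,
    "needs-human-review": lambda el: el["tag"] == "img" and el.get("alt"),
}

def run_tests(pages):
    """Steps 2-3: run every rule against every element of every page."""
    findings = []
    for name, elements in pages.items():
        for el in elements:
            for rule, check in RULES.items():
                if check(el):
                    findings.append((name, el["tag"], rule))
    return findings

def split_for_humans(findings):
    """Step 4: separate machine-decidable failures from the human queue."""
    auto = [f for f in findings if f[2] != "needs-human-review"]
    human = [f for f in findings if f[2] == "needs-human-review"]
    return auto, human

pages = {
    "home.html": [{"tag": "img"}, {"tag": "p"}],
    "about.html": [{"tag": "img", "alt": "Team photo"}],
}
auto, human = split_for_humans(run_tests(pages))
# auto holds the missing-alt failure to fix; human holds the alt text to review
```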
Once you have done this, you will have a core set of errors; combined with other Compliance Guardian data, you will be able to assign priority to the errors that are specific to content and/or framework, rather than working from an error list that does not account for repeated data and errors. From here, you can systematically step through the remediation of your site and content. You can even use status snapshots and trend analysis to follow and report on your efforts. WCAG 2.0 made it easier to combine automated and human evaluation, and at AvePoint we are working daily to deliver the solutions needed to manage your accessibility program!
Cheers,
Rob
For more information:
· For the European Union’s take on accessibility guidelines, check out the EU’s information providers guide:
http://ec.europa.eu/ipg/standards/accessibility/wcag-20/index_en.htm
· EU Top Ten golden rules:
http://ec.europa.eu/ipg/standards/accessibility/10_rules/index_en.htm
· WCAG 2.0 Guidelines and Techniques:
http://www.w3.org/WAI/intro/wcag.php
· AvePoint Compliance Guardian:
https://www.avepoint.com/compliance-guardian/