
Interoperability Testing for Assistive Technologies and The Web Platform

Posted by Seth Thompson

May 21, 2020

In 2018, we began contributing to ecosystem infrastructure for ARIA, the accessibility API for the web platform, with a project focused on regression testing for example patterns in the ARIA Authoring Practices Guide (APG). Since then, we have been writing new guidance for the APG directly, and we continue to maintain the guide with funding from Facebook Accessibility.

We see assistive technology (AT) development as key to supporting inclusion and justice for marginalized communities. The APG is a great resource for web developers to learn accessible interaction design patterns. We are honored to have the opportunity to learn from accessibility experts in the ARIA community as we help maintain this resource. We also see an opportunity within the ARIA ecosystem to use our skills and experience in web standards interoperability testing to improve access to the web.

Our latest work focuses on ARIA-AT (ARIA and Assistive Technologies), a new interoperability test suite for web browsers and ATs such as screen readers. ARIA-AT builds on our previous experience testing ECMAScript and the web platform with test262.report and wpt.fyi, combined with expert web accessibility knowledge from the W3C ARIA-AT Community Group.

Today, ATs have an interoperability gap on the web: they don’t render ARIA consistently enough to deliver reliable experiences to AT users. To address this, ARIA-AT tests how ATs render ARIA, which lets us measure AT interoperability. The project is starting with manual testing of APG patterns in a few popular desktop screen readers, and plans to expand coverage to mobile screen readers, other types of ATs, and native HTML elements, as well as to develop new ways of automating tests.

ARIA-AT provides a framework and a suite of tests. The framework presents an accessible widget from the APG to a tester, instructs the tester to perform a set of commands, and asks whether the screen reader output meets the expectations defined in the test suite. By running these tests, we can measure interoperability across different browser and screen reader combinations.
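To make that workflow a little more concrete, here is a rough TypeScript sketch of what a manual test record and a per-combination summary could look like. The shapes and names below (ManualTest, TestResult, summarize) are illustrative assumptions of ours, not the actual ARIA-AT test format.

```typescript
// Hypothetical sketch of one ARIA-AT-style test record and a helper that
// summarizes results for a single browser + screen reader combination.
// These shapes are illustrative assumptions, not the real ARIA-AT format.

interface ManualTest {
  pattern: string;             // APG example under test, e.g. "checkbox"
  task: string;                // what the tester is asked to do
  commands: string[];          // keyboard / screen reader commands to issue
  assertions: string[];        // information the AT output must convey
}

interface TestResult {
  test: ManualTest;
  observedOutput: string;      // what the tester heard or read on a braille display
  passedAssertions: boolean[]; // one verdict per assertion, recorded by the tester
}

const checkboxTest: ManualTest = {
  pattern: "checkbox",
  task: "Navigate to an unchecked checkbox",
  commands: ["Tab"],
  assertions: [
    "Role 'checkbox' is conveyed",
    "Accessible name of the checkbox is conveyed",
    "State 'not checked' is conveyed",
  ],
};

// A test passes for a given combination only if the tester marked every
// assertion as met.
function summarize(results: TestResult[]): { passed: number; total: number } {
  const passed = results.filter((r) => r.passedAssertions.every(Boolean)).length;
  return { passed, total: results.length };
}

// One recorded run for a single browser + screen reader combination.
const run: TestResult[] = [
  {
    test: checkboxTest,
    observedOutput: "Sample checkbox, check box, not checked",
    passedAssertions: [true, true, true],
  },
];

console.log(summarize(run)); // { passed: 1, total: 1 }
```

Repeating the same summary for each browser and screen reader pairing is what turns individual verdicts into a picture of interoperability.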

Matt King, Chair of the ARIA-AT Community Group, suggests we think about assistive technologies like any other user interface rendering technology. As he puts it, “consistent user interface rendering across platforms is the backbone of software engineering. Without consistently rendered UIs, we can’t make usable software.” Just as browsers require interoperability testing to ensure that they visually render CSS the same way (e.g. wpt/css), so too do screen readers and browsers need testing to ensure that they sonically and tactilely render ARIA consistently. Similar guarantees are necessary for every other kind of AT as well. Ensuring interoperable ATs is perhaps even more critical than ensuring interoperable CSS.

The ARIA community has an incredible group of web accessibility experts who have been working diligently since 2007 to give us ways to express the semantics of any user interface in accessibility APIs. With the ARIA Authoring Practices, we now also have a mature perspective on using those semantics in accessible interaction designs for the web. With this foundation, we can now build testing tools to help ensure that web browsing is consistent, interoperable, and correct across different combinations of websites, browsers, and ATs. This is finally possible because the ARIA-AT Community Group is providing a forum for assistive technology vendors to build consensus on how their products should interpret ARIA.

Over the past six months, we’ve worked with browser vendors, screen reader vendors, tech companies, and universities through the W3C ARIA-AT Community Group (CG) to create the assertion model and test harness, establish test patterns, and write the first tests that reduce this consensus to repeatable workflows. We’re now working on a manual testing tool to coordinate and run these tests; more on that soon.
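As a rough illustration of what coordinating a manual run involves, the sketch below walks a tester through one test: it prints the task and commands, then records a yes/no verdict for each assertion. This is only our illustration of the workflow, not the tool the community group is building.

```typescript
// Minimal sketch of a manual test prompt loop. Illustrative only; not the
// actual ARIA-AT testing tool.

import * as readline from "node:readline/promises";
import { stdin, stdout } from "node:process";

interface Verdict {
  assertion: string;
  passed: boolean;
}

async function runManualTest(
  task: string,
  commands: string[],
  assertions: string[]
): Promise<Verdict[]> {
  const rl = readline.createInterface({ input: stdin, output: stdout });
  console.log(`Task: ${task}`);
  console.log(`Commands to perform: ${commands.join(", ")}`);

  const verdicts: Verdict[] = [];
  for (const assertion of assertions) {
    const answer = await rl.question(
      `Did the screen reader convey "${assertion}"? (y/n) `
    );
    verdicts.push({
      assertion,
      passed: answer.trim().toLowerCase().startsWith("y"),
    });
  }
  rl.close();
  return verdicts;
}

// Example: run the checkbox test and print the recorded verdicts.
runManualTest("Navigate to an unchecked checkbox", ["Tab"], [
  "Role 'checkbox' is conveyed",
  "State 'not checked' is conveyed",
]).then((verdicts) => console.table(verdicts));
```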

As of today, the community group has written tests for the checkbox, combobox, and menubar patterns. More tests are on the way for grid, slider, and the rest of the example patterns in the APG (totaling 25 examples that cover 44 ARIA roles and 31 ARIA properties at the time of this writing).

As the project ramps up, we can use your help! Is there a particular AT interop bug that grinds your gears? Is there an ARIA feature that, even when implemented correctly by a web developer, leads to confusingly different results in different screen readers? Tweet us about it at @bocoup. We’re collecting pain points like these, and the more examples we can cite to build awareness of the AT interoperability gap, the better.

If designing and writing interoperability tests for accessibility sounds exciting, join the W3C ARIA-AT CG or email us to get involved. We’re also looking for expert screen reader users to help us author tests, and for testers to run the tests with the screen reader they use every day.

We’re planning to share more news soon about the first results from the initial ARIA-AT test cycle, coming this summer. Stay tuned for updates!

