
Enable reviewOnFail for axe.run()

See original GitHub issue

A reviewOnFail option was added in https://github.com/dequelabs/axe-core/issues/2186. The option works with axe.configure(), but does not work with axe.run().

Expectation: The reviewOnFail option should be honored when passed as a rule configuration to axe.run()

Actual: axe.run() does not honor the reviewOnFail option; reviewOnFail can only be enabled by running axe.configure().

Motivation: Axe rules occasionally need to be configurable for test steps that walk through a functional workflow. A test step might disclose content with an accessibility issue that needs review. Rather than configuring the rule globally for the entire test run, it should be possible to configure it for just that step, similar to the existing enabled/disabled option.

Requesting that axe.run() accept reviewOnFail as an option for rules.

const options = { rules: { 'color-contrast': { reviewOnFail: true } } }; // Currently rules can only be enabled/disabled

axe.run(context, options);

Issue Analytics

  • State: open
  • Created: 3 years ago
  • Comments: 10 (8 by maintainers)

Top GitHub Comments

1 reaction
StephenEsser commented, Oct 29, 2020

I’ll add a little more background information. For now, I think the global reviewOnFail (axe.configure) is going to work for our use case, but I think this feature is worth discussing a little more.

I work on a team that designs reusable web components and an automation testing infrastructure. Our testing infrastructure is built using WebdriverIO and integrates axe accessibility testing and screenshot testing. Our automation tests are often used to run full functional application workflows and test for accessibility (and screenshot changes) at different stages in the test run.

An example of that might look something like this:

Note: Terra.validates.accessibility() runs axe.run().

describe('Form', () => {
  it('should fill out and submit a form', () => {
    // Navigate to the example form in the web browser
    browser.url('/example-form.html');

    // Run axe (and usually take a screenshot for regression testing) to verify the default form passes accessibility.
    Terra.validates.accessibility();

    // Focus an input.
    browser.click('#input');

    // Type into the input.
    browser.keys('Foo Bar');

    // Run axe (and usually take a screenshot for regression testing) to verify the focused input with text passes accessibility.
    Terra.validates.accessibility();

    // Submit the form.
    browser.click('#submit');

    // Run axe (and usually take a screenshot for regression testing) to verify the submitted form passes accessibility.
    // This step may also test the accessibility of a form that failed validation on submit and is displaying error indicators (text, red outlines, etc..).
    Terra.validates.accessibility();
  });
});

These tests often involve disclosing content or progressing through workflows, and at various stages of the workflow we want to take a screenshot and validate accessibility. Accessibility violations fail the test run: an assertion is run on the result of axe.run(), and if the violations array contains any results we fail the test and log the violations to the test output.
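That assertion step can be sketched as follows. This is an illustration, not Terra's actual implementation; the helper name is hypothetical, but the result shape (a `violations` array of objects with `id` and `description`) matches what axe.run() resolves with.

```javascript
// Hypothetical helper: fail the test when an axe.run() result contains violations.
// `results` follows axe.run()'s documented shape: { violations: [...], incomplete: [...], ... }.
function failOnViolations(results) {
  if (results.violations.length > 0) {
    // Log each violation's rule id and description to the test output.
    const summary = results.violations
      .map((violation) => `${violation.id}: ${violation.description}`)
      .join('\n');
    throw new Error(`Accessibility violations found:\n${summary}`);
  }
}
```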

Fast forward: axe occasionally introduces new rules in minor version bumps, and these new rules generally surface new violations. For this reason we lock axe to a specific version and update only after investigating the changes. This is where reviewOnFail has come in handy. When axe introduces new rules, we can mark them reviewOnFail. The rules still run, but instead of being reported as violations that fail the test, they are reported as “incomplete” and logged to the test output at the end of the test run. This allows us to upgrade axe without breaking existing tests while still reporting and tracking the rules that came back incomplete. (New rules eventually become failures, but start as warnings to keep the upgrade passive.)
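With the current API, that global demotion looks roughly like this (a sketch based on the axe.configure() rule format; the rule id is just an example):

```javascript
// Globally demote a rule from "violation" to "needs review" for the whole
// test run. Nodes that fail the rule are then reported under
// results.incomplete instead of results.violations when axe.run() executes.
axe.configure({
  rules: [
    { id: 'color-contrast', reviewOnFail: true },
  ],
});
```

The limitation being discussed is exactly that this applies to every subsequent axe.run() call, not just one step of the test.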

reviewOnFail also comes in handy when adopting new tag standards. When a new tag is adopted each of the rules can be marked as reviewOnFail to report the new rule failures as warnings to the test output without failing the test run. After the adoption period has matured, the new rules can be changed to full violations.

Axe allows individual rules to be completely disabled per axe run:

axe.run({ rules: { 'color-contrast': { enabled: false } } });

Rather than disabling the rule completely, marking it for review would allow the incomplete rules to be reported at the end of the test run:

axe.run({ rules: { 'color-contrast': { reviewOnFail: true } } });
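On the consuming side, a test harness could then split a single run's results into hard failures and review warnings. This is a sketch with a hypothetical helper name; the input shape follows axe.run()'s documented result object (`violations` and `incomplete` arrays of rule results with an `id`).

```javascript
// Hypothetical helper: partition an axe.run() result into failures (which
// should fail the test) and warnings (reviewOnFail rules, reported as
// incomplete, which are only logged).
function summarizeAxeResults(results) {
  return {
    failures: results.violations.map((result) => result.id),
    warnings: results.incomplete.map((result) => result.id),
  };
}
```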
0 reactions
Capocaccia commented, Jun 27, 2022

Thanks, I’ll make a PR from my fork.
