Enable reviewOnFail for axe.run()
A `reviewOnFail` option was added in https://github.com/dequelabs/axe-core/issues/2186. This option works with `axe.configure()`, but does not work with `axe.run()`.
Expectation: The `reviewOnFail` option should be honored when passed as a rule configuration to `axe.run()`.

Actual: `axe.run()` does not honor the `reviewOnFail` option; `reviewOnFail` can currently only be enabled by calling `axe.configure()`.
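For reference, a global configuration along these lines does work today (the rule id here is only an example):

```js
// Works today: enable reviewOnFail globally for a rule via axe.configure().
axe.configure({
  rules: [{ id: 'color-contrast', reviewOnFail: true }]
});
```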
Motivation: Axe rules occasionally need to be configurable for test steps that walk through a functional workflow. A test step might disclose content with an accessibility issue that needs review. Rather than using `axe.configure()` to change the rule for the entire test run, it should be possible to configure the rule for just that step in the test, similar to the existing per-run enable/disable option.
Requesting that `axe.run()` accept `reviewOnFail` as an option for rules:
```js
// Currently rules can only be enabled/disabled per run
const options = { rules: { 'color-contrast': { reviewOnFail: true } } };
axe.run(context, options);
```
Issue Analytics
- Created 3 years ago
- Comments: 10 (8 by maintainers)
Top GitHub Comments
I’ll add a little more background information. For now, I think the global `reviewOnFail` (via `axe.configure`) is going to work for our use case, but I think this feature is worth discussing a little more.

I work on a team that designs reusable web components and an automation testing infrastructure. Our testing infrastructure is built with WebdriverIO and integrates axe accessibility testing and screenshot testing. Our automation tests are often used to run full functional application workflows and test for accessibility (and screenshot changes) at different stages in the test run.
An example of that might look something like this:
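(A rough sketch, assuming a WebdriverIO spec that uses the Terra helper; the route and selector names are illustrative, not from the original issue.)

```js
// Illustrative WebdriverIO (sync mode) workflow test; route and selector are made up.
describe('disclosure workflow', () => {
  it('remains accessible as content is disclosed', () => {
    browser.url('/my-form');
    Terra.validates.accessibility(); // runs axe.run() against the current page

    $('#open-disclosure').click();   // progress the workflow
    Terra.validates.accessibility(); // re-validate after new content appears
  });
});
```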
Note: `Terra.validates.accessibility()` runs `axe.run()`.
These tests often involve disclosing content or progressing through workflows, and at various stages of the workflow we want to take a screenshot and validate accessibility. Accessibility violations fail the test run: an assertion is run on the result of `axe.run()`, and if the violations array contains any results we fail the test and log the violations to the test output.
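A rough sketch of that assertion, assuming the standard `axe.run()` results shape (the logging and error message are illustrative, and this would sit inside an async test step):

```js
// Fail the test if axe reports any violations; logging details are illustrative.
const results = await axe.run();
if (results.violations.length > 0) {
  console.error(JSON.stringify(results.violations, null, 2));
  throw new Error(`Found ${results.violations.length} accessibility violation(s)`);
}
```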
Fast forward: axe occasionally introduces new rules in minor version bumps, and these new rules generally result in violations. For this reason we lock axe to a specific version and update to new versions only after investigating the changes. This is where `reviewOnFail` has come in handy. When axe introduces new rules, we can mark them with `reviewOnFail`. This lets us run the rules without reporting them as violations that fail the test; instead, they are reported as "incomplete" and logged to the test output at the end of the test run. This allows us to upgrade axe without breaking existing tests while still reporting and tracking the rules flagged as incomplete. (New rules eventually become failures, but start as warnings for passivity.)

`reviewOnFail` also comes in handy when adopting new tag standards. When a new tag is adopted, each of its rules can be marked with `reviewOnFail` so the new failures are reported as warnings in the test output without failing the test run. After the adoption period has matured, the new rules can be changed to full violations.

Axe already allows individual rules to be completely disabled per run:
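For example (the rule id is illustrative):

```js
// Disable a single rule for this run only.
const results = await axe.run(context, {
  rules: { 'color-contrast': { enabled: false } }
});
```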
Rather than disabling the rule completely, marking it for review would allow the incomplete rules to be reported at the end of the test run:
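Something along these lines, which is the proposed behavior rather than what `axe.run()` supports today:

```js
// Proposed: per-run reviewOnFail, so failures land in results.incomplete
// instead of results.violations.
const results = await axe.run(context, {
  rules: { 'color-contrast': { reviewOnFail: true } }
});
```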
Thanks, I'll make a PR from my fork.