Adaptive card with controls not readable by screen reader
We are using webchat-es5.js for our Virtual Assistant, and some of our customers use a screen reader to access it. The screen reader reads normal cards and user messages properly, but for an Adaptive Card with controls it does not read the content.
Screenshots
Version
- webchat-es5 version 4.12.0: https://cdn.botframework.com/botframework-webchat/4.12.0/webchat-es5.js
- The CDN bundle is integrated directly on the page using a script tag (no iframe).
- Desktop Chrome browser.
<meta name="botframework-directlinespeech:version" content="4.12.0">
<meta name="botframework-webchat:bundle:variant" content="full-es5">
<meta name="botframework-webchat:bundle:version" content="4.12.0">
<meta name="botframework-webchat:core:version" content="4.12.0">
<meta name="botframework-webchat:ui:version" content="4.12.0">
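For reference, the bundle is loaded roughly as follows (a minimal sketch; the container id and the Direct Line token are placeholders, not our actual values):

```html
<!-- Minimal ES5-bundle integration sketch; "webchat" and the token are placeholders -->
<div id="webchat" role="main"></div>
<script src="https://cdn.botframework.com/botframework-webchat/4.12.0/webchat-es5.js"></script>
<script>
  window.WebChat.renderWebChat(
    {
      directLine: window.WebChat.createDirectLine({ token: 'YOUR_DIRECT_LINE_TOKEN' })
    },
    document.getElementById('webchat')
  );
</script>
```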
Describe the bug
We are using webchat-es5.js for our Virtual Assistant, and some of our customers use a screen reader to access it. The screen reader reads normal card text and user input text, but if the bot response is an Adaptive Card with controls, it does not read the card content.
- On a Virtual Agent message/response with no links or clickable buttons, assistive technology such as a screen reader can read aloud the full message when it is in focus. The markup is described below; refer to Screenshot 1 above.
- A typical message in the Virtual Agent (no links/buttons) is arranged as an unordered list `<ul>` of messages; one message/response is one list item `<li>`.
- The message to be spoken is within an `<article>` tag, which contains paragraph `<p>` tags, one of which carries the actual Virtual Agent response.
- The `aria-labelledby` attribute on the list item `<li>` for the message points to the `id` attribute of the message's `<article>` tag. This programmatically associates the Virtual Agent message with the control. This approach works normally and there is no issue here. However, when the Virtual Agent adds links or buttons to a response, things do not work as expected.
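As a sketch, the working structure for a plain-text response looks roughly like this (the ids and text are illustrative, not Web Chat's exact generated markup):

```html
<!-- Illustrative structure for a plain-text bot message; ids are made up -->
<ul role="list">
  <li aria-labelledby="message-1">
    <article id="message-1">
      <p>Bot CB said:</p>
      <p>Here is the actual Virtual Agent response text.</p>
      <p>Sent at 10:15 AM.</p>
    </article>
  </li>
</ul>
```

Because `aria-labelledby` on the `<li>` points at the `<article>` that contains the response text, focusing the list item announces the full message.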
Refer Screenshot 2 above
- The `aria-labelledby` attribute for the message's list item still points to the `id` of the `<article>` tag, but the only information within that tag is:
  - "Bot CB said:"
  - "1 attachment."
  - the date stamp
  - "Press ENTER to interact."

When we use the mouse or keyboard to focus that list item, those four sub-items are the only information available to assistive technology. The paragraph with the actual Virtual Agent response is much lower in the DOM (visible in the previous example), and since it is not programmatically associated with the list item via `aria-labelledby` or any equivalent method as in the first example, it is not available to be read by assistive technology such as screen readers. Low-vision and other users relying on a screen reader will not know what to focus and interact with as currently configured. This MUST be corrected.
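Sketched out, the Adaptive Card case looks roughly like this (again illustrative markup, not Web Chat's exact output):

```html
<!-- Illustrative structure for an Adaptive Card response; ids and classes are made up -->
<ul role="list">
  <li aria-labelledby="message-2">
    <article id="message-2">
      <p>Bot CB said:</p>
      <p>1 attachment.</p>
      <p>Sent at 10:16 AM.</p>
      <p>Press ENTER to interact.</p>
    </article>
    <!-- The actual card content is rendered outside the <article>,
         so it is never announced when the list item receives focus. -->
    <div class="ac-adaptiveCard">
      <p>The actual Virtual Agent response text.</p>
      <button type="button">Option 1</button>
    </div>
  </li>
</ul>
```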
Steps to reproduce
- Deploy Web App Bot with adaptive cards.
- Start NVDA screen reader or JAWS 2021 screen reader.
- Open the web app bot in a browser.
- Interact with the bot until an Adaptive Card response arrives.
- Observe how the screen reader reads the bot response.
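Any simple card with action buttons reproduces the behavior; for example (a hypothetical card, not our production payload):

```json
{
  "type": "AdaptiveCard",
  "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
  "version": "1.3",
  "body": [
    {
      "type": "TextBlock",
      "text": "Please choose one of the options below.",
      "wrap": true
    }
  ],
  "actions": [
    { "type": "Action.Submit", "title": "Option 1", "data": { "choice": 1 } },
    { "type": "Action.Submit", "title": "Option 2", "data": { "choice": 2 } }
  ]
}
```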
Expected behavior
Using ARIA to programmatically associate the Virtual Agent response text with the message's list item, or moving the message text into the `<article>` tag itself, so that it is available to assistive technology such as screen readers, will correct the issue.
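One possible shape for the first approach (illustrative markup only; the ids and class names are assumptions, not Web Chat's actual output) is to extend `aria-labelledby` on the list item to also reference the card text:

```html
<!-- Extend aria-labelledby to cover the card text; ids are illustrative -->
<li aria-labelledby="message-2 card-text-2">
  <article id="message-2">
    <p>Bot CB said:</p>
    <p>1 attachment.</p>
  </article>
  <div class="ac-adaptiveCard">
    <p id="card-text-2">The actual Virtual Agent response text.</p>
    <button type="button">Option 1</button>
  </div>
</li>
```

With both ids listed, the accessible name of the list item is computed from the `<article>` header and the card text together, so the screen reader announces the card content on focus.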
Please advise if there is any other fix to make the screen reader read the Adaptive Card content.
Issue Analytics
- State:
- Created 2 years ago
- Reactions: 1
- Comments: 5 (3 by maintainers)
Top GitHub Comments
The issue remains even after changing the Web Chat version to 4.13.0. I will share the Adaptive Card JSON.
Note: this is the same behavior as explained in https://github.com/microsoft/BotFramework-WebChat/issues/3667