
Adaptive card with controls not readable to screen reader

See original GitHub issue

We are using webchat-es5.js for our Virtual Assistant, and some of our customers use a screen reader to access it. The screen reader reads normal cards and user messages properly, but for an Adaptive Card with controls, it does not read the content.

Screenshots

[Screenshot 1]

[Screenshot 2]

Version

webchat es5 version 4.12.0: https://cdn.botframework.com/botframework-webchat/4.12.0/webchat-es5.js. The CDN version of the JS is integrated directly on the page using a script tag (no iframe). Desktop Chrome browser.

<meta name="botframework-directlinespeech:version" content="4.12.0">
<meta name="botframework-webchat:bundle:variant" content="full-es5">
<meta name="botframework-webchat:bundle:version" content="4.12.0">
<meta name="botframework-webchat:core:version" content="4.12.0">
<meta name="botframework-webchat:ui:version" content="4.12.0">
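For context, a minimal page-level integration of the ES5 bundle of this kind might look like the sketch below. The token-fetching endpoint is hypothetical (replace it with your own Direct Line token service); `renderWebChat` and `createDirectLine` are Web Chat's documented entry points.

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <script src="https://cdn.botframework.com/botframework-webchat/4.12.0/webchat-es5.js"></script>
  </head>
  <body>
    <div id="webchat" role="main"></div>
    <script>
      // Hypothetical token endpoint; replace with your own Direct Line token service.
      fetch('https://example.com/api/directline/token', { method: 'POST' })
        .then(function (res) { return res.json(); })
        .then(function (data) {
          window.WebChat.renderWebChat(
            { directLine: window.WebChat.createDirectLine({ token: data.token }) },
            document.getElementById('webchat')
          );
        });
    </script>
  </body>
</html>
```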

Describe the bug

We are using webchat-es5.js for our Virtual Assistant, and some of our customers use a screen reader to access it. The screen reader reads normal card text and user input text, but if the bot response is an Adaptive Card with controls, it does not read the card content.

• On a Virtual Agent message/response with no links or clickable buttons, assistive technology such as screen readers can read aloud the full message when it is in focus (refer to Screenshot 1 above):

  1. A typical message in the Virtual Agent (no links/buttons) is arranged in an unordered list <ul> of messages; one message/response is one list item <li>.
  2. The message to be spoken is within the <article> tag, which contains paragraph tags <p>, one of them carrying the actual Virtual Agent response.
  3. The aria-labelledby attribute on the message's list item <li> points at the id attribute of the message's <article> tag. This programmatically associates the Virtual Agent message with the control. This approach works normally and there is no issue. However, when the Virtual Agent adds links or buttons to a response, things do not work as expected.
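The working structure described in steps 1–3 can be sketched roughly as follows. The ids, class names, and text are illustrative placeholders, not Web Chat's actual generated markup:

```html
<ul role="list">
  <li aria-labelledby="message-1">
    <article id="message-1">
      <p>Bot CB said:</p>
      <!-- The actual Virtual Agent response lives inside the labelled article,
           so the screen reader announces it when the list item takes focus. -->
      <p>Here is the answer to your question.</p>
      <p>Just now. Press ENTER to interact.</p>
    </article>
  </li>
</ul>
```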

Refer Screenshot 2 above

• The aria-labelledby attribute for the message's list item still points to the id of the <article> tag, but the only information within that tag is:
  ○ "Bot CB said:"
  ○ "1 attachment."
  ○ the date stamp
  ○ and "Press ENTER to interact"

When we use the mouse or keyboard to focus that list item, those four sub-items are the only information available to assistive technology. The paragraph with the actual Virtual Agent response is much lower in the DOM (visible in the previous example), and since it is not programmatically associated with the list item via aria-labelledby or an equivalent method, as in the first example, it is not available to be read by assistive technology such as screen readers. Low-vision or other users relying on a screen reader will not know what to focus and interact with as currently configured. This MUST be corrected.
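Based on the description above, the Adaptive Card case looks roughly like the illustrative markup below (again, not Web Chat's literal output): the labelled <article> carries only the metadata, while the actual card text sits outside it.

```html
<ul role="list">
  <li aria-labelledby="message-2">
    <!-- Everything the screen reader announces when the list item takes focus: -->
    <article id="message-2">
      <p>Bot CB said: 1 attachment. Just now. Press ENTER to interact.</p>
    </article>
    <!-- The Adaptive Card content lives lower in the DOM and is NOT referenced
         by the li's aria-labelledby, so it is never announced on focus: -->
    <div class="ac-container">
      <p>Which option would you like?</p>
      <button type="button">Option A</button>
    </div>
  </li>
</ul>
```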

Steps to reproduce

  1. Deploy a Web App Bot with adaptive cards.
  2. Start the NVDA or JAWS 2021 screen reader.
  3. Open the web app bot in a browser.
  4. Interact with the bot until an adaptive card response arrives.
  5. Observe how the screen reader reads the bot response.

Expected behavior

Either using ARIA to programmatically associate the Virtual Agent response text with the message's list item, or moving the message text into the <article> tag itself, so that it is available to assistive technology such as screen readers, will correct the issue.
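One way to express the suggested ARIA association, sketched with the same illustrative ids as the earlier examples, is to extend aria-labelledby on the list item to also reference the card body (aria-labelledby accepts a space-separated list of ids):

```html
<li aria-labelledby="message-2 card-body-2">
  <article id="message-2">
    <p>Bot CB said: 1 attachment. Press ENTER to interact.</p>
  </article>
  <!-- Giving the card container an id and adding it to the li's
       aria-labelledby makes the card text part of the accessible name: -->
  <div class="ac-container" id="card-body-2">
    <p>Which option would you like?</p>
    <button type="button">Option A</button>
  </div>
</li>
```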

Please advise if there is any other fix to make the screen reader read the adaptive card content. [Bug]

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Reactions: 1
  • Comments: 5 (3 by maintainers)

Top GitHub Comments

1 reaction
venkatx5 commented, May 4, 2021

The issue remains even after changing the webchat version to 4.13.0. Will share the Adaptive Card JSON.

0 reactions
corinagum commented, May 11, 2021

