
Error: The first argument must be a Cognitive Services audio stream

See original GitHub issue

Hello,

I am facing issues while trying to use Cognitive Services with a QnA Azure chatbot. I can get answers if I type a question, but when I try to get an answer using the microphone, the bot crashes immediately after returning the text spoken into the microphone. The behavior is the same in Chrome and Firefox on desktop, as well as Safari on iOS.

These are the two errors I see in the Chrome browser console.

webchat.js:2 Error: The first argument must be a Cognitive Services audio stream.
    at new t (webchat.js:2)
    at t.default (webchat.js:2)
    at webchat.js:2
    at Object.useMemo (webchat.js:2)
    at useMemo (webchat.js:2)
    at e (webchat.js:2)
    at Ji (webchat.js:2)
    at webchat.js:2
    at Ha (webchat.js:2)
    at Wa (webchat.js:2)
la @ webchat.js:2
webchat.js:2 WebSocket is already in CLOSING or CLOSED state.

I am using very simple code for this:

(async function() {
  const adapters = await window.WebChat.createDirectLineSpeechAdapters({
    fetchCredentials: {
      region: 'eastus',
      subscriptionKey: 'MY_SPEECH_SUBSCRIPTION_KEY'
    }
  });

  // Pass the set of adapters to Web Chat.
  window.WebChat.renderWebChat(
    {
      ...adapters
    },
    document.getElementById('webchat')
  );

  document.querySelector('#webchat > *').focus();
})().catch(err => console.error(err));

Any help on this is much appreciated.

Just to add to it: if I check the activity logs for my Speech service, I see that my requests are processed successfully, so I think speech-to-text is working fine. It seems to be failing somewhere while sending this text to my QnA chatbot to get an answer, although I might be completely wrong about that.
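One way to confirm where the pipeline breaks is to log every activity flowing through Web Chat's store. The sketch below assumes the standard Web Chat CDN bundle (which exposes `window.WebChat.createStore`); the action type names are Web Chat's documented `DIRECT_LINE/*` actions.

```javascript
// Redux-style middleware: logs activities passing through Web Chat's store
// and forwards every action unchanged.
const loggingMiddleware = () => next => action => {
  if (
    action.type === 'DIRECT_LINE/INCOMING_ACTIVITY' ||
    action.type === 'DIRECT_LINE/POST_ACTIVITY'
  ) {
    console.log(action.type, action.payload.activity);
  }
  return next(action);
};

// In the page (browser only), wire it into Web Chat alongside the adapters:
// const store = window.WebChat.createStore({}, loggingMiddleware);
// window.WebChat.renderWebChat({ ...adapters, store }, document.getElementById('webchat'));
```

If the recognized text appears as an outgoing activity but no incoming activity follows before the crash, the problem is on the bot / Direct Line Speech side rather than in speech-to-text.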

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments:16 (8 by maintainers)

Top GitHub Comments

1 reaction
stevkan commented, Apr 22, 2020

@shakil-san, the code snippet I provided would be placed (or the existing code updated) in the bot’s files. The code you referenced above is the code for the Web Chat client which is just the interface between your bot and any user.

If you don’t already know, you can access your bot’s files on Azure. Simply log in, locate the resource group for the QnA bot you created, navigate to the bot registration (i.e., Web App Bot), click the “Build” blade, and from there you can download the bot’s source code.

With the bot source code, you can run it locally and test before redeploying. If you are confident in your changes, there is a redeploy script that will push them to Azure.

The second attached image is from the web app online editor, but you should see a similar file structure. Notice the sendActivity() calls; these are where you would set the speak property. Use the code snippet above as a reference for structuring the activity. For example, starting at line 116, the code might be updated as follows. Keep in mind you may need to make adjustments to meet your needs.

116    var message = QnACardBuilder.GetSuggestionCard(suggestedQuestions, qnaDialogResponseOptions.activeLearningCardTitle, qnaDialogResponseOptions.cardNoMatchText);
117    message.speak = "Here are some suggested questions for you to consider.";
118    await stepContext.context.sendActivity(message);
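For context, here is a simplified sketch (not the SDK source) of why line 117 sets message.speak directly: in the JavaScript SDK, sendActivity() only applies its second speak argument when the first argument is a plain string; an activity object is sent as-is.

```javascript
// Simplified sketch of how botbuilder's TurnContext.sendActivity()
// normalizes its first two arguments (the real SDK also handles inputHint,
// conversation references, etc.).
function toActivity(activityOrText, speak) {
  if (typeof activityOrText === 'string') {
    const activity = { type: 'message', text: activityOrText };
    if (speak) {
      activity.speak = speak; // the speak argument only applies here
    }
    return activity;
  }
  // Activity objects pass through untouched: set .speak on them beforehand.
  return activityOrText;
}
```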

[image]

[image: web app online editor]

This should be enough to get you set up. I’m going to close this issue as resolved. If, however, you continue to experience issues or errors around this, please feel free to re-open. Any “how to” questions, though, should be posted on Stack Overflow.

1 reaction
corinagum commented, Apr 3, 2020

@shakil-san, when your bot sends an activity back to DLS, is it setting the speak tag on the activity?

In sample 11, the bot sends an activity without first setting the speak tag:

const qnaResults = await this.qnaMaker.getAnswers(context);

// If an answer was received from QnA Maker, send the answer back to the user.
if (qnaResults[0]) {
    // Passing in a string does not result in an activity with a speak tag
    await context.sendActivity(qnaResults[0].answer);
}

(link)

When DLS receives an activity without a speak tag, it will not generate an audio stream to send to the client, which is the cause of the error.

Changing the code to the following should address the issue:

const qnaResults = await this.qnaMaker.getAnswers(context);

// If an answer was received from QnA Maker, send the answer back to the user.
if (qnaResults[0]) {
    await context.sendActivity({ text: qnaResults[0].answer, speak: qnaResults[0].answer });
}
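If the bot sends answers from several places, a small helper can avoid repeating the pattern. The withSpeak name below is hypothetical, not part of the sample:

```javascript
// Hypothetical helper: wrap an answer so both text and speak are set,
// which Direct Line Speech needs before it will synthesize audio.
function withSpeak(answer) {
  return { type: 'message', text: answer, speak: answer };
}

// Usage inside the bot's QnA handler:
// await context.sendActivity(withSpeak(qnaResults[0].answer));
```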

Please try this and let us know if it works.


