Error: The first argument must be a Cognitive Services audio stream
Hello,
I am facing issues while trying to use Cognitive Services with an Azure QnA chatbot. I can get answers if I type a question, but when I try to get an answer using the microphone, the bot crashes immediately after returning the text spoken into the microphone. The behavior is the same in Chrome and Firefox on desktop as well as Safari on iOS.
These are the two errors I see in the Chrome browser console:
webchat.js:2 Error: The first argument must be a Cognitive Services audio stream.
    at new t (webchat.js:2)
    at t.default (webchat.js:2)
    at webchat.js:2
    at Object.useMemo (webchat.js:2)
    at useMemo (webchat.js:2)
    at e (webchat.js:2)
    at Ji (webchat.js:2)
    at webchat.js:2
    at Ha (webchat.js:2)
    at Wa (webchat.js:2)
    la @ webchat.js:2
webchat.js:2 WebSocket is already in CLOSING or CLOSED state.
I am using very simple code for this:
(async function() {
  const adapters = await window.WebChat.createDirectLineSpeechAdapters({
    fetchCredentials: {
      region: 'eastus',
      subscriptionKey: 'MY_SPEECH_SUBSCRIPTION_KEY'
    }
  });

  // Pass the set of adapters to Web Chat.
  window.WebChat.renderWebChat(
    {
      ...adapters
    },
    document.getElementById('webchat')
  );

  document.querySelector('#webchat > *').focus();
})().catch(err => console.error(err));
Any help on this is much appreciated.
Just to add to this: if I check the activity logs for my Speech service, I see that my requests are successfully processed, so I believe speech-to-text is working fine. It seems to fail somewhere while sending the recognized text to my QnA chatbot to get an answer, although I might be completely wrong about that.
Top GitHub Comments
@shakil-san, the code snippet I provided would be placed (or the existing code updated) in the bot’s files. The code you referenced above is for the Web Chat client, which is just the interface between your bot and any user.
If you don’t already know, you can access your bot’s files on Azure. Simply log in, locate the resource group for the QnA bot you created, navigate to the bot registration (i.e. the Web App Bot), click the “Build” blade, and from there you can download the bot’s source code.
With the bot source code, you can run and test the bot locally before redeploying. Once you are confident in your changes, there is a redeploy script that will push them to Azure.
The second attached image is from the web app online editor, but you should see a file structure similar to this. You can see there are sendActivity() functions used; it is these you would apply any speak properties to. The above code snippet should be used as a reference for how to structure the activity. For example, using the lines starting at 116, the code might be updated along the lines of the sketch below. Keep in mind, you may need to make adjustments to meet your needs.
This should be enough to get you set up. I’m going to close this issue as resolved. If, however, you continue to experience issues / errors around this, please feel free to re-open. However, any “How to” questions should be posted on Stack Overflow.
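A minimal sketch of what that update might look like, assuming the Node.js QnA Maker bot (the qnaMaker and qnaResults names follow that sample and may differ in your code):

const { ActivityHandler, MessageFactory } = require('botbuilder');

class QnABot extends ActivityHandler {
  constructor(qnaMaker) {
    super();
    this.qnaMaker = qnaMaker;

    this.onMessage(async (context, next) => {
      const qnaResults = await this.qnaMaker.getAnswers(context);

      if (qnaResults.length > 0) {
        const answer = qnaResults[0].answer;
        // Pass the answer as both the display text and the speak text
        // so Direct Line Speech synthesizes audio for the reply.
        await context.sendActivity(MessageFactory.text(answer, answer));
      } else {
        const fallback = 'No QnA Maker answers were found.';
        await context.sendActivity(MessageFactory.text(fallback, fallback));
      }

      await next();
    });
  }
}

MessageFactory.text(text, speak) sets both the text and speak fields on the outgoing activity, which is what Direct Line Speech needs in order to produce an audio stream.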
@shakil-san, when your bot sends an activity back to DLS, is it setting the speak tag on the activity? In sample 11, the bot sends an activity without first setting the speak tag: (link)
When DLS receives an activity without a speak tag, it will not generate an audio stream to send to the client, which is the cause of the error.
Changing the code to the following should address the issue:
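A minimal sketch of the suggested change, assuming the bot currently replies with context.sendActivity(qnaResults[0].answer) as in the sample:

// Before (no speak field, so DLS has nothing to synthesize):
// await context.sendActivity(qnaResults[0].answer);

// After: send an activity object that carries a speak field so DLS
// generates an audio stream for the client.
const answer = qnaResults[0].answer;
await context.sendActivity({
  type: 'message',
  text: answer,
  speak: answer
});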
Please try this and let us know if it works.