
Using rasa-webchat with react-speech-recognition

See original GitHub issue

Hi, is there currently any way to create a wrapper component around the rasa-webchat widget with react-speech-recognition that can send a payload to the widget on a button press?

This is my WebChat component:

import React, { Component } from 'react';
import { Widget } from 'rasa-webchat';

class WebChat extends Component {
    render() {
        return (
            <Widget
              initPayload={""}
              socketUrl={"http://localhost:5005"}
              socketPath={"/socket.io/"}
              inputTextFieldHint={"Enter a valid command"}
              customData={{"language": "en"}} // arbitrary custom data. Stay minimal as this will be added to the socket
              title={"Virtual Assistant"}
              showFullScreenButton={true}
              embedded={true}
              hideWhenNotConnected={false}
              params={{"storage": "session"}}
            />
          );
    }
}
export default WebChat;

And here is my wrapper:

import React from 'react';
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition';
import WebChat from './WebChat'; // path assumed

const ChatboxSpeech = () => {
  const { transcript, resetTranscript } = useSpeechRecognition();

  return (
    <div className="chatroom">
      <WebChat />
      <button onClick={SpeechRecognition.startListening}>Start</button>
      <button onClick={SpeechRecognition.stopListening}>Stop</button>
      <button onClick={resetTranscript}>Reset</button>
      <p>{transcript}</p>
    </div>
  );
};
export default ChatboxSpeech;

Just not sure how to send the transcript to the Widget. Any help appreciated.
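
A minimal sketch of the kind of wiring being asked about, assuming the Widget forwards a ref that exposes a sendMessage method (the comments below rely on exactly that call; its exact signature may vary between rasa-webchat versions, and the component name here is only illustrative):

import React, { useRef, useEffect } from 'react';
import { Widget } from 'rasa-webchat';
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition';

const ChatboxSpeechSketch = () => {
  // Assumption: the Widget attaches an object exposing sendMessage to this ref.
  const webchatRef = useRef(null);
  const { transcript, finalTranscript, resetTranscript } = useSpeechRecognition();

  // Once a final transcript is available, forward it to the widget and clear the hook.
  useEffect(() => {
    if (finalTranscript !== '' && webchatRef.current && webchatRef.current.sendMessage) {
      // Same call shape as in the comments below: display text and payload.
      webchatRef.current.sendMessage(finalTranscript, finalTranscript);
      resetTranscript();
    }
  }, [finalTranscript, resetTranscript]);

  return (
    <div className="chatroom">
      <Widget
        initPayload={""}
        socketUrl={"http://localhost:5005"}
        socketPath={"/socket.io/"}
        embedded={true}
        ref={webchatRef}
      />
      <button onClick={SpeechRecognition.startListening}>Start</button>
      <button onClick={SpeechRecognition.stopListening}>Stop</button>
      <button onClick={resetTranscript}>Reset</button>
      <p>{transcript}</p>
    </div>
  );
};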

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments: 5 (1 by maintainers)

Top GitHub Comments

4 reactions
andrewlu0 commented, Feb 3, 2021
import React, { useRef, useEffect } from 'react';
import { Widget } from 'rasa-webchat';
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition';
import { FontAwesomeIcon } from '@fortawesome/react-fontawesome';
import { faMicrophone } from '@fortawesome/free-solid-svg-icons';

// Module-level flag tracking whether the microphone is currently active.
var listening = false;

function MyComponent() {
    const webchatRef = useRef(null);
    const { finalTranscript, resetTranscript } = useSpeechRecognition();

    // Once the Web Speech API produces a final transcript, forward it to the
    // widget, then reset the hook and restore the button styling.
    useEffect(() => {
      if (finalTranscript !== '') {
        callback();
        resetTranscript();
        stopListening();
        listening = false;
      }
    });

    // Start listening on button press; give up after 5 seconds without a result.
    function handleListen() {
      if (!listening) {
        listening = true;
        SpeechRecognition.startListening();
        isListening();
        setTimeout(() => {
          if (finalTranscript === '') {
            SpeechRecognition.stopListening();
            stopListening();
            listening = false;
          }
        }, 5000);
      }
    }

    // Highlight the microphone button while listening.
    function isListening() {
      Array.from(document.querySelectorAll('.speech-button')).forEach(function (button) {
        button.style.color = "#6495ED";
        button.style.border = "1px solid white";
      });
    }

    // Restore the microphone button to its idle styling.
    function stopListening() {
      Array.from(document.querySelectorAll('.speech-button')).forEach(function (button) {
        button.style.color = "white";
        button.style.border = "1px solid grey";
      });
    }

    // Send the recognized text to the widget through its ref.
    function callback() {
      if (webchatRef.current && webchatRef.current.sendMessage) {
        webchatRef.current.sendMessage(finalTranscript, finalTranscript);
      }
    }

    return (
      <div className="chatroom">
        <Widget
          initPayload={""}
          socketUrl={"http://localhost:5005"}
          socketPath={"/socket.io/"}
          inputTextFieldHint={"Enter a valid command"}
          customData={{"language": "en"}} // arbitrary custom data. Stay minimal as this will be added to the socket
          title={"Virtual Assistant"}
          showFullScreenButton={true}
          embedded={true}
          hideWhenNotConnected={false}
          params={{"storage": "session"}}
          ref={webchatRef}
        />
        <button className="speech-button" onClick={handleListen}>
          <FontAwesomeIcon icon={faMicrophone} />
        </button>
      </div>
    );
}

export default MyComponent;

Here is a full example if you are interested
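
For completeness, a minimal way to mount the component above might look like the following; the entry file, the import path, and the root element id are assumptions rather than part of the original example.

import React from 'react';
import ReactDOM from 'react-dom';
import MyComponent from './MyComponent'; // assumed path for the component shown above

// Render the widget plus microphone button into a standard React root element.
ReactDOM.render(<MyComponent />, document.getElementById('root'));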

0 reactions
hadisd5 commented, May 15, 2021

Thanks so much for sharing! Is it possible to share the CSS file related to the chatroom class?

Read more comments on GitHub >

Top Results From Across the Web

Rasa Webchat Integration - YouTube
Hi all, today I will show you how to connect your Rasa bot to a webchat. First I will show how to connect using a...
Read more >
RASA - Giving voice to web chatbot - DEV Community
In this article, we will add text-to-speech to the web chat application created in the previous post. There won't be anything new specific ...
Read more >
How to enable voice platform to webchat? - Rasa Open Source
I have a Microsoft speech service API. I want to integrate with the rasa-webchat for voice integration. how can i include that in...
Read more >
Using the React Speech Recognition Hook for voice assistance
React Speech Recognition is a React Hook that works with the Web Speech API to translate speech from your device's mic into text....
Read more >
Smart Chatbot Implementation using RASA | TO THE NEW Blog
Rasa Conversational AI assistant normally consists of two components and they are Rasa NLU and Rasa Core. Rasa NLU can be just treated...
Read more >
