tfjs - trouble running speech commands in React Native CLI app
To get help from the community, we encourage using Stack Overflow and the tensorflow.js tag.
TensorFlow.js version
"@tensorflow-models/speech-commands": "0.4.2"
"@tensorflow/tfjs": "1.5.2"
"@tensorflow/tfjs-react-native": "0.2.3"
Browser version
"react-native": "0.61.5"
Describe the problem or feature request
I'm having a lot of trouble loading @tensorflow-models/speech-commands, as it seems to be missing dependencies. I'm fairly certain I initialized the project correctly according to the setup guide.
Code to reproduce the bug / link to feature request
I initialized the project with React Native CLI
```sh
npx react-native init [projectName] --template react-native-template-typescript
```
and followed https://github.com/tensorflow/tfjs/tree/master/tfjs-react-native#setting-up-a-react-native-app-with-tfjs-react-native
After the setup the app runs fine on both Android and iOS; however, as soon as I try to load @tensorflow-models/speech-commands, I get problems.
First:

```
error: bundling failed: Error: Unable to resolve module `util` from `node_modules/@tensorflow-models/speech-commands/dist/browser_fft_utils.js`: util could not be found within the project.
```
Fixed with `yarn add util`.
Second:

```
error: bundling failed: Error: Unable to resolve module `fs` from `node_modules/@tensorflow-models/speech-commands/dist/browser_fft_utils.js`: fs could not be found within the project.
```
This one I have not been able to resolve; it seems to be triggered by the line `recognizer = speechCommands.create('BROWSER_FFT', 'directional4w');`.
The closest issue I found to this was #1682 in an expo-cli app, and I tried the tip from https://github.com/tensorflow/tfjs/issues/1682#issuecomment-534231135, where the package is require()'d only after `await tf.ready();`, without luck.
I'm a bit confused, as the naming of the files implies that they run in the browser, yet both `util` and `fs` are Node.js modules. I'm expecting some polyfilling is needed to get this running properly.
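One approach worth trying (a sketch, not verified against this exact package version) is to point Metro at replacements for the Node core modules via `resolver.extraNodeModules` in `metro.config.js`. The npm `util` package is a real browser polyfill; there is no browser equivalent of `fs`, so an empty stub module can satisfy the bundler as long as the fs code paths are never actually executed at runtime. The `shims/empty.js` path below is an assumed location for a file containing just `module.exports = {};`:

```javascript
// metro.config.js (sketch -- shim paths are assumptions, not from the issue)
const path = require('path');

module.exports = {
  resolver: {
    // Map Node core modules referenced by speech-commands to
    // bundler-resolvable replacements so Metro stops erroring.
    extraNodeModules: {
      util: path.resolve(__dirname, 'node_modules/util'), // npm "util" polyfill
      fs: path.resolve(__dirname, 'shims/empty.js'),      // empty stub module
    },
  },
};
```

Note the stub only silences the bundler; if speech-commands actually calls into `fs` at runtime (its Node-only code paths), those calls will still fail.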
Here’s the relevant code:
```tsx
import React, {useEffect} from 'react';
import {View, Text} from 'react-native';
import {Audio} from 'expo-av';
import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-react-native';
import * as speechCommands from '@tensorflow-models/speech-commands';

export default function SpeechCommands() {
  useEffect(() => {
    let recognizer: speechCommands.SpeechCommandRecognizer | undefined;
    const start = async () => {
      await tf.ready();
      // the following line throws an error:
      recognizer = speechCommands.create('BROWSER_FFT', 'directional4w');
    };
    start();
    return () => {
      if (recognizer) {
        recognizer.stopListening();
      }
    };
  }, []);
  return (
    <View>
      <Text>SpeechCommands component</Text>
    </View>
  );
}
```
Issue Analytics
- Created: 4 years ago
- Comments: 13 (3 by maintainers)
Top GitHub Comments
Sounds good. To your point 3, transfer learning is what you would want for user-specific adaptation. Point 2 may or may not need transfer learning; you might be able to get pretty far even without it. I can't really comment on the battery side. All the best with your project!
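For context, the transfer-learning flow in the speech-commands package (as described in the package's README; the transfer-model name, word labels, and epoch count below are placeholder choices) looks roughly like this:

```typescript
import * as speechCommands from '@tensorflow-models/speech-commands';

async function trainTransferModel(): Promise<void> {
  // Base recognizer using the browser's native FFT.
  const base = speechCommands.create('BROWSER_FFT');
  await base.ensureModelLoaded();

  // Derive a transfer recognizer that learns user-specific words.
  const transfer = base.createTransfer('my-words'); // name is a placeholder

  // Collect several microphone examples per word (repeat per word as needed).
  await transfer.collectExample('lights-on');
  await transfer.collectExample('lights-off');
  await transfer.collectExample('_background_noise_');

  // Fine-tune the model head on the collected examples.
  await transfer.train({epochs: 25});

  // Listen with the adapted model.
  await transfer.listen(async result => {
    console.log(result.scores); // per-word probabilities
  }, {probabilityThreshold: 0.75});
}
```

This requires a browser-style audio environment (WebAudio/getUserMedia), which is exactly what is missing in stock React Native, so on RN it would need the same polyfilling discussed above or a native bridge.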
@jamesbalcombe83 Hey, I explained my thoughts in the comment above. I wanted to use RN, but there are no good solutions. If you just have a couple of commands, Teachable Machine is a good idea. TensorFlow also has packages that let you use TFLite models. Here is the link that I used: https://teachablemachine.withgoogle.com/train/audio
Here is the other link I looked at: https://github.com/tensorflow/examples/tree/master/lite/examples/sound_classification
If you want RN, you are going to have to write the packages yourself. I am debating creating an open-source project for it.
Cheers