TFJS - How to create model for custom word(Speech commands model)
To get help from the community, we encourage using Stack Overflow and the tensorflow.js tag.
TensorFlow.js version:
Node version: v12.4.0
Browser version:
Describe the problem or feature request
I used the audio model provided in https://github.com/tensorflow/tfjs-models/tree/master/speech-commands. Following the documentation, I trained the model on a custom word ("wakeup") together with the existing dataset and saved the model:
1. create_model wakeup up down left right
2. load the dataset
3. train 100
4. save_model
The model.json and weights.bin files were generated. I then imported the model in JS, but it does not detect any words.
Please suggest how to train a custom word and how many training epochs are required.
Code to reproduce the bug / link to feature request
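The workflow described in the issue can be sketched with the speech-commands library's in-browser transfer-learning API. This is a minimal sketch, not the reporter's actual code: the transfer-model name (`wakeup-model`), the word list, the epoch count, and the probability threshold are illustrative, and the command-line-style steps quoted above (create_model / train / save_model) may come from a different tool than this API.

```javascript
import * as speechCommands from '@tensorflow-models/speech-commands';

async function trainWakeWord() {
  // Load the pre-trained base recognizer (browser-FFT variant).
  const base = speechCommands.create('BROWSER_FFT');
  await base.ensureModelLoaded();

  // Create a transfer recognizer for the custom vocabulary.
  const transfer = base.createTransfer('wakeup-model');

  // Collect microphone examples for each word. Each call records one
  // example; in practice you want dozens per word plus background noise.
  const words = ['wakeup', 'up', 'down', 'left', 'right', '_background_noise_'];
  for (const word of words) {
    await transfer.collectExample(word);
  }

  // Train the transfer head. The epoch count is a hyperparameter to tune;
  // 100 matches the number used in the issue above.
  await transfer.train({
    epochs: 100,
    callback: {
      onEpochEnd: (epoch, logs) =>
        console.log(`epoch ${epoch}: loss=${logs.loss.toFixed(4)}`),
    },
  });

  // Persist the model. 'downloads://' emits model.json + weights.bin;
  // 'indexeddb://wakeup-model' would keep it in the browser instead.
  await transfer.save('downloads://wakeup-model');

  // Recognize continuously with the trained transfer model.
  await transfer.listen(result => {
    const labels = transfer.wordLabels();
    const scores = Array.from(result.scores);
    const best = labels[scores.indexOf(Math.max(...scores))];
    console.log('heard:', best);
  }, {probabilityThreshold: 0.75});
}

trainWakeWord();
```

To restore a saved transfer model in a later session, recreate it with `base.createTransfer('wakeup-model')` and call `load()` on it before `listen()`. If no words are ever detected, a too-high `probabilityThreshold` or too few collected examples per word are common causes to check first.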
Issue Analytics
- State:
- Created 4 years ago
- Comments: 11 (2 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Like others on this thread, I’m also still unclear about step 3 on this README:
Could someone please provide additional details on that step of training? Thanks so much.
I’m also interested in figuring out how to train a model that can later be loaded in the browser. I was able to train and save a model following the README in the training/soft-fft directory, though it appears that functionality is not yet supported by speechCommands. I looked into training/browser-fft, but there appears to be a missing step:
Is there anywhere you can point me to figure out the best way to run the WebAudio FFT on the processed files?