
Scores of Listen and Recognize are different in speech-commands

See original GitHub issue

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow.js): No
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
  • TensorFlow.js installed from (npm or script link): npm
  • TensorFlow.js version (use command below): 2.8.2
  • Browser version: Google Chrome, Version 87.0.4280.88 (Official Build) (x86_64)

Describe the current behavior
I have a recognizer and used its listen() function to get the Float32Array from SpeechCommandRecognizerResult.spectrogram.data, then concatenated the arrays using the concatenateFloat32Arrays util function.

When using the listen() function the scores were fine, as expected, but when I use the recognize() function and provide the concatenated Float32Array as input, the scores are different and not as expected.
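For context, the concatenation step described above can be sketched in plain JavaScript. This is only an illustration of what a concatenateFloat32Arrays-style util is assumed to do (pack the per-frame buffers back-to-back into one Float32Array); the function here is a stand-in, not the library's actual implementation:

```javascript
// Minimal sketch of concatenating spectrogram frames into one buffer
// (assumption: the library util simply packs the arrays end-to-end).
function concatenateFloat32Arrays(arrays) {
  const totalLength = arrays.reduce((sum, a) => sum + a.length, 0);
  const out = new Float32Array(totalLength);
  let offset = 0;
  for (const a of arrays) {
    out.set(a, offset); // copy this frame at the current write position
    offset += a.length;
  }
  return out;
}

// Example: two frames joined into one input buffer.
const joined = concatenateFloat32Arrays([
  new Float32Array([1, 2]),
  new Float32Array([3, 4, 5]),
]);
console.log(Array.from(joined)); // logs [1, 2, 3, 4, 5]
```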

Describe the expected behavior
I expected to get the same, or at least similar, scores when calling the recognize() function with the input I obtained from the listen() function.

Standalone code to reproduce the issue

Example in Codesandbox

Other info / logs

When I use the Float32Array obtained from the listen() function as the input to the recognize() function, the scores differ and the score of the second label is almost always zero.

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 5

Top GitHub Comments

2 reactions
pyu10055 commented, Jan 29, 2021

@adotnusiyan I think you need to normalize the spectrogram before passing it to the recognizer: take a look at this line. The normalization method is not exposed, but you can take the source code and try it out. Please let us know if that solves your problem. Thanks.
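As a rough illustration of the suggested fix, here is a plain-JavaScript sketch of zero-mean, unit-standard-deviation normalization. The assumption (not confirmed here) is that this matches what the library applies to the spectrogram internally before inference; the epsilon guard is an addition of this sketch, and the function name is illustrative:

```javascript
// Sketch of normalizing a spectrogram buffer to zero mean and unit std
// (assumption: this mirrors the library's internal normalization step).
function normalizeFloat32Array(x) {
  const n = x.length;
  let sum = 0;
  for (let i = 0; i < n; ++i) sum += x[i];
  const mean = sum / n;

  let sqSum = 0;
  for (let i = 0; i < n; ++i) {
    const d = x[i] - mean;
    sqSum += d * d;
  }
  const std = Math.sqrt(sqSum / n);

  const out = new Float32Array(n);
  for (let i = 0; i < n; ++i) {
    // Epsilon added in this sketch to avoid division by zero on a
    // constant input; the library source may differ.
    out[i] = (x[i] - mean) / (std + 1e-8);
  }
  return out;
}

// Example: after normalization the values have ~zero mean and ~unit std.
const normalized = normalizeFloat32Array(new Float32Array([1, 2, 3, 4]));
```

The idea would be to run the concatenated spectrogram data through a step like this before handing it to recognize(), so it matches the statistics the model saw during listen().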

1 reaction
pyu10055 commented, Jan 29, 2021

@adotnusiyan btw, the normalization method is exposed in the latest speech-commands v0.5.1

