toxicity model always returns the same values in React Native
TensorFlow.js version
Describe the problem or feature request
Regardless of the string I input into the toxicity model, I receive the same probabilities for every label (identity_attack, obscene, etc.), as shown in the screenshot below. I am calling it with the code below.
One thing to note: I am running this in an Expo-managed app, but one of the dependencies of tfjs-react-native is react-native-fs, which would normally be replaced by expo-file-system in my setup. To satisfy the dependency requirement I installed react-native-fs anyway, though I don't think it should matter, since I am not trying to import any custom models from storage via tfjs-react-native.
Code to reproduce the bug / link to feature request
import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-react-native';
const toxicity = require('@tensorflow-models/toxicity');
And I call the code in this way:
async componentDidMount() {
  await tf.setBackend('rn-webgl');
  await tf.ready();
  console.log('tf is ready!!!');

  // Load the model. Users optionally pass in a threshold and an array of
  // labels to include.
  const model = await toxicity.load(0.9);
  const predictions = await model.classify(['you suck', 'I love you']);
  console.log(predictions);
  // `predictions` is an array of objects, one for each prediction head,
  // that contains the raw probabilities for each input along with the
  // final prediction in `match` (either `true` or `false`).
  // If neither prediction exceeds the threshold, `match` is `null`.
}
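For reference, the `predictions` shape described in the comments above can be inspected with plain JavaScript (no tfjs required). This is a minimal sketch; `summarizePredictions` is a hypothetical helper, and the sample array is hand-made to mimic the model's output, not real model results:

```javascript
// Collect, for each input sentence, the labels whose `match` flag is true.
// `predictions` has one object per label, each with a `results` array that
// holds `probabilities` and `match` per input sentence.
function summarizePredictions(predictions, sentences) {
  return sentences.map((text, i) => ({
    text,
    labels: predictions
      .filter(p => p.results[i].match === true)
      .map(p => p.label),
  }));
}

// Hand-made example mimicking the model's output shape:
const fake = [
  {
    label: 'insult',
    results: [
      { probabilities: [0.04, 0.96], match: true },
      { probabilities: [0.98, 0.02], match: false },
    ],
  },
];
console.log(summarizePredictions(fake, ['you suck', 'I love you']));
// → [ { text: 'you suck', labels: ['insult'] },
//     { text: 'I love you', labels: [] } ]
```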
Output of printing `predictions`:
As seen in the output above, the toxicity model returns the same probabilities regardless of the input strings. I also tried a variety of other very kind and very toxic strings, all of which produced the same output.
Any help addressing this issue would be greatly appreciated.
Issue Analytics
- State:
- Created 3 years ago
- Reactions:2
- Comments:5 (1 by maintainers)
Top GitHub Comments
Hi, I wasn't able to reproduce this; could you provide more device information? Another thing to try for debugging is tf.setBackend('cpu').
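To try the backend switch the maintainer suggests without duplicating startup code, one option is a small fallback helper. This is a sketch, not part of tfjs: `pickBackend` is a hypothetical function, and the real `tf.setBackend` (which resolves to a boolean success flag) would be passed in; injecting it as a parameter lets the fallback logic run without TensorFlow.js installed:

```javascript
// Try each candidate backend in order until one initializes successfully.
// `setBackend` is injected (in the app, pass `tf.setBackend`).
async function pickBackend(setBackend, candidates = ['rn-webgl', 'cpu']) {
  for (const name of candidates) {
    try {
      const ok = await setBackend(name);
      if (ok !== false) return name; // tf.setBackend resolves to a boolean
    } catch (e) {
      // initialization failed; fall through to the next candidate
    }
  }
  throw new Error('No usable backend among: ' + candidates.join(', '));
}
```

In componentDidMount you could then log which backend was actually chosen, which helps tell a webgl-specific bug apart from a model-level one.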
I tried to reproduce this here; feel free to download that repo and try it out yourself (make sure to switch to the 'toxicity-model' branch).
This is the output I got for
model.classify(['you suck','I love you'])