
toxicity model always returns the same values in React Native

See original GitHub issue

TensorFlow.js version

[screenshot: TensorFlow.js version, not recoverable here]

Describe the problem or feature request

Regardless of the string I input into the toxicity model, I receive the same probabilities for every label (identity_attack, obscene, etc.), as shown in the screenshots below. I am calling the model with the code below.

One thing to note: I am running this in an Expo-managed app, and one of the dependencies of tfjs-react-native is react-native-fs, which would normally be replaced by expo-file-system in my setup. To satisfy the dependency requirement I installed react-native-fs anyway, but I am not sure whether it should matter, since I am not trying to load any custom models from the filesystem via tfjs-react-native.

Code to reproduce the bug / link to feature request

import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-react-native';
var toxicity = require('@tensorflow-models/toxicity');

And I am calling it like this:

async componentDidMount () {
  await tf.setBackend('rn-webgl');
  await tf.ready();
  console.log('tf is ready!!!');

  // Load the model. Users optionally pass in a threshold and an array of
  // labels to include.
  const model = await toxicity.load(0.9);
  const predictions = await model.classify(['you suck', 'I love you']);

  // `predictions` is an array of objects, one for each prediction head,
  // that contains the raw probabilities for each input along with the
  // final prediction in `match` (either `true` or `false`).
  // If neither prediction exceeds the threshold, `match` is `null`.
  console.log(predictions);
}
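The array-of-heads shape described in those comments can be flattened into a per-sentence summary with plain JavaScript. The following is a hypothetical helper, assuming only the `{label, results}` structure documented above; it does not depend on TensorFlow.js:

```javascript
// Collapse the toxicity model's per-head output into one object per
// input sentence, mapping each label to its `match` value
// (true, false, or null when neither probability crossed the threshold).
function summarizePredictions(predictions, sentences) {
  return sentences.map((text, i) => {
    const labels = {};
    for (const head of predictions) {
      labels[head.label] = head.results[i].match;
    }
    return { text, labels };
  });
}

// Example with a hand-built prediction head in the same shape:
const sample = [
  { label: 'toxicity', results: [
    { match: true,  probabilities: [0.031, 0.969] },
    { match: false, probabilities: [0.999, 0.001] },
  ]},
];
console.log(summarizePredictions(sample, ['you suck', 'I love you']));
// e.g. [{ text: 'you suck', labels: { toxicity: true } }, ...]
```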

Printing `predictions` gives:

[screenshots of the prediction output; the values are identical for both inputs]

As seen in the output above, the toxicity model returns the same probabilities regardless of the input strings. I also tried this with a variety of other very kind and very toxic strings, all of which produced the same output.

Any help addressing this issue would be greatly appreciated.
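For what it's worth, the "same probabilities for every input" symptom can be checked mechanically. Here is a small sketch in plain JavaScript (a hypothetical helper, independent of TensorFlow.js) that reports whether every sentence received an identical probability vector in every head:

```javascript
// Returns true when, for every prediction head, all input sentences
// received exactly the same probability vector -- the symptom
// described in this issue.
function outputsAreIdentical(predictions) {
  return predictions.every(({ results }) => {
    const [first, ...rest] = results;
    return rest.every(r =>
      r.probabilities.length === first.probabilities.length &&
      Array.from(r.probabilities).every((p, i) => p === first.probabilities[i])
    );
  });
}

// Two hand-built heads: identical vectors vs. differing vectors.
const stuck = [{ label: 'toxicity', results: [
  { match: null, probabilities: [0.5, 0.5] },
  { match: null, probabilities: [0.5, 0.5] },
]}];
const healthy = [{ label: 'toxicity', results: [
  { match: true,  probabilities: [0.031, 0.969] },
  { match: false, probabilities: [0.999, 0.001] },
]}];
console.log(outputsAreIdentical(stuck));   // true
console.log(outputsAreIdentical(healthy)); // false
```

If this returns true for genuinely different inputs, the bug is upstream of the classifier head, e.g. in tokenization or the backend.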

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 2
  • Comments: 5 (1 by maintainers)

Top GitHub Comments

1 reaction
tafsiri commented, Sep 29, 2020

Hi, I wasn't able to reproduce this; could you provide more device information? Another thing to try for debugging is `tf.setBackend('cpu')`.

I tried to reproduce this here; feel free to download that repo and try it out yourself (make sure to switch to the 'toxicity-model' branch).

This is the output I got for model.classify(['you suck','I love you'])

model output len 7
Label identity_attack
Results Array [
  Object {
    "match": false,
    "probabilities": Float32Array [
      0.9659663438796997,
      0.03403365984559059,
    ],
  },
  Object {
    "match": false,
    "probabilities": Float32Array [
      0.999956488609314,
      0.000043553249270189553,
    ],
  },
]
Label insult
Results Array [
  Object {
    "match": true,
    "probabilities": Float32Array [
      0.08124707639217377,
      0.918752908706665,
    ],
  },
  Object {
    "match": false,
    "probabilities": Float32Array [
      0.9995761513710022,
      0.00042380779632367194,
    ],
  },
]
Label obscene
Results Array [
  Object {
    "match": null,
    "probabilities": Float32Array [
      0.39931538701057434,
      0.6006845831871033,
    ],
  },
  Object {
    "match": false,
    "probabilities": Float32Array [
      0.9999582767486572,
      0.000041756517020985484,
    ],
  },
]
Label severe_toxicity
Results Array [
  Object {
    "match": false,
    "probabilities": Float32Array [
      0.9970395565032959,
      0.002960433252155781,
    ],
  },
  Object {
    "match": false,
    "probabilities": Float32Array [
      1,
      4.12830054585811e-8,
    ],
  },
]
Label sexual_explicit
Results Array [
  Object {
    "match": null,
    "probabilities": Float32Array [
      0.7053250670433044,
      0.29467496275901794,
    ],
  },
  Object {
    "match": false,
    "probabilities": Float32Array [
      0.9999380111694336,
      0.0000619467900833115,
    ],
  },
]
Label threat
Results Array [
  Object {
    "match": false,
    "probabilities": Float32Array [
      0.9106739163398743,
      0.08932609856128693,
    ],
  },
  Object {
    "match": false,
    "probabilities": Float32Array [
      0.9998704195022583,
      0.00012954739213455468,
    ],
  },
]
Label toxicity
Results Array [
  Object {
    "match": true,
    "probabilities": Float32Array [
      0.031176727265119553,
      0.9688233137130737,
    ],
  },
  Object {
    "match": false,
    "probabilities": Float32Array [
      0.9989683628082275,
      0.00103162438608706,
    ],
  },
]
0 reactions
google-ml-butler[bot] commented, Oct 13, 2020

Are you satisfied with the resolution of your issue?

