
Cannot convert TensorFlow model to TensorFlow.js: Unsupported Ops in the model before optimization

See original GitHub issue

Hi, I work on VS Code and we are trying to use TensorFlow for automatic programming-language classification based on file content. We need this to work in a browser, so we would like to use TensorFlow.js, and we would like to reuse this TensorFlow model: https://github.com/yoeo/guesslang/tree/master/guesslang/data/model. We are trying to convert the model to a TensorFlow.js model using your conversion scripts and are hitting the following issue:

$ tensorflowjs_wizard
2021-03-17 23:39:53.238316: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-03-17 23:39:53.238359: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Welcome to TensorFlow.js Converter.
? Please provide the path of model file or the directory that contains model files.
If you are converting TFHub module please provide the URL. /workspaces/tfjs-notebooks/master/guesslang-master/guesslang/data/model/saved_model.pb
? What is your input model format? (auto-detected format is marked with *) Tensorflow Saved Model *
? What is tags for the saved model? serve
? What is signature name of the model? signature name: predict
? Do you want to compress the model? (this will decrease the model precision.) No compression (Higher accuracy)
? Please enter shard size (in bytes) of the weight files? 4194304
? Do you want to skip op validation?
This will allow conversion of unsupported ops,
you can implement them as custom ops in tfjs-converter. No
? Do you want to strip debug ops?
This will improve model execution performance. Yes
? Do you want to enable Control Flow V2 ops?
This will improve branch and loop execution performance. Yes
? Do you want to provide metadata?
Provide your own metadata in the form:
metadata_key:path/metadata.json
Separate multiple metadata by comma.
? Which directory do you want to save the converted model in? /workspaces/tfjs-notebooks/src/new_model
? The output directory already exists, do you want to overwrite it? Yes
converter command generated:
tensorflowjs_converter --control_flow_v2=True --input_format=tf_saved_model --metadata= --saved_model_tags=serve --signature_name=predict --strip_debug_ops=True --weight_shard_size_bytes=4194304 /workspaces/tfjs-notebooks/master/guesslang-master/guesslang/data/model /workspaces/tfjs-notebooks/src/new_model
But alas:
ValueError: Unsupported Ops in the model before optimization
SparseFillEmptyRows, StringToHashBucketFast, SparseReshape, SparseSegmentSum, SparseSegmentMean
  1. Is it even possible to convert this model to TensorFlowJS?
  2. Do we have something misconfigured?

I am a TensorFlow beginner, so I apologise if this is not the nicest issue. Any help is appreciated. Thank you! FYI @tylerLeonhardt
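For context, the wizard's note that unsupported ops "can be implemented as custom ops in tfjs-converter" refers to the tf.registerOp hook. Below is a minimal sketch of that registration path for one of the ops listed in the error; the body is only a stub, and a real implementation would have to reproduce TensorFlow's SparseFillEmptyRows semantics:

```typescript
import * as tf from '@tensorflow/tfjs';

// Sketch of tfjs-converter's custom-op hook. The op name matches one of the
// unsupported ops reported by the wizard, but the body is only a stub: a real
// implementation would rebuild TensorFlow's SparseFillEmptyRows behaviour from
// node.inputs (sparse indices, values, dense shape, default value) and node.attrs.
tf.registerOp('SparseFillEmptyRows', (node) => {
  const [indices, values, denseShape, defaultValue] = node.inputs;
  console.log('SparseFillEmptyRows inputs:', indices.shape, values.shape,
              denseShape.shape, defaultValue.shape);
  // Stub only: throw until the op is actually implemented.
  throw new Error('SparseFillEmptyRows custom op is not implemented yet');
});
```

Note that custom ops registered this way only come into play if the model was converted with op validation skipped; with the "No" answer above, the converter refuses to emit the model at all, which is exactly the ValueError shown.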

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Reactions: 1
  • Comments: 29

Top GitHub Comments

5 reactions
pyu10055 commented, May 29, 2021

@isidorn @TylerLeonhardt The execution issue has been fixed, and the model is compatible with TFJS now. You can try out the demo at https://storage.googleapis.com/tfjs-testing/guesslang-demo/index.html. Since the model is fairly small and involves dynamically shaped string tensors, it is actually faster on the CPU backend. Currently the demo is compiled with the latest code from the master branch of TFJS; we will release a new version next week, after which you should be able to use the npm package directly.
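For reference, here is a minimal sketch of loading the converted model on the CPU backend in the browser, in the spirit of the demo above. The MODEL_URL, the classify wrapper, and the single string input are assumptions for illustration (they are not the demo's actual code); executeAsync is used because the converted graph contains control-flow and dynamically shaped ops:

```typescript
import * as tf from '@tensorflow/tfjs';

// Hypothetical location of the converted model.json and its weight shards.
const MODEL_URL = 'https://example.com/guesslang-tfjs/model.json';

// Hypothetical wrapper: feed a file's text content to the converted model.
async function classify(fileContent: string): Promise<tf.Tensor | tf.Tensor[]> {
  // Per the comment above, this small, string-heavy model runs faster on CPU
  // than on WebGL, so pick the CPU backend explicitly.
  await tf.setBackend('cpu');
  const model = await tf.loadGraphModel(MODEL_URL);
  // The exact input name/shape of the 'predict' signature is not shown in the
  // issue; a rank-1 string tensor is assumed here purely for illustration.
  const input = tf.tensor([fileContent], [1], 'string');
  // executeAsync handles graphs with control flow and dynamic shapes.
  return model.executeAsync(input);
}
```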

3 reactions
pyu10055 commented, Jun 4, 2021

Closing this issue since the model is fully supported as of 3.7.0. Please keep us posted on your progress. Cheers, happy Friday.
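Since support landed in 3.7.0, a quick sanity check, assuming the union @tensorflow/tfjs package, is to log the bundled version at runtime and confirm it is 3.7.0 or newer:

```typescript
import * as tf from '@tensorflow/tfjs';

// tf.version lists the versions of all bundled tfjs packages.
console.log(tf.version['tfjs']); // expect '3.7.0' or newer for this model
```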

Read more comments on GitHub

Top Results From Across the Web

  • ValueError: Unsupported Ops in the model before optimization ...
    Hi, I'm trying to convert a .pb file using tensorflowjs_converter --input_format=tf_frozen_model --output_node_names='LogSoftmax,concat_40 ...
  • Why Unsupported Ops in the model before optimization ...
    I try to convert a trained model with TensorFlow into a tensorflowjs model ... but I have no idea why Op 'IteratorV2' and...
  • Tensorflowjs Conversion Error: "ValueError: Unsupported Ops"
    The short answer is yes, you will need to change them. TensorflowJS will change the ops for optimisation purposes, but not all the...
  • tfjs-converter - GitHub Pages
    js converter is an open source library to load a pretrained TensorFlow SavedModel, Frozen Model or Session Bundle into the browser and run...
  • Post-training integer quantization | TensorFlow Lite
    Now you can convert the trained model to TensorFlow Lite format using the TensorFlow Lite Converter, and apply varying degrees of quantization.
