Mixnet conversion
Hi!
I’m trying to convert a frozen mixnet-s model, but when I feed the model’s .pb file to the converter, I get the following error:
OutOfRangeError: Node 'mixnet-s/mixnet_model/stem/batch_normalization/FusedBatchNormV3' (type: 'Add', num of outputs: 1) does not have output 5
I’m relatively new to TF, so I’m still finding my way around. Please let me know how to solve this issue.
Thanks in advance!
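This error usually means the frozen graph contains the newer `FusedBatchNormV3` op (which has a sixth output) while the converter only knows the five-output `FusedBatchNorm`. One workaround people use is to rewrite the op name in the GraphDef before conversion. The sketch below is an illustration, not converter API: `downgrade_fused_batch_norm` is a hypothetical helper, and it assumes no node in the graph actually consumes the V3-only sixth output (`:5`).

```python
# Sketch: downgrade FusedBatchNormV3 nodes in a frozen GraphDef so that an
# older converter (e.g. tf-coreml built against TF 1.14) can handle them.
# `graph_def` can be anything shaped like tf.compat.v1.GraphDef: an object
# with a `.node` list whose entries carry `.op`, `.name`, `.input`, `.attr`.

def downgrade_fused_batch_norm(graph_def):
    """Rename FusedBatchNormV3 ops to FusedBatchNorm in place.

    Returns the number of nodes patched. Raises if any node consumes the
    V3-only sixth output, since plain FusedBatchNorm does not produce it.
    """
    patched = 0
    for node in graph_def.node:
        for inp in node.input:
            if inp.endswith(":5"):
                raise ValueError(f"{node.name} consumes a V3-only output")
        if node.op == "FusedBatchNormV3":
            node.op = "FusedBatchNorm"
            # V3 carries a "U" dtype attr that the older op lacks; drop it.
            if "U" in node.attr:
                del node.attr["U"]
            patched += 1
    return patched
```

With TensorFlow available, the patched GraphDef can be re-serialized with `graph_def.SerializeToString()` and written back out as a .pb before running the converter. This only sidesteps the op-name mismatch; if the converter is missing other ops, it will still fail later.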
Issue Analytics
- State:
- Created 4 years ago
- Comments: 15
Hi
Just ran into the same issue.
Environment config:
The model that I am trying to convert is based on TensorFlow’s ssdlite_mobilenet_v2_coco, retrained on 4 classes with a 768x768 input.
I have exported the graph for inference using export_inference_graph.py.
I have optimized the model after loading the graph, using strip_unused:
I have attempted to convert the model using the following:
In my case:
Any thoughts? Thanks
Hi.
I’m having a similar issue.
I’m using Google Colab, and they recently changed some back-end config where TensorFlow 1.14-GPU no longer finds some libraries needed to load the GPU (so training takes forever).
In an attempt to use the default TensorFlow 1.15, I’m finding I hit the same exact error when converting MobileNet 224 1.0 to CoreML.
With TensorFlow 1.14, tf-coreml was able to load frozen PBs and export them to CoreML without issue.
With 1.15, tf-coreml 1.1, and coremltools 3.1, I receive the same error:
OutOfRangeError: Node 'mixnet-s/mixnet_model/stem/batch_normalization/FusedBatchNormV3' (type: 'Add', num of outputs: 1) does not have output 5
Models with TF 1.15:
My TensorFlow PB models are segmented graphs, with a feature extractor and multiple classifiers turned into a pipeline. Here are the PBs:
Feature Extractor.pb
Classifier.pb
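The works-on-1.14 / fails-on-1.15 pattern reported above lines up with TensorFlow 1.15 switching its fused batch norm export to the `FusedBatchNormV3` op, which tf-coreml 1.1 evidently does not handle. A tiny guard before exporting can make the mismatch explicit — this is a sketch, and the 1.15 cutoff is an assumption inferred from the behaviour in this thread, not a documented guarantee:

```python
# Sketch: flag TensorFlow versions that (per this thread) export batch norm
# as FusedBatchNormV3 and so trip older tf-coreml releases.

def exporter_emits_fused_batch_norm_v3(tf_version):
    """Heuristic: TF >= 1.15 exports fused batch norm as FusedBatchNormV3."""
    parts = tf_version.split(".")
    major, minor = int(parts[0]), int(parts[1])
    return (major, minor) >= (1, 15)
```

With TensorFlow installed, `tf.__version__` supplies the version string; if the check fires, either pin TF 1.14 for export or patch the FusedBatchNormV3 nodes in the frozen graph before handing it to tf-coreml.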