TFLite model conversion
See original GitHub issue

Describe the bug
After converting the model to a .pb file with firenet-to-protobuf, I am trying to convert it to a TFLite model for inference on a Google Coral USB Accelerator. However, running the tflite_convert command line tool fails with the error shown in the output section below, and there is no resolution as of yet. Is the dropout layer the issue?
To Reproduce (please complete the following information):
- Convert the model to a .pb file using the firenet-to-protobuf.py script.
- Run the TensorFlow Lite conversion command: tflite_convert --graph_def_file=path_to_file/firenet.pb --output_format=TFLITE --output_file=firenet_v1__new.tflite --input_arrays=input --output_arrays=final_result
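If the CLI keeps failing, the same conversion can be attempted through the TF 1.x Python API, which often gives a clearer stack trace. The sketch below is self-contained: it builds a tiny toy frozen graph standing in for firenet.pb (the layer shapes and file names are illustrative assumptions, not FireNet's real architecture; only the "input"/"final_result" node names mirror the flags used above) and runs TFLiteConverter.from_frozen_graph on it.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Toy stand-in for the graph produced by firenet-to-protobuf.py; the
# layer sizes are assumptions, while the "input"/"final_result" node
# names mirror the --input_arrays/--output_arrays flags above.
graph = tf.Graph()
with graph.as_default():
    x = tf.compat.v1.placeholder(tf.float32, [1, 4], name="input")
    w = tf.compat.v1.Variable(tf.ones([4, 2]))
    tf.identity(tf.matmul(x, w), name="final_result")
    with tf.compat.v1.Session() as sess:
        sess.run(tf.compat.v1.global_variables_initializer())
        # Freeze variables into constants so the GraphDef is self-contained.
        frozen = tf.compat.v1.graph_util.convert_variables_to_constants(
            sess, graph.as_graph_def(), ["final_result"])

tf.io.write_graph(frozen, ".", "toy.pb", as_text=False)

# Python-API equivalent of the tflite_convert invocation.
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    "toy.pb", input_arrays=["input"], output_arrays=["final_result"])
tflite_model = converter.convert()
with open("toy.tflite", "wb") as f:
    f.write(tflite_model)
```

Converting the real firenet.pb this way should surface the same Dropout node error, but with the failing node named directly in the Python traceback.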
Error output received:
pi@raspberrypi:~ $ tflite_convert --graph_def_file=/home/pi/Downloads/FireDetection/code/firenet.pb --output_format=TFLITE --output_file=firenet_v1__new.tflite --input_arrays=input --output_arrays=final_result
WARNING: Logging before flag parsing goes to stderr.
W0729 04:04:35.524526 3069684432 deprecation_wrapper.py:118] From /usr/local/lib/python3.7/dist-packages/tensorflow/__init__.py:98: The name tf.AUTO_REUSE is deprecated. Please use tf.compat.v1.AUTO_REUSE instead.
W0729 04:04:35.525118 3069684432 deprecation_wrapper.py:118] From /usr/local/lib/python3.7/dist-packages/tensorflow/__init__.py:98: The name tf.AttrValue is deprecated. Please use tf.compat.v1.AttrValue instead.
W0729 04:04:35.525362 3069684432 deprecation_wrapper.py:118] From /usr/local/lib/python3.7/dist-packages/tensorflow/__init__.py:98: The name tf.COMPILER_VERSION is deprecated. Please use tf.version.COMPILER_VERSION instead.
W0729 04:04:35.525576 3069684432 deprecation_wrapper.py:118] From /usr/local/lib/python3.7/dist-packages/tensorflow/__init__.py:98: The name tf.CXX11_ABI_FLAG is deprecated. Please use tf.sysconfig.CXX11_ABI_FLAG instead.
W0729 04:04:35.525793 3069684432 deprecation_wrapper.py:118] From /usr/local/lib/python3.7/dist-packages/tensorflow/__init__.py:98: The name tf.ConditionalAccumulator is deprecated. Please use tf.compat.v1.ConditionalAccumulator instead.
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/framework/importer.py", line 428, in import_graph_def
graph._c_graph, serialized, options) # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: NodeDef expected inputs '' do not match 1 inputs specified; Op<name=Const; signature= -> output:dtype; attr=value:tensor; attr=dtype:type>; NodeDef: {{node Dropout_1/cond/dropout/random_uniform/max}}
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/tflite_convert", line 10, in <module>
sys.exit(main())
File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/lite/python/tflite_convert.py", line 503, in main
app.run(main=run_main, argv=sys.argv[:1])
File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/platform/app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 300, in run
_run_main(main, args)
File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 251, in _run_main
sys.exit(main(argv))
File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/lite/python/tflite_convert.py", line 499, in run_main
_convert_tf1_model(tflite_flags)
File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/lite/python/tflite_convert.py", line 124, in _convert_tf1_model
converter = _get_toco_converter(flags)
File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/lite/python/tflite_convert.py", line 111, in _get_toco_converter
return converter_fn(**converter_kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/lite/python/lite.py", line 633, in from_frozen_graph
_import_graph_def(graph_def, name="")
File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/framework/importer.py", line 432, in import_graph_def
raise ValueError(str(e))
ValueError: NodeDef expected inputs '' do not match 1 inputs specified; Op<name=Const; signature= -> output:dtype; attr=value:tensor; attr=dtype:type>; NodeDef: {{node Dropout_1/cond/dropout/random_uniform/max}}
Computing Environment (please complete the following information):
- OS: Raspbian Buster
- TensorFlow Version: 1.13.1
- OpenCV Version: 3.2.0
Additional context
Issue Analytics
- State:
- Created: 4 years ago
- Comments: 7 (4 by maintainers)
Top GitHub Comments
Reading here - https://www.tensorflow.org/lite/convert/cmdline_examples - it appears I may be correct on the command line arguments front.
Training with the dropout layers commented out.
That is really great to know.
It’s okay, I figured it out. The solution is to build TensorFlow from source with Bazel, with NEON instructions enabled in the build flags.
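The dropout workaround mentioned in the comments ("training with the dropout layers commented out") can also be made conditional rather than commented out. The sketch below is a hypothetical Keras stand-in, not FireNet's actual TFLearn definition: dropout is only inserted when building the training model, so the exported inference graph contains no Dropout_*/cond/* nodes of the kind the converter rejected in the traceback above.

```python
import tensorflow as tf

def build_firenet_like(training: bool) -> tf.keras.Sequential:
    # Hypothetical stand-in for the FireNet definition: layer sizes are
    # illustrative assumptions. Dropout is only added for training, so a
    # model built with training=False freezes to a graph with no dropout
    # conditionals for the TFLite converter to choke on.
    layers = [
        tf.keras.layers.Conv2D(64, 5, activation="relu",
                               input_shape=(224, 224, 3)),
        tf.keras.layers.MaxPool2D(3),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
    ]
    if training:
        layers.append(tf.keras.layers.Dropout(0.5))
    layers.append(tf.keras.layers.Dense(2, activation="softmax"))
    return tf.keras.Sequential(layers)

train_model = build_firenet_like(training=True)
inference_model = build_firenet_like(training=False)
```

Exporting the inference variant (after copying the trained weights into it) sidesteps the NodeDef mismatch without patching the converter or rebuilding TensorFlow.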