
TFLite model conversion

See original GitHub issue

Describe the bug: After converting the model to a .pb file using firenet-to-protobuf, I am trying to convert it to a TFLite model for inference on a Google Coral USB Accelerator. However, when using the tflite_convert command line tool I encounter the error shown in the output section below, and there is no resolution as of yet. Is the dropout layer the issue?

To Reproduce:

  1. Convert the model to a .pb file using firenet-to-protobuf.py.
  2. Run the TensorFlow Lite conversion command: tflite_convert --graph_def_file=path_to_file/firenet.pb --output_format=TFLITE --output_file=firenet_v1__new.tflite --input_arrays=input --output_arrays=final_result
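For anyone following along, the same conversion can be driven from the TF 1.x Python API instead of the command line. This is a minimal sketch, assuming TensorFlow 1.x (where tf.lite.TFLiteConverter.from_frozen_graph exists) and reusing the input/output array names from the command above:

```python
# Sketch: Python-API equivalent of the tflite_convert invocation above.
# Assumes TensorFlow 1.x; from_frozen_graph was removed in TF 2.x.

def convert_frozen_graph(pb_path, tflite_path):
    import tensorflow as tf  # imported lazily; TF 1.x assumed

    converter = tf.lite.TFLiteConverter.from_frozen_graph(
        graph_def_file=pb_path,
        input_arrays=["input"],          # matches --input_arrays
        output_arrays=["final_result"],  # matches --output_arrays
    )
    tflite_model = converter.convert()
    with open(tflite_path, "wb") as f:
        f.write(tflite_model)
```

Note that this goes through the same import_graph_def path as the command-line tool, so it would hit the same NodeDef error on a graph containing the broken dropout nodes.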

Error output received

pi@raspberrypi:~ $ tflite_convert --graph_def_file=/home/pi/Downloads/FireDetection/code/firenet.pb --output_format=TFLITE --output_file=firenet_v1__new.tflite  --input_arrays=input --output_arrays=final_result
WARNING: Logging before flag parsing goes to stderr.
W0729 04:04:35.524526 3069684432 deprecation_wrapper.py:118] From /usr/local/lib/python3.7/dist-packages/tensorflow/__init__.py:98: The name tf.AUTO_REUSE is deprecated. Please use tf.compat.v1.AUTO_REUSE instead.

W0729 04:04:35.525118 3069684432 deprecation_wrapper.py:118] From /usr/local/lib/python3.7/dist-packages/tensorflow/__init__.py:98: The name tf.AttrValue is deprecated. Please use tf.compat.v1.AttrValue instead.

W0729 04:04:35.525362 3069684432 deprecation_wrapper.py:118] From /usr/local/lib/python3.7/dist-packages/tensorflow/__init__.py:98: The name tf.COMPILER_VERSION is deprecated. Please use tf.version.COMPILER_VERSION instead.

W0729 04:04:35.525576 3069684432 deprecation_wrapper.py:118] From /usr/local/lib/python3.7/dist-packages/tensorflow/__init__.py:98: The name tf.CXX11_ABI_FLAG is deprecated. Please use tf.sysconfig.CXX11_ABI_FLAG instead.

W0729 04:04:35.525793 3069684432 deprecation_wrapper.py:118] From /usr/local/lib/python3.7/dist-packages/tensorflow/__init__.py:98: The name tf.ConditionalAccumulator is deprecated. Please use tf.compat.v1.ConditionalAccumulator instead.

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/framework/importer.py", line 428, in import_graph_def
    graph._c_graph, serialized, options)  # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: NodeDef expected inputs '' do not match 1 inputs specified; Op<name=Const; signature= -> output:dtype; attr=value:tensor; attr=dtype:type>; NodeDef: {{node Dropout_1/cond/dropout/random_uniform/max}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/bin/tflite_convert", line 10, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/lite/python/tflite_convert.py", line 503, in main
    app.run(main=run_main, argv=sys.argv[:1])
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 300, in run
    _run_main(main, args)
  File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/lite/python/tflite_convert.py", line 499, in run_main
    _convert_tf1_model(tflite_flags)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/lite/python/tflite_convert.py", line 124, in _convert_tf1_model
    converter = _get_toco_converter(flags)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/lite/python/tflite_convert.py", line 111, in _get_toco_converter
    return converter_fn(**converter_kwargs)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/lite/python/lite.py", line 633, in from_frozen_graph
    _import_graph_def(graph_def, name="")
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/framework/importer.py", line 432, in import_graph_def
    raise ValueError(str(e))
ValueError: NodeDef expected inputs '' do not match 1 inputs specified; Op<name=Const; signature= -> output:dtype; attr=value:tensor; attr=dtype:type>; NodeDef: {{node Dropout_1/cond/dropout/random_uniform/max}}
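The failing node (Dropout_1/cond/dropout/random_uniform/max) sits inside a training-only dropout conditional, so one hedged workaround is to strip training nodes from the frozen graph before conversion. This sketch uses TF 1.x's graph_util.remove_training_nodes; it is an assumption that this cleans up enough of the dropout subgraph for this particular model — retraining with the dropout layers removed, as discussed in the comments below, is the surer fix.

```python
# Sketch: strip training-only nodes from a frozen GraphDef before
# attempting TFLite conversion. TF 1.x assumed; untested on this model.

def strip_dropout(pb_in, pb_out):
    import tensorflow as tf  # imported lazily; TF 1.x assumed

    graph_def = tf.compat.v1.GraphDef()
    with open(pb_in, "rb") as f:
        graph_def.ParseFromString(f.read())
    # remove_training_nodes prunes Identity/CheckNumerics and other
    # training-only ops from the graph
    cleaned = tf.compat.v1.graph_util.remove_training_nodes(graph_def)
    with open(pb_out, "wb") as f:
        f.write(cleaned.SerializeToString())
```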

Computing Environment:

  • OS: Raspbian Buster
  • TensorFlow Version: 1.13.1
  • OpenCV Version: 3.2.0

Additional context

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 7 (4 by maintainers)

Top GitHub Comments

1 reaction
tobybreckon commented, Jul 29, 2019

Reading here - https://www.tensorflow.org/lite/convert/cmdline_examples - it appears I may be correct on the command line arguments front.

0 reactions
ashwin-phadke commented, Jul 31, 2019

Do you mean “training” with the dropout layers commented out, or runtime test/inference using the pre-trained model?

Training with the dropout layers commented out.

I ask, as this repo doesn’t provide code to do the training part itself.

On my todo list is as follows:

  1. change lines 48/51 in firenet.py to sit inside an if statement on the training boolean flag within the construct_firenet() routine (and similarly for the other 2 models in the repo)
  2. test runtime test/inference with and without the training flag set over the first N frames of the sample mp4 video provided with the models to see if the output differs
  3. possibly wrap this test into a python unit test framework specific to this repo https://docs.python.org/3/library/unittest.html
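The unit-test idea in step 3 could start from a skeleton like the following. The outputs_match helper and the placeholder data are illustrative stand-ins; a real test would collect per-frame predictions from the sample mp4 with and without the training flag set.

```python
import unittest

def outputs_match(a, b, tol=1e-5):
    """Compare two per-frame prediction sequences within a tolerance."""
    if len(a) != len(b):
        return False
    return all(abs(x - y) <= tol for x, y in zip(a, b))

class FireNetOutputTest(unittest.TestCase):
    def test_training_flag_does_not_change_outputs(self):
        # Placeholder data: in the real test these would be model outputs
        # over the first N frames, with and without the training flag set.
        with_flag = [0.91, 0.12, 0.88]
        without_flag = [0.91, 0.12, 0.88]
        self.assertTrue(outputs_match(with_flag, without_flag))
```

Such a test would then be run with the standard runner, e.g. python -m unittest.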

That is really great to know.

re: Edit 2 - sorry, you are at the end of my tflite_convert knowledge.

It’s okay, I figured it out. The solution is to build TensorFlow from source with Bazel, with NEON instructions enabled via the build flags.

