Which config should I use to train a model for Android (to convert later to .tflite format)?
TensorFlow has the TOCO converter (.pb → .tflite), but which config should I use to train the .pb file?

- ssdlite_mobilenet_v1_coco.config
- ssd_mobilenet_v1_300x300_coco14_sync.config
- ssd_mobilenet_v1_quantized_300x300_coco14_sync.config
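For what it's worth, the main difference in the quantized config is a `graph_rewriter` block that turns on quantization-aware training, which is what you want if the end target is a fully quantized .tflite model. A representative fragment (field values are illustrative; check the actual file in `samples/configs` for the real numbers):

```
graph_rewriter {
  quantization {
    # Start inserting fake-quantization ops after this many steps
    delay: 48000
    weight_bits: 8
    activation_bits: 8
  }
}
```

Training with this rewriter bakes quantization ranges into the graph, so the later TOCO conversion to QUANTIZED_UINT8 does not need a separate calibration step.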
Issue Analytics

- Created: 5 years ago
- Comments: 13
Top Results From Across the Web

On-Device Training with TensorFlow Lite
This tutorial uses Python to train and convert a TensorFlow model before incorporating it into an Android app. Get started by installing and ...

TensorFlow Lite Android Example [Beginners] | Analytics Vidhya
The trained TensorFlow model on the disk will convert into TensorFlow Lite file format (.tflite) using the TensorFlow Lite converter. Then we can...

How to Train a YOLOv4 Tiny model and Use TensorFlow Lite
In this post, we walk through how to train an end-to-end custom mobile object detection model. We will use the state...

How to convert a custom model to TensorFlow Lite ... - Heartbeat
In this post, we'll walk through how to convert a custom model to TensorFlow Lite in order to use it to run inference...

Build Android app for object detection (TensorFlow 1.x)
Train a Deep Learning model for custom object detection using TensorFlow 1.x in Google Colab and convert it to a TFLite model for...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Take the help or don’t.
"Our frozen inference graphs are generated using the v1.8.0 release version of Tensorflow and we do not guarantee that these will work with other versions; this being said, each frozen inference graph can be regenerated using your current version of Tensorflow by re-running the exporter, pointing it at the model directory as well as the corresponding config file in samples/configs. "
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md
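Following that advice, the usual TF 1.x Object Detection API workflow was: train with the quantized config, re-export a TFLite-compatible frozen graph from your checkpoint with `export_tflite_ssd_graph.py`, then run TOCO via the `tflite_convert` CLI. A sketch under those assumptions (all paths and the `XXXX` checkpoint step are placeholders; verify the flags against your TensorFlow version's docs):

```shell
# 1. Export a TFLite-compatible frozen graph from the trained checkpoint.
#    Paths and checkpoint step (XXXX) are placeholders.
python object_detection/export_tflite_ssd_graph.py \
    --pipeline_config_path=samples/configs/ssd_mobilenet_v1_quantized_300x300_coco14_sync.config \
    --trained_checkpoint_prefix=training/model.ckpt-XXXX \
    --output_directory=exported \
    --add_postprocessing_op=true

# 2. Convert the frozen graph to .tflite with TOCO (tflite_convert).
#    The SSD postprocessing op is a custom op, hence --allow_custom_ops.
tflite_convert \
    --graph_def_file=exported/tflite_graph.pb \
    --output_file=exported/detect.tflite \
    --input_shapes=1,300,300,3 \
    --input_arrays=normalized_input_image_tensor \
    --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
    --inference_type=QUANTIZED_UINT8 \
    --mean_values=128 \
    --std_dev_values=128 \
    --allow_custom_ops
```

If you trained with one of the non-quantized configs instead, drop the quantization flags (`--inference_type`, `--mean_values`, `--std_dev_values`) to produce a float .tflite model.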
Yes it is, but … the models are trained on other versions. You can bypass some bugs if you export the model before you train … And I just said it could help. I don't know if it truly helps. You are welcome.