Huge difference between the output from tflite model and kmodel
Hi,
I am trying to convert a tflite model to a kmodel, but when testing the outputs of both models I found a huge difference after conversion.
Here are my tflite model and the converted kmodel:
Archive.zip
I used the following command for the conversion:

```
ncc compile model.tflite model.kmodel -i tflite -o kmodel -t k210 --inference-type uint8 --dataset images --input-mean 0.5 --input-std 0.5
```
My input range is originally [-1, 1].
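As a sanity check on the `--input-mean 0.5 --input-std 0.5` settings, here is a small sketch of the normalization they imply. It assumes (as is common for nncase) that mean and std are applied after the uint8 input is scaled to [0, 1]:

```python
import numpy as np

# Assumed nncase preprocessing: normalized = (x / 255.0 - mean) / std
# With mean = std = 0.5, uint8 pixels [0, 255] map onto [-1, 1],
# matching the model's original input range.
mean, std = 0.5, 0.5
pixels = np.array([0, 128, 255], dtype=np.float32)
normalized = (pixels / 255.0 - mean) / std
print(normalized)  # endpoints are exactly -1.0 and 1.0
```

If the preprocessing convention were different (e.g. mean/std applied to raw [0, 255] values), these flags would produce a very different input distribution, which is one common source of large output mismatches.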
I also tried float inference and the result is the same: the output from the kmodel is on the order of 10^2 while the output from the tflite model is on the order of 10^-2.
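One quick diagnostic for a magnitude mismatch like this is to check whether the two outputs differ by a roughly constant factor. The values below are hypothetical, chosen only to illustrate the reported 10^2 vs 10^-2 gap:

```python
import numpy as np

# Hypothetical outputs illustrating the reported mismatch:
# the kmodel result is ~1e2 while the tflite reference is ~1e-2.
tflite_out = np.array([0.012, -0.034, 0.021])   # assumed reference values
kmodel_out = np.array([120.0, -340.0, 210.0])   # assumed k210 values

# A near-constant ratio suggests a missing (de)quantization scale
# rather than a genuinely wrong computation.
ratio = kmodel_out / tflite_out
print(ratio)  # ratio is ~1e4 for every element, i.e. a uniform scale factor
```

If the ratio is uniform across elements, the problem is likely a scale or preprocessing mismatch; if it varies wildly, the converted graph itself is computing something different.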
Issue Analytics
- State:
- Created 3 years ago
- Comments: 5 (2 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Figured it out with the option `--weights-quantize-threshold`.
I tested with the CI version and the compiled model works fine. However, during compilation it throws a warning:
I would assume this will degrade performance when using the KPU. Is there any solution to this?
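Since the fix reported above is the `--weights-quantize-threshold` option, an invocation might look like the following sketch. The threshold value of 64 is an illustrative assumption, not a value confirmed in the thread; tune it for your own model:

```shell
# Same conversion as in the issue, with a raised weights-quantize threshold.
# Reportedly, layers whose weight range exceeds the threshold are left
# unquantized (kept in float) to avoid accuracy loss.
# The value 64 is illustrative only.
ncc compile model.tflite model.kmodel -i tflite -o kmodel -t k210 \
    --inference-type uint8 --dataset images \
    --input-mean 0.5 --input-std 0.5 \
    --weights-quantize-threshold 64
```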