
Huge difference between the output from tflite model and kmodel

See original GitHub issue

Hi,

I am trying to convert a tflite model to a kmodel, but when testing the outputs of both models I found a huge difference after conversion.

Here is my tflite model and the converted kmodel: Archive.zip. I used the following command for the conversion:

ncc compile model.tflite model.kmodel -i tflite -o kmodel -t k210 --inference-type uint8 --dataset images --input-mean 0.5 --input-std 0.5

My input range is originally [-1, 1].
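For reference, the --input-mean 0.5 --input-std 0.5 flags are consistent with an input range of [-1, 1] under the usual (x - mean) / std preprocessing applied to inputs in [0, 1]. A tiny sketch of that mapping (my reading of the flags, not confirmed against the nncase source):

```python
def normalize(x, mean=0.5, std=0.5):
    """Hypothetical sketch of the (x - mean) / std input preprocessing."""
    return (x - mean) / std

# [0, 1] maps onto [-1, 1] with mean = std = 0.5
print(normalize(0.0))  # -1.0
print(normalize(1.0))  # 1.0
```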

I also tried float inference and the result is the same: the output from the kmodel is on the order of 10^2 while the output from the tflite model is on the order of 10^-2.
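A scale mismatch like this (10^2 vs 10^-2) is easy to quantify once both outputs are dumped to float arrays. A minimal numpy sketch, with hypothetical stand-in arrays for the two outputs:

```python
import numpy as np

def relative_error(a, b, eps=1e-9):
    """Mean elementwise relative error of a against reference b."""
    a = np.asarray(a, dtype=np.float64).ravel()
    b = np.asarray(b, dtype=np.float64).ravel()
    return float(np.mean(np.abs(a - b) / (np.abs(b) + eps)))

# Hypothetical outputs illustrating the reported four-orders-of-magnitude gap
tflite_out = np.full(10, 3e-2)   # tflite result around 1e-2
kmodel_out = np.full(10, 3e2)    # kmodel result around 1e2

err = relative_error(kmodel_out, tflite_out)
print(f"mean relative error: {err:.1f}")  # on the order of 1e4
```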

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 5 (2 by maintainers)

Top GitHub Comments

1 reaction
zye1996 commented, Apr 17, 2020

Figured it out with the option --weights-quantize-threshold.

0 reactions
zye1996 commented, Apr 17, 2020

CI from master https://dev.azure.com/sunnycase/nncase/_build/results?buildId=156&view=artifacts&type=publishedArtifacts

I tested with the CI version and the compiled model works fine. However, during compilation it emits the following warnings:

WARN: Conv2D_3 Fallback to float conv2d due to weights divergence.
WARN: Conv2D_5 Fallback to float conv2d due to weights divergence.
WARN: Conv2D_13 Fallback to float conv2d due to weights divergence.
WARN: Conv2D_21 Fallback to float conv2d due to weights divergence.

I would assume this degrades the performance of using the KPU. Is there any solution to this?
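To gauge how much of the network is affected, the fallback warnings can be tallied from the compiler log; each listed layer runs as float conv2d instead of on the KPU, which presumably costs throughput. A small sketch (the warning format is copied from the log above):

```python
import re

# Compiler log excerpt, as reported above
log = """\
WARN: Conv2D_3 Fallback to float conv2d due to weights divergence.
WARN: Conv2D_5 Fallback to float conv2d due to weights divergence.
WARN: Conv2D_13 Fallback to float conv2d due to weights divergence.
WARN: Conv2D_21 Fallback to float conv2d due to weights divergence.
"""

# Extract the layer name between "WARN:" and the fallback message
pattern = re.compile(r"WARN: (\S+) Fallback to float conv2d")
fallback_layers = pattern.findall(log)
print(f"{len(fallback_layers)} layers fell back: {fallback_layers}")
```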
