
Issues with quantized models


As far as I can see, nearly every model that I’m quantizing is reporting this warning:

quantize: Skipping TEXCOORD_0; out of supported range.

Then when I attempt to view the model, it loads but is invisible.

For example, with this unprocessed model: Car_csi4.zip

[Screenshot: the unprocessed car model renders correctly]

If I run

 gltf-transform quantize .\model.gltf .\model_quant.glb

This is the output:

warn: quantize: Skipping TEXCOORD_0; out of supported range.
warn: quantize: Skipping TEXCOORD_0; out of supported range.
warn: quantize: Skipping TEXCOORD_0; out of supported range.
warn: quantize: Skipping TEXCOORD_0; out of supported range.
warn: quantize: Skipping TEXCOORD_0; out of supported range.
warn: quantize: Skipping TEXCOORD_0; out of supported range.
warn: quantize: Skipping TEXCOORD_0; out of supported range.
warn: quantize: Skipping TEXCOORD_0; out of supported range.
warn: quantize: Skipping TEXCOORD_0; out of supported range.
info: model.gltf (1.39 MB) → model_quant.glb (1.06 MB)
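
(For reference, the same transform can be run through the glTF-Transform scripting API, which makes it easier to experiment with options. A minimal sketch, assuming recent @gltf-transform/core and @gltf-transform/functions packages; the quantizeTexcoord bit depth is an illustrative value, not necessarily what the CLI used, and io.read/io.write were synchronous in older releases.)

    import { NodeIO } from '@gltf-transform/core';
    import { quantize } from '@gltf-transform/functions';

    async function main() {
      const io = new NodeIO();
      const document = await io.read('model.gltf');

      // Quantize vertex attributes; quantizeTexcoord is the bit depth
      // used for UVs (12 here is an illustrative choice).
      await document.transform(quantize({ quantizeTexcoord: 12 }));

      await io.write('model_quant.glb', document);
    }

    main();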

And the resulting quantized model:

model_quant.zip

[Screenshot: the quantized model loads but renders as blank/invisible]

As you can see, the quantized model also reports a whole load of new validation errors.

If I run gltfpack on the original model instead, all the errors are removed and it loads correctly.
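
(To see which UV values trip the warning, a hedged inspection sketch like the one below can print the min/max of every TEXCOORD_0 accessor. My assumption is that the quantizer skips UV sets whose values fall outside the range it can store as normalized integers, e.g. outside [0, 1]; the traversal and getMin/getMax calls are from the @gltf-transform/core API as I understand it.)

    import { NodeIO } from '@gltf-transform/core';

    async function main() {
      const io = new NodeIO();
      const document = await io.read('model.gltf');

      // Print the value range of each TEXCOORD_0 attribute in the file.
      for (const mesh of document.getRoot().listMeshes()) {
        for (const prim of mesh.listPrimitives()) {
          const uv = prim.getAttribute('TEXCOORD_0');
          if (!uv) continue;
          console.log(
            `${mesh.getName() || '(unnamed mesh)'}: ` +
            `TEXCOORD_0 min=${uv.getMin([])} max=${uv.getMax([])}`
          );
        }
      }
    }

    main();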

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 6 (4 by maintainers)

Top GitHub Comments

1 reaction
donmccurdy commented, May 22, 2021

Same, yes!

0 reactions
donmccurdy commented, May 22, 2021

> I think the draco errors are still occurring though

Between the ‘byteStride’ fix and cleaning up the empty accessors (https://github.com/donmccurdy/glTF-Transform/issues/259#issuecomment-846364403), it seems like Draco is working on my side now (tested with the car example), but maybe something else is different here, or it’s relying on the v0.11.1 changes… I went ahead and published v0.11.1; let me know if that doesn’t resolve it!
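
(For anyone hitting this before upgrading: the “cleaning up the empty accessors” workaround mentioned above can be scripted. A hedged sketch; prune() is the real @gltf-transform/functions transform for removing unreferenced properties, but the zero-count attribute check is my own illustration, not the exact fix from the linked comment.)

    import { NodeIO } from '@gltf-transform/core';
    import { prune } from '@gltf-transform/functions';

    async function main() {
      const io = new NodeIO();
      const document = await io.read('model.gltf');

      // Detach vertex attributes whose accessors contain zero elements.
      for (const mesh of document.getRoot().listMeshes()) {
        for (const prim of mesh.listPrimitives()) {
          for (const semantic of prim.listSemantics()) {
            const accessor = prim.getAttribute(semantic);
            if (accessor && accessor.getCount() === 0) {
              prim.setAttribute(semantic, null);
            }
          }
        }
      }

      // Remove whatever is now unreferenced (including those accessors).
      await document.transform(prune());
      await io.write('model_clean.glb', document);
    }

    main();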
