model works in nodejs and fails in browser
saved_model with tfjs-node in nodejs:
const model = await tf.node.loadSavedModel(modelPath);
const result = await model.predict(image);
pass: no issues
graph_model with tfjs-node in nodejs:
const model = await tf.loadGraphModel(modelPath);
const result = await model.predict(image);
fail: error due to dynamic ops:
This execution contains the node 'filtered_detections/map/while/Exit_4', which has the dynamic op 'Exit'
why? it's the same model, just converted from saved_model to graph_model using tensorflowjs_converter in the simplest possible way
ok, let's try again with executeAsync:
const model = await tf.loadGraphModel(modelPath);
const result = await model.executeAsync(image);
pass: now it works
graph_model with tfjs in the browser using the webgl backend:
const model = await tf.loadGraphModel(modelPath);
const result = await model.executeAsync(image);
fail: no matter what, I cannot get this to work in the browser (same model and code that work in nodejs)
this is a typical error:
Error: Size(442368) must match the product of shape 65504,4
id @ util.js:281
oS @ Reshape.js:26
kernelFunc @ engine.js:431
(anonymous) @ engine.js:483
scopedRun @ engine.js:324
runKernelFunc @ engine.js:481
reshape_ @ reshape.js:58
reshape__op @ operation.js:44
executeOp$g @ transformation_executor.js:34
(anonymous) @ operation_executor.js:78
(anonymous) @ engine.js:314
scopedRun @ engine.js:324
tidy @ engine.js:313
tidy @ globals.js:173
(anonymous) @ operation_executor.js:78
executeOp$h @ operation_executor.js:90
processStack @ graph_executor.js:390
executeWithControlFlow @ graph_executor.js:350
_executeAsync @ graph_executor.js:285
executeAsync @ graph_executor.js:257
executeAsync @ graph_model.js:304
environment: tfjs 2.6.0 on ubuntu 20.10
Issue Analytics
- State:
- Created 3 years ago
- Comments: 7 (4 by maintainers)
Top GitHub Comments
@vladmandic TensorFlow.js does not dispose GL textures by default; this is because we want to reuse the textures across consecutive runs. If the model is very large and generates a huge number of intermediate textures, this can cause a GL OOM. We do have a flag to enable aggressive texture disposal: https://github.com/tensorflow/tfjs/blob/038522b8f228962c1cc0d179a8e445df8ea419a7/tfjs-backend-webgl/src/flags_webgl.ts#L178 You can set WEBGL_DELETE_TEXTURE_THRESHOLD (in bytes) to trigger texture disposal when the total texture size exceeds that limit.