Universal Sentence Encoder model runs very slowly after embedding large datasets
See original GitHub issue

- TensorFlow.js version: 2.0.1
- Node version: 12.18.0
- OS: Windows 7
Prerequisite:
yarn add @tensorflow/tfjs @tensorflow/tfjs-node @tensorflow-models/universal-sentence-encoder
Steps to Reproduce:
1. Download and unzip debug use model.zip
2. Run `node checktime.js` and note down the time taken to execute.
3. Run `node embedlargedata.js`; this script embeds a large amount of data using the Universal Sentence Encoder model (takes around 1 hr).
4. Run `node checktime.js` again and note down the time taken to execute.

You will observe that after embedding a large amount of data, the Universal Sentence Encoder model runs very slowly even on small datasets.
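The `checktime.js` script itself is only available in the zip, but a measurement like the one it performs can be sketched with Node's built-in high-resolution timer. In this sketch the model call is a stub (the real script would call `model.embed()` from `@tensorflow-models/universal-sentence-encoder`); the harness runs stand-alone:

```javascript
// Hypothetical stand-in for model.embed(sentences); swap in the real USE
// model call when the packages from the Prerequisite step are installed.
async function embed(sentences) {
  // USE produces one 512-dimensional embedding per input sentence.
  return sentences.map(() => new Float32Array(512));
}

// Time a single embedding pass, in milliseconds.
async function timeEmbedding(sentences) {
  const start = process.hrtime.bigint();
  await embed(sentences);
  const end = process.hrtime.bigint();
  return Number(end - start) / 1e6;
}

timeEmbedding(['hello world', 'how are you']).then(ms => {
  console.log(`embedding took ${ms.toFixed(2)} ms`);
});
```

Running this before and after `embedlargedata.js` (with the stub replaced by the real model call) makes the slowdown measurable rather than anecdotal.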
Issue Analytics
- State:
- Created: 3 years ago
- Comments: 5
Top Results From Across the Web
javascript - Universal Sentence Encoder tensorflowjs optimize ...
The time taken to create embeddings is too large as I need to create embeddings for more than 1000 sentences which takes around...

tensorflow hub Universal Sentence Encoder prediction kept ...
As I generate more batches, it's getting slower to predict embeddings. I was wondering if there is a better way to do this?

Experiments Using Universal Sentence Encoder Embeddings ...
Running this model with equivalent speed to the DAN, assuming perfect linear speedup, would require 340 cores. ACCURACY. Given the enormous ...

Common issues | TensorFlow Hub
Often this is a problem specific to the machine running the code and not an issue with the library. Here is a list...

Universal Sentence Encoder - Google Research
We present models for encoding sentences into embedding vectors that specifically target transfer learning to other NLP tasks. The models are efficient and ...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@pyu10055 I think the issue is that on the benchmark page we get the USE output data as part of the `predict` function, unlike all our other models, so the benchmarking script's tensor cleanup mechanism is never applied to the USE's output tensors. I sent a fix here: https://github.com/tensorflow/tfjs/pull/3510
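The cleanup failure described in this comment can be illustrated without tfjs itself. The sketch below models a backend that keeps every allocated tensor registered until it is explicitly disposed (`FakeBackend` and its methods are hypothetical names; in real TensorFlow.js the analogous live count is `tf.memory().numTensors`, and buffers are released via `tensor.dispose()` or inside `tf.tidy()`). When callers never dispose, the registry grows with every embed call, which matches the progressive slowdown reported above:

```javascript
// Toy backend: tracks every live tensor until it is explicitly disposed.
class FakeBackend {
  constructor() { this.live = new Set(); }
  alloc() {
    const t = { dispose: () => this.live.delete(t) };
    this.live.add(t);
    return t;
  }
  numTensors() { return this.live.size; }
}

const backend = new FakeBackend();

// Leaky pattern: the output tensor is never disposed, so it stays registered.
function embedLeaky(batch) {
  return backend.alloc();
}

// Clean pattern: copy the data out, then dispose the tensor -- the mirror of
// calling embeddings.array() followed by embeddings.dispose() in real tfjs.
function embedClean(batch) {
  const t = backend.alloc();
  const data = 'copied-out data';
  t.dispose();
  return data;
}

for (let i = 0; i < 1000; i++) embedLeaky(['x']);
console.log(backend.numTensors()); // 1000 leaked tensors

for (let i = 0; i < 1000; i++) embedClean(['x']);
console.log(backend.numTensors()); // still 1000 -- the clean path added none
```

In the real repro, logging `tf.memory().numTensors` before and after `embedlargedata.js` would confirm whether tensors are accumulating this way.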