Does tfdeploy support multiple outputs in a graph?
See original GitHub issue.

In the example, tfdeploy gets the result of y1 = W1 x1 + b1 with
result1 = y1.eval({x1: batch})
If I have a graph with two outputs, y2 = W2 (W1 x1 + b1) + b2 and y3 = W3 (W1 x1 + b1) + b3, in TensorFlow I can use
sess.run([y2, y3])
to get y2 and y3 simultaneously while avoiding redundant computation (of y1 = W1 x1 + b1).
Is it possible to do the same thing with tfdeploy? Or do I have to use two commands like below:
result2 = y2.eval({x1: batch})
result3 = y3.eval({x1: batch})
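For reference, the redundancy in question can be seen in a plain numpy sketch of the same graph (the names W1, b1, etc. mirror the question; the shapes and values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)
W3, b3 = rng.normal(size=(2, 4)), rng.normal(size=2)
x1 = rng.normal(size=3)

# Evaluating y2 and y3 independently would recompute the shared node y1;
# computing y1 once and reusing it avoids the second W1 matmul.
y1 = W1 @ x1 + b1
y2 = W2 @ y1 + b2
y3 = W3 @ y1 + b3
```

This is exactly what `sess.run([y2, y3])` does internally in TensorFlow: shared upstream nodes are evaluated once per run.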
Issue Analytics
- Created 7 years ago
- Comments: 6 (3 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Yep.
TensorFlow has the advantage of being fully backed by a customized and optimized C++ backend that performs all heavy operations.
tfdeploy, on the other hand, essentially relies on bare numpy operations, which sometimes have to be combined to exactly resemble TensorFlow's behavior. Conv and pooling ops are good examples. The drawback is that these combinations are implemented and executed in Python. And sometimes even numpy functions aren't completely backed by equivalent C++ functions but use different Python calls to achieve the desired functionality.
Concerning the tfdeploy conv and pooling ops: I have one or two ideas that might improve the performance. And maybe it's worth looking into scipy convolve, but this will also require some preprocessing, e.g., to ensure the same padding rules.
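For illustration, here is a numpy-only sketch of the padding preprocessing the comment alludes to, reproducing TF-style "SAME" zero padding by hand (the shapes and kernel are arbitrary; note also that TF's conv2d is a cross-correlation, so a scipy `convolve` drop-in would additionally need the kernel flipped):

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 2-D cross-correlation with TF-style SAME zero padding."""
    kh, kw = k.shape
    # SAME padding: total pad = kernel size - 1, with the extra pixel
    # on the bottom/right when a kernel dimension is even.
    pt, pl = (kh - 1) // 2, (kw - 1) // 2
    xp = np.pad(x, ((pt, kh - 1 - pt), (pl, kw - 1 - pl)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = (xp[i:i + kh, j:j + kw] * k).sum()
    return out

x = np.arange(16.0).reshape(4, 4)
out = conv2d_same(x, np.ones((3, 3)))
assert out.shape == x.shape  # SAME padding preserves the input shape
```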
Hi @ugtony,
yes, this is also possible in tfdeploy, although this feature is quite hidden as there’s no such thing as a session object that can evaluate tensors simultaneously.
Per `eval` invocation, the intermediate results of all depending tensors and ops are cached. `eval` actually takes a second argument: `_uuid` is used as the cache key; it is initially generated when `None` and passed on to all depending eval calls. So all you have to do is create one uuid yourself and pass it to both eval calls.

I only tested this feature, but never had to use it productively, so feedback is appreciated 😉
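A minimal, self-contained sketch of the per-uuid caching mechanism described above (this illustrates the pattern only and is not tfdeploy's actual implementation; the `_uuid` parameter name comes from the comment, everything else here is assumed):

```python
import uuid

class Tensor:
    """Toy tensor mimicking an eval(feed_dict, _uuid) cache as described."""
    _cache = {}  # (uuid, node id) -> cached value

    def __init__(self, fn, *deps):
        self.fn = fn      # computes this node from feed_dict and deps
        self.deps = deps  # upstream Tensor objects

    def eval(self, feed_dict=None, _uuid=None):
        if _uuid is None:             # a fresh uuid starts a new cache scope
            _uuid = uuid.uuid4()
        key = (_uuid, id(self))
        if key not in Tensor._cache:  # compute each node at most once per uuid
            args = [d.eval(feed_dict, _uuid) for d in self.deps]
            Tensor._cache[key] = self.fn(feed_dict, *args)
        return Tensor._cache[key]

calls = {"y1": 0}

def make_y1(feed_dict):
    calls["y1"] += 1
    return 2 * feed_dict["x"]          # stand-in for W1 x1 + b1

y1 = Tensor(make_y1)
y2 = Tensor(lambda fd, v: v + 1, y1)   # stand-in for W2 y1 + b2
y3 = Tensor(lambda fd, v: v - 1, y1)   # stand-in for W3 y1 + b3

u = uuid.uuid4()
r2 = y2.eval({"x": 10}, u)
r3 = y3.eval({"x": 10}, u)             # shared uuid: y1 is computed only once
```

Passing the same uuid to both calls plays the role of TensorFlow's `sess.run([y2, y3])`: the shared node y1 is evaluated once and its cached value reused.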