
does tfdeploy support multiple output in a graph?


In the example, tfdeploy gets the result of y1 = W1*x1 + b1 by

result1 = y1.eval({x1: batch})

If I have a graph with two outputs, y2 = W2*(W1*x1 + b1) + b2 and y3 = W3*(W1*x1 + b1) + b3, in tensorflow I can use

sess.run([y2, y3])

to get y2 and y3 simultaneously while avoiding redundant computation (of y1 = W1*x1 + b1).

Is it possible to do the same thing with tfdeploy? Or do I have to use two separate calls like below?

result2 = y2.eval({x1: batch})
result3 = y3.eval({x1: batch})

Issue Analytics

  • State: closed
  • Created: 7 years ago
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

riga commented, Dec 27, 2016 (1 reaction)

Yep.

tensorflow has the advantage of being fully backed by a customized, optimized C++ backend that performs all heavy operations.

tfdeploy, on the other hand, essentially relies on bare numpy operations, which sometimes have to be combined to exactly reproduce the behavior of tensorflow. Conv and pooling ops are good examples. The drawback is that these combinations are implemented and executed in Python. And sometimes even numpy functions aren't completely backed by equivalent C++ functions, but instead use different Python calls to achieve the desired functionality.

Concerning the tfdeploy conv and pooling ops: I have one or two ideas that might improve the performance. And maybe it's worth looking into scipy's convolve, but this will also require some preprocessing, e.g. to ensure the same padding rules.
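To illustrate the padding preprocessing mentioned above: tensorflow's "SAME" scheme pads so the output length is ceil(input / stride), splitting the padding with the extra element (if any) at the end, which a plain "valid" convolution routine won't do for you. A minimal 1-D sketch (helper names are my own, not tfdeploy's; note TF's "conv" is really a correlation, so no kernel flip):

```python
import numpy as np

def same_pad_1d(length, k, stride):
    """Zero padding TF's SAME scheme adds along one axis.
    Output size is ceil(length / stride); padding is split with the
    extra element (if any) going to the end."""
    out = -(-length // stride)  # ceil division
    total = max((out - 1) * stride + k - length, 0)
    return total // 2, total - total // 2

def conv1d_same(x, w, stride=1):
    """Pad like SAME, then run a plain 'valid' correlation in numpy."""
    before, after = same_pad_1d(len(x), len(w), stride)
    xp = np.pad(x, (before, after))
    out_len = -(-len(x) // stride)
    return np.array([np.dot(xp[i * stride:i * stride + len(w)], w)
                     for i in range(out_len)])

x = np.arange(5.0)             # [0, 1, 2, 3, 4]
w = np.array([1.0, 1.0, 1.0])
y = conv1d_same(x, w)          # output has the same length as x
```

The same padding computation would have to precede a call into scipy's convolution routines before their results could match tensorflow's.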

riga commented, Dec 22, 2016 (1 reaction)

Hi @ugtony,

yes, this is also possible in tfdeploy, although the feature is quite hidden, as there's no such thing as a session object that can evaluate multiple tensors simultaneously.

Per eval invocation, the intermediate results of all dependent tensors and ops are cached.

The actual signature of eval is:

eval(feed_dict=None, _uuid=None)

_uuid is used as the cache key: it is created when None and passed down to all dependent eval calls. So all you have to do is:

from uuid import uuid4

...

# reuse the same uuid so both calls share cached intermediate results
uuid = uuid4()
result2 = y2.eval({x1: batch}, uuid)
result3 = y3.eval({x1: batch}, uuid)

I only tested this feature but never had to use it in production, so feedback is appreciated 😉
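The caching scheme described above can be sketched with a toy tensor class (this is an illustration of the idea, not tfdeploy's actual code): each node memoizes its result per uuid, so two eval calls sharing a uuid evaluate their common parent only once.

```python
from uuid import uuid4

class Tensor:
    """Toy tensor: fn combines the feed dict and parent results;
    results are cached per uuid so shared parents run only once."""
    def __init__(self, fn, *parents):
        self.fn = fn
        self.parents = parents
        self._cache = {}     # uuid -> computed value
        self.eval_count = 0  # for demonstration only

    def eval(self, feed_dict=None, _uuid=None):
        if _uuid is None:
            _uuid = uuid4()  # fresh cache key per top-level call
        if _uuid not in self._cache:
            self.eval_count += 1
            args = [p.eval(feed_dict, _uuid) for p in self.parents]
            self._cache[_uuid] = self.fn(feed_dict, *args)
        return self._cache[_uuid]

# y1 = 2*x, y2 = y1 + 1, y3 = y1 + 2: y2 and y3 share y1
x = Tensor(lambda feed: feed["x"])
y1 = Tensor(lambda feed, v: 2 * v, x)
y2 = Tensor(lambda feed, v: v + 1, y1)
y3 = Tensor(lambda feed, v: v + 2, y1)

uid = uuid4()
r2 = y2.eval({"x": 3}, uid)  # computes x and y1
r3 = y3.eval({"x": 3}, uid)  # reuses the cached y1
```

With a shared uid, y1.eval_count stays at 1 after both calls; with two fresh uuids it would be 2, duplicating the work the original question wants to avoid.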
