Simplify TensorGraph API
I'm finding the TensorGraph API somewhat cumbersome to work with, mainly because you have to call `add_layer()` for every layer, which forces you to create a variable for each one. For example, suppose `a` and `b` are two layers, and as your loss function you want to multiply them elementwise and take the mean of all elements. (This is a real example I came across: `a` contains the cost function for every task, and `b` contains the task weights.) Currently, you have to write something like this:
```python
graph.add_layer(a)
graph.add_layer(b)
mult = Multiply()
graph.add_layer(mult, parents=[a, b])
reducemean = ReduceMean()
graph.add_layer(reducemean, parents=[mult])
graph.set_loss(reducemean)
```
I suggest getting rid of `add_layer()`. You would only need to tell the graph about layers that have special roles, via `set_loss()` or `add_output()`. The parents of each layer would be passed as arguments to its constructor. With this change, the above code could be simplified to
```python
graph.set_loss(ReduceMean(Multiply(a, b)))
```
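To illustrate why `add_layer()` becomes unnecessary, here is a minimal pure-Python sketch of the idea (hypothetical classes, not the actual DeepChem API): each layer records its parents at construction time, so when `set_loss()` is called the graph can discover every reachable ancestor itself by traversal.

```python
class Layer:
    """Hypothetical base layer: parents are given to the constructor."""

    def __init__(self, *parents):
        self.parents = list(parents)

    def collect(self, seen=None):
        """Walk parents to discover every layer reachable from this one."""
        if seen is None:
            seen = []
        for p in self.parents:
            p.collect(seen)
        if self not in seen:
            seen.append(self)
        return seen


class Input(Layer):
    def __init__(self, values):
        super().__init__()
        self.values = values

    def evaluate(self):
        return self.values


class Multiply(Layer):
    def evaluate(self):
        a, b = [p.evaluate() for p in self.parents]
        return [x * y for x, y in zip(a, b)]


class ReduceMean(Layer):
    def evaluate(self):
        vals = self.parents[0].evaluate()
        return sum(vals) / len(vals)


class Graph:
    def set_loss(self, layer):
        # No add_layer() calls: the graph finds every ancestor itself.
        self.layers = layer.collect()
        self.loss = layer


a = Input([1.0, 2.0, 3.0])   # e.g. per-task costs
b = Input([0.5, 0.5, 0.5])   # e.g. task weights
g = Graph()
g.set_loss(ReduceMean(Multiply(a, b)))
print(g.loss.evaluate())  # 1.0
```

Note that the intermediate layers never need a variable name; the expression `ReduceMean(Multiply(a, b))` carries the whole structure.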
Issue Analytics
- State:
- Created 6 years ago
- Comments: 49 (47 by maintainers)
Top GitHub Comments
Can functionality be added to Osprey for https://www.tensorflow.org/api_docs/python/tf/contrib/learn/Evaluable and https://www.tensorflow.org/api_docs/python/tf/contrib/learn/Trainable? As these APIs are in TensorFlow contrib, they are likely to become more widespread. They also allow taking in iterators to deal with larger datasets. Making TensorGraph implement these APIs would also be great.
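The appeal of these interfaces is the iterator-based input: `fit()` can accept an `input_fn` that yields mini-batches, so the dataset never has to fit in memory. The sketch below is modeled loosely on that shape; the class and helper names are hypothetical, not actual Osprey, DeepChem, or tf.contrib.learn code.

```python
def batches(n_samples, batch_size):
    """Yield (start, end) index pairs covering the dataset lazily."""
    for start in range(0, n_samples, batch_size):
        yield start, min(start + batch_size, n_samples)


class IteratorModel:
    """Hypothetical model with a Trainable/Evaluable-style interface."""

    def __init__(self):
        self.steps_run = 0

    def fit(self, input_fn, steps):
        # input_fn returns a fresh iterator of mini-batches on each call,
        # so training can stream data instead of holding x/y arrays.
        for _step, _batch in zip(range(steps), input_fn()):
            self.steps_run += 1
        return self

    def evaluate(self, input_fn):
        # Here we just count batches; a real model would compute metrics.
        return sum(1 for _ in input_fn())


model = IteratorModel()
model.fit(input_fn=lambda: batches(10, 3), steps=4)
print(model.steps_run)  # 4
```

Because `input_fn` is a zero-argument callable, the model can restart iteration for each epoch or evaluation pass without the caller materializing the data.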
Yeah, I'm doing it as part of putting atomic convs in mainline.
https://github.com/lilleswing/deepchem/pull/13