Advice on multi-GPU support?
Hi Ender, thanks for your work!
There have been some requests for multi-GPU support (e.g. #51), and I am now trying to write a multi-GPU version based on your code.
However, after looking into the code, it seems that the current structure does not lend itself to multi-GPU training. For example, if I modify train_val.py in this way:
tower_grads, losses, scopes = [], [], []
with tf.variable_scope(tf.get_variable_scope()):
    for i in range(2):
        with tf.device("/gpu:" + str(i)):
            with tf.name_scope("tower_" + str(i)) as scope:
                # Build the main computation graph
                layers = self.net.create_architecture(sess, 'TRAIN', self.num_classes, tag='default',
                                                      anchor_scales=cfg.ANCHOR_SCALES,
                                                      anchor_ratios=cfg.ANCHOR_RATIOS)
                # Define the loss
                loss = layers['total_loss']
                losses.append(loss)
                # Reuse the model variables for the remaining towers
                tf.get_variable_scope().reuse_variables()
                grads = self.optimizer.compute_gradients(loss)
                tower_grads.append(grads)
                scopes.append(scope)
# Average the gradients across the towers
gvs = self.average_gradients(tower_grads)
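(Note that average_gradients is not part of the original train_val.py; presumably it is the standard tower-averaging helper from TensorFlow's multi-GPU CIFAR-10 example. A minimal sketch of that helper, for reference:)

```python
import tensorflow as tf

def average_gradients(tower_grads):
    """Average gradients across all towers.

    tower_grads is a list (one entry per GPU) of lists of (gradient, variable)
    pairs as returned by optimizer.compute_gradients().  Returns a single list
    of (averaged_gradient, variable) pairs.
    """
    average_grads = []
    # zip(*tower_grads) groups the (grad, var) pairs that belong to the same
    # variable across all towers.
    for grad_and_vars in zip(*tower_grads):
        grads = [tf.expand_dims(g, 0) for g, _ in grad_and_vars]
        grad = tf.reduce_mean(tf.concat(grads, axis=0), 0)
        # The variable itself is shared across towers, so the first tower's
        # copy can be used.
        average_grads.append((grad, grad_and_vars[0][1]))
    return average_grads
```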
However, this modification cannot work: the network class has only one “self.image”, so an error of
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'tower_0/Placeholder' with dtype float
will be thrown.
Can you give any advice on how to implement a multi-GPU version of this code?
Many thanks.
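For background on the error above: create_architecture() builds a fresh input placeholder inside each tower, but the network object only keeps a handle to the last one, so 'tower_0/Placeholder' is never fed. The usual TF1 multi-tower pattern keeps a handle to every tower's inputs and feeds all of them on each step. Below is a minimal, self-contained sketch of that pattern with a toy model; it is not this repo's API or a tested patch, just an illustration of the placeholder bookkeeping.

```python
import numpy as np
import tensorflow as tf

num_gpus = 2
tower_inputs, tower_losses = [], []

with tf.variable_scope(tf.get_variable_scope()):
    for i in range(num_gpus):
        with tf.device('/gpu:%d' % i), tf.name_scope('tower_%d' % i):
            # Each tower builds its own input placeholder; every one of them
            # must appear in the feed_dict at run time.
            image = tf.placeholder(tf.float32, [None, 16], name='image')
            tower_inputs.append(image)
            w = tf.get_variable('w', [16, 1])  # model variable, shared across towers
            tower_losses.append(tf.reduce_mean(tf.square(tf.matmul(image, w))))
            # Reuse the model variables for the remaining towers.
            tf.get_variable_scope().reuse_variables()

loss = tf.add_n(tower_losses) / num_gpus

# allow_soft_placement lets the graph still run on machines with fewer GPUs.
with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
    sess.run(tf.global_variables_initializer())
    # Feed a separate slice of the mini-batch to every tower's placeholder.
    feed_dict = {ph: np.random.rand(4, 16).astype(np.float32) for ph in tower_inputs}
    print(sess.run(loss, feed_dict=feed_dict))
```

Another common approach is to feed one large batch through a single placeholder and tf.split it across the towers inside the graph, which avoids the per-tower feed_dict entries entirely.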
Issue Analytics
- Created 6 years ago
- Comments: 10 (4 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I recently wrote one with multi-GPU support: https://github.com/ppwwyyxx/tensorpack/tree/master/examples/FasterRCNN
It seems the errors are caused by the nms() used inside tf.py_func. When I changed it to py_nms, the errors were solved; however, the running time increased a lot.
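For reference, "py_nms" here presumably refers to the pure-Python/NumPy NMS from py-faster-rcnn (py_cpu_nms), swapped in for the compiled CPU/GPU nms() that the comment says is called inside tf.py_func. A sketch of that function; it is much slower because the greedy suppression loop runs in Python rather than in a compiled kernel:

```python
import numpy as np

def py_nms(dets, thresh):
    """Pure-NumPy non-maximum suppression (the classic py_cpu_nms).

    dets is an (N, 5) array of [x1, y1, x2, y2, score]; returns the indices
    of the boxes to keep.
    """
    x1, y1, x2, y2, scores = dets[:, 0], dets[:, 1], dets[:, 2], dets[:, 3], dets[:, 4]
    areas = (x2 - x1 + 1) * (y2 - y1 + 1)
    order = scores.argsort()[::-1]  # boxes sorted by score, highest first

    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the top-scoring box with all remaining boxes.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        w = np.maximum(0.0, xx2 - xx1 + 1)
        h = np.maximum(0.0, yy2 - yy1 + 1)
        inter = w * h
        ovr = inter / (areas[i] + areas[order[1:]] - inter)
        # Drop boxes that overlap the kept box by more than the threshold.
        order = order[np.where(ovr <= thresh)[0] + 1]
    return keep
```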