
Advice on Multi-GPU support?

See original GitHub issue

Hi Ender, thanks for your work!

There have been some requests for multi-GPU support (e.g. #51). I am now trying to write a multi-GPU version based on your code.

However, after looking into the code, it seems that the current structure does not support multi-GPU training well. For example, if I modify train_val.py in this way:

      # One model replica ("tower") per GPU; variables are shared across towers
      losses = []
      tower_grads = []
      scopes = []
      with tf.variable_scope(tf.get_variable_scope()):
        for i in range(2):
            with tf.device("/gpu:" + str(i)):
                with tf.name_scope("tower_" + str(i)) as scope:
                    # Build the main computation graph for this tower
                    layers = self.net.create_architecture(sess, 'TRAIN', self.num_classes, tag='default',
                                                          anchor_scales=cfg.ANCHOR_SCALES,
                                                          anchor_ratios=cfg.ANCHOR_RATIOS)
                    # Define the loss
                    loss = layers['total_loss']
                    losses.append(loss)

                    # Reuse variables for every tower after the first one
                    tf.get_variable_scope().reuse_variables()

                    # Compute this tower's gradients
                    grads = self.optimizer.compute_gradients(loss)
                    tower_grads.append(grads)
                    scopes.append(scope)
      # Average the gradients across towers
      gvs = self.average_gradients(tower_grads)
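
For reference, average_gradients here is the standard helper from the TensorFlow multi-tower examples; a minimal sketch of what I have in mind (assuming each entry of tower_grads is the list of (gradient, variable) pairs returned by compute_gradients, with no None gradients):

    def average_gradients(self, tower_grads):
        # Average the gradient of each shared variable across all towers.
        average_grads = []
        # zip(*tower_grads) groups the (grad, var) pairs that belong to
        # the same variable across towers.
        for grad_and_vars in zip(*tower_grads):
            grads = [tf.expand_dims(g, 0) for g, _ in grad_and_vars]
            grad = tf.reduce_mean(tf.concat(grads, axis=0), 0)
            # The variables are shared, so the first tower's reference suffices.
            average_grads.append((grad, grad_and_vars[0][1]))
        return average_grads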

This cannot work, because the network class has only one “self.image” placeholder, so an error of

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'tower_0/Placeholder' with dtype float

will be thrown.
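
As far as I can tell, each tower would need its own set of input placeholders, all fed on every training step. A rough sketch of what I mean (tower_inputs and blobs_per_tower are names I made up; the placeholder shapes just mirror the single-GPU ones):

    # Rough sketch: one set of input placeholders per tower instead of
    # a single self.image; every placeholder is fed in every step.
    tower_inputs = []
    for i in range(2):
        with tf.device("/gpu:" + str(i)), tf.name_scope("tower_" + str(i)):
            image = tf.placeholder(tf.float32, shape=[1, None, None, 3])
            im_info = tf.placeholder(tf.float32, shape=[3])
            gt_boxes = tf.placeholder(tf.float32, shape=[None, 5])
            tower_inputs.append((image, im_info, gt_boxes))
            # ... build this tower's graph on top of these placeholders ...

    # During training, fetch one minibatch per tower and feed them all:
    feed_dict = {}
    for (image, im_info, gt_boxes), blobs in zip(tower_inputs, blobs_per_tower):
        feed_dict[image] = blobs['data']
        feed_dict[im_info] = blobs['im_info']
        feed_dict[gt_boxes] = blobs['gt_boxes']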

Can you give any advice on how to implement a multi-GPU version of this code?

Many thanks.

Issue Analytics

  • State: closed
  • Created 6 years ago
  • Comments: 10 (4 by maintainers)

Top GitHub Comments

13 reactions
ppwwyyxx commented, Oct 13, 2017
0 reactions
Atmegal commented, Sep 14, 2019

It seems like the errors are caused by the nms() used in tf.py_func. When I changed it to py_nms, the errors were resolved. However, the running time increased a lot.
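
For context, the swap described above boils down to wrapping a pure-Python NMS in tf.py_func in place of the compiled CUDA nms() kernel. A minimal sketch, where py_nms is the classic pure-Python NMS and detections are assumed to be (N, 5) rows of [x1, y1, x2, y2, score]:

    import numpy as np
    import tensorflow as tf

    def py_nms(dets, thresh):
        # Classic pure-Python NMS over an (N, 5) array of
        # [x1, y1, x2, y2, score] rows; runs entirely on the CPU.
        x1, y1, x2, y2 = dets[:, 0], dets[:, 1], dets[:, 2], dets[:, 3]
        scores = dets[:, 4]
        areas = (x2 - x1 + 1) * (y2 - y1 + 1)
        order = scores.argsort()[::-1]
        keep = []
        while order.size > 0:
            i = order[0]
            keep.append(i)
            # Intersection of the top-scoring box with the remaining boxes
            xx1 = np.maximum(x1[i], x1[order[1:]])
            yy1 = np.maximum(y1[i], y1[order[1:]])
            xx2 = np.minimum(x2[i], x2[order[1:]])
            yy2 = np.minimum(y2[i], y2[order[1:]])
            w = np.maximum(0.0, xx2 - xx1 + 1)
            h = np.maximum(0.0, yy2 - yy1 + 1)
            inter = w * h
            ovr = inter / (areas[i] + areas[order[1:]] - inter)
            # Keep only boxes whose overlap is below the threshold
            order = order[np.where(ovr <= thresh)[0] + 1]
        return np.array(keep, dtype=np.int64)

    dets = tf.placeholder(tf.float32, shape=[None, 5])
    # tf.py_func runs py_nms in the Python runtime on the CPU, which is
    # why it is slower than the CUDA kernel it replaces.
    keep = tf.py_func(lambda d: py_nms(d, 0.7), [dets], tf.int64)

Dropping the GPU kernel for the Python runtime is exactly the slowdown reported in the comment above.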

Read more comments on GitHub >

Top Results From Across the Web

How to scale training on multiple GPUs
In order to train models in a timely fashion, it is necessary to train them with multiple GPUs. We need to scale training...
Read more >
Multi-GPU Programming with Standard Parallel C++, Part 1
With current compilers, C++ parallel algorithms target single GPUs only and explicit MPI parallelism is needed to target multiple GPUs.
Read more >
How To Build and Use a Multi GPU System for Deep ...
Using dual-GPU cards is relatively straightforward, and most software will support it; however, not so if you have two CPUs. I would recommend to stick...
Read more >
Multi GPU advice | PCSPECIALIST
Depends on the size of the box and the motherboard PCIe slot placement I guess. And the size of the graphics cards i.e....
Read more >
How to trick out your gaming PC with multiple graphics cards
Nvidia SLI and AMD CrossFire (seen here) multi-GPU configurations will work only on compatible motherboards that support the technology.
Read more >
