
Using allow_growth on keras with tensorflow

See original GitHub issue

I’m using Keras with TensorFlow to train a large number of tiny networks (~4 layers, fewer than 30 nodes in each layer). Currently TF allocates all GPU memory to a single process, which prevents me from opening more training processes in parallel. I found in the TF documentation that I can use

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config, ...)

to do this. However, I wasn’t able to integrate that into Keras. Does someone know how to initialize a TF session from Keras? Thank you very much!
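
For context, the same gpu_options block also exposes a hard per-process cap instead of on-demand growth, which is another way to share one GPU between several small training processes. A minimal TF 1.x sketch; the 0.25 fraction is only an illustrative value, not something from this thread:

import tensorflow as tf

config = tf.ConfigProto()
# Cap this process at roughly a quarter of the GPU's memory,
# an alternative to allow_growth when running several processes in parallel.
config.gpu_options.per_process_gpu_memory_fraction = 0.25
session = tf.Session(config=config)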

Issue Analytics

  • State: closed
  • Created: 7 years ago
  • Reactions: 2
  • Comments: 15 (2 by maintainers)

Top GitHub Comments

134 reactions
zoltan-fedor commented, Feb 15, 2018

@vijaycd , if you are still looking for an actual code you can copy-paste into your Keras code to have Tensorflow dynamically allocate the GPU memory:

import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # dynamically grow the memory used on the GPU
config.log_device_placement = True  # to log device placement (on which device the operation ran)
                                    # (nothing gets printed in Jupyter, only if you run it standalone)
sess = tf.Session(config=config)
set_session(sess)  # set this TensorFlow session as the default session for Keras
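
Note for anyone on TensorFlow 2.x, where ConfigProto and Session are no longer the front-door API: the rough equivalent is to enable memory growth on each GPU before any model is built. A minimal sketch, assuming TF 2.x with the bundled tf.keras:

import tensorflow as tf

# Grow GPU memory allocation on demand instead of reserving it all up front.
# This must run before the GPUs are initialized (i.e. before building any model).
for gpu in tf.config.experimental.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)

# Any tf.keras model created after this point uses the growing allocator.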

3 reactions
vijaycd commented, Oct 3, 2017

I am sorry, I am new to Keras. How do I use it in my .py file? I need to have the equivalent of

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.log_device_placement = True

sess = tf.Session(config=config)  # with the two options defined above

with tf.Session(config=config) as sess:
...

Thanks.
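
Putting the top comment above into a complete .py file looks roughly like this. A minimal sketch only, assuming Keras on the TensorFlow 1.x backend, with a made-up toy model and placeholder training data standing in for the poster's tiny networks:

import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
from keras.models import Sequential
from keras.layers import Dense

# Register the configured session with Keras before building any model,
# so every model in this process uses the growing GPU allocator.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
set_session(tf.Session(config=config))

# A tiny network, roughly the size described in the original question.
model = Sequential([
    Dense(30, activation='relu', input_dim=10),
    Dense(30, activation='relu'),
    Dense(1),
])
model.compile(optimizer='adam', loss='mse')
# model.fit(x_train, y_train, epochs=10)  # train as usual with your own data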

Read more comments on GitHub >

Top Results From Across the Web

Using allow_growth memory option in Tensorflow and Keras
It prevents any new GPU process which consumes a GPU memory to be run on the same machine. Example of three processes which...

Use a GPU | TensorFlow Core
Use a GPU · On this page · Setup · Overview · Logging device placement · Manual device placement · Limiting GPU memory...

How to set dynamic memory growth on TF 2.1? - Stack Overflow
With previous versions of tensorflow+keras I was able to set an 'allow_growth' option and view realtime memory usage with nvidia-smi.

How to limit GPU Memory in TensorFlow 2.0 (and 1.x)
When your GPU runs out of memory..! Wanna limit your GPU memory (VRAM) usage in TensorFlow 2.0? You can find a detailed explanation...

How to Manage GPU Resource Utilization in Tensorflow and ...
I'll show you how to keep Tensorflow and Keras from hogging all your VRAM, so that you can run multiple models on the...
