
Question about using image samples and VRAM


I’ve been experimenting with the program all day now and it is very interesting. I keep thousands of wallpapers in a folder, and I thought it would be cool to use one of them as an image sample. I got the CUDA out-of-memory runtime error when I ran the command

imagine "a psychedelic experience." --start-image-path ./fasfa.jpg

I’m fairly sure I’m getting this error because the reference image is 1080p and my 8 GB of VRAM can’t handle rendering/training at that resolution. Is there a way to make the AI interpret the image at a lower resolution, or alternatively to slow the training/rendering down so it can cope with the resolution of the image? I understand this is probably something a 9-year-old could figure out, but I just want to be 100% sure of the capabilities of this program. If there is no such option, could one be added — either to interpret the image at a chosen resolution, or to slow the render speed or change the training rate (or adjust whatever the VRAM is used for) so it can handle high-res images? Or do I just have to crop the image or manually lower its resolution? I tried to tag this issue as a question, but I couldn’t find the option to do so.
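One workaround, independent of any library option, is to resize the wallpaper before passing it as the start image. Below is a minimal sketch of just the aspect-ratio math; fit_within is a hypothetical helper, not part of deep-daze:

```python
def fit_within(width, height, max_side=512):
    """Scale (width, height) down so the longer side is at most max_side,
    preserving aspect ratio. Returns the new (width, height)."""
    longest = max(width, height)
    if longest <= max_side:
        return width, height  # already small enough, leave untouched
    scale = max_side / longest
    return round(width * scale), round(height * scale)

# A 1080p wallpaper scaled down for an 8 GB card:
print(fit_within(1920, 1080))  # → (512, 288)
```

With Pillow installed, something like `Image.open("fasfa.jpg").resize(fit_within(1920, 1080)).save("fasfa_small.jpg")` would then produce a smaller start image (file names taken from the question above).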

Issue Analytics

  • State: open
  • Created: 3 years ago
  • Comments: 9

Top GitHub Comments

3 reactions
afiaka87 commented, Mar 14, 2021

Yeah, no worries! Welcome to Python and the machine learning community!

Assuming you are (a) on Linux, (b) using an Nvidia GPU, and (c) have CUDA installed properly, here’s how I do it.

This part’s a bit annoying, but you should use virtual environments with Python; using your global Python can cause bugs that are really tough to track down.

python3 -m pip install virtualenv
mkdir -p ~/Projects/run_deep_daze
cd ~/Projects/run_deep_daze
python3 -m virtualenv .venv
source .venv/bin/activate
echo "You should be in a clean Python virtual environment now. Packages installed here won't pollute your global Python. Every time you want to work on this project again, you will need to run 'source .venv/bin/activate' again."

Double check that you’re in a virtualenv:

which python
echo "Your python path should have '.venv' in it by now. If it's from /usr/local/bin, /usr/bin, or /bin, you need to run 'source .venv/bin/activate' again inside your project directory."

Important: check your CUDA version with nvidia-smi, and change the numbers in e.g. +cu111 to match your CUDA version. At the time of this post, +cu112 wheels aren’t available, so just use +cu111 if you have CUDA 11.2.

pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html

Finally, install deep-daze:

pip install deep-daze
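After these installs, a quick sanity check can save debugging time later. This is a sketch, not part of the thread’s instructions; check_env is a hypothetical helper that reports whether the interpreter is the virtualenv’s and whether the packages installed above resolve:

```python
import importlib.util
import sys

def check_env(modules=("torch", "deep_daze")):
    """Report virtualenv status and whether each module can be imported."""
    # In a virtualenv, sys.prefix differs from the base interpreter's prefix.
    report = {"in_virtualenv": sys.prefix != getattr(sys, "base_prefix", sys.prefix)}
    for mod in modules:
        report[mod] = importlib.util.find_spec(mod) is not None
    return report

if __name__ == "__main__":
    for key, ok in check_env().items():
        print(f"{key}: {ok}")
```

Run inside the activated .venv, this should report in_virtualenv: True and True for both packages; a False points back at the activation or install step above.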

Okay, now you’re ready to write and run some Python! Here’s a starter file.

To run it, save it as run.py and then run: python run.py

from tqdm import trange

from deep_daze import Imagine

TEXT = 'a female mannequin dressed in a black button - down shirt and white palazzo pants' #@param {type:"string"}
NUM_LAYERS = 44 #@param {type:"number"}
SAVE_EVERY =  20 #@param {type:"number"}
IMAGE_WIDTH = 512 #@param {type:"number"}
SAVE_PROGRESS = True #@param {type:"boolean"}
LEARNING_RATE = 9e-6 #@param {type:"number"}
ITERATIONS = 1050 #@param {type:"number"}
EPOCHS = 8
BATCH_SIZE = 32
GRADIENT_ACCUMULATE_EVERY = 4
model = Imagine(
    text = TEXT,
    num_layers = NUM_LAYERS,
    save_every = SAVE_EVERY,
    image_width = IMAGE_WIDTH,
    lr = LEARNING_RATE,
    iterations = ITERATIONS,
    epochs = EPOCHS,
    save_progress = SAVE_PROGRESS,
    batch_size = BATCH_SIZE,
    gradient_accumulate_every = GRADIENT_ACCUMULATE_EVERY,
    open_folder = False # Set this to True if you want the output folder opened so you can view the files
)

for epoch in trange(EPOCHS, desc = 'epochs'):
    for i in trange(ITERATIONS, desc = 'iteration'):
        model.train_step(epoch, i)

0 reactions
zchris07 commented, Aug 8, 2021

How could I edit this script so that it doesn’t create a new image each time it updates it and only outputs the start and final image?
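I haven’t verified this against deep-daze’s internals, but one untested approach is to decide the save schedule yourself: compute a global step count and save only on the first and last step. Below is a sketch of just the schedule logic, with save_at_step as a hypothetical helper (the actual image writing would still be done by the library):

```python
def save_at_step(step, total_steps):
    """Return True only for the very first and the very last training step."""
    return step == 0 or step == total_steps - 1

# With the constants from the script above:
EPOCHS, ITERATIONS = 8, 1050
total = EPOCHS * ITERATIONS
saves = [s for s in range(total) if save_at_step(s, total)]
print(saves)  # → [0, 8399]
```

In the script above, a simpler shortcut may be to set SAVE_PROGRESS = False, so each save overwrites the same file and only the final image remains on disk; the start image is just your input file, so nothing extra needs saving.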
