Resuming uses more VRAM than starting from scratch with the same parameters
I’m able to run with the following parameters:
MODEL_DIM = 512
TEXT_SEQ_LEN = 256
DEPTH = 64
HEADS = 62
DIM_HEAD = 64
BATCH_SIZE = 16
using the VQGanVAE1024 VAE. However, when I try to resume with these same parameters I hit CUDA out-of-memory errors, and I can’t get past a single step until I lower the batch size to 10. Any ideas why resuming with the same parameters would be more memory intensive?
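For reference, here is a sketch of how those hyperparameters map onto the DALLE constructor, going by the DALLE-pytorch README; num_text_tokens is not stated above, so the README's example value stands in for it:

```python
from dalle_pytorch import DALLE, VQGanVAE1024

vae = VQGanVAE1024()          # pretrained VQGAN VAE with a 1024-token codebook

dalle = DALLE(
    vae = vae,
    dim = 512,                # MODEL_DIM
    num_text_tokens = 10000,  # assumed; not given in the issue
    text_seq_len = 256,       # TEXT_SEQ_LEN
    depth = 64,               # DEPTH
    heads = 62,               # HEADS
    dim_head = 64,            # DIM_HEAD
)
```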
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
The fix looks to be passing map_location='cpu' to torch.load so the checkpoint is loaded onto the CPU rather than the GPU.
Adding this map_location argument got rid of the memory errors for me.
Edit: per the PyTorch docs, torch.load deserializes tensors on the CPU and then moves them to the device they were saved from, so a checkpoint saved on GPU is loaded straight back into VRAM unless map_location overrides it.
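A minimal sketch of the change, assuming the checkpoint is a dict holding the model weights as in DALLE-pytorch's training script (the path and the 'weights' key here are illustrative):

```python
import torch

CHECKPOINT_PATH = './dalle.pt'  # illustrative path

# Default behavior: tensors are deserialized on the CPU, then moved back to
# the device they were saved from, so a GPU-saved checkpoint sits in VRAM
# alongside the freshly built model while resuming.
# loaded = torch.load(CHECKPOINT_PATH)

# Fix: keep the deserialized tensors in host RAM; the weights only reach the
# GPU later, when the rebuilt model itself is moved there.
loaded = torch.load(CHECKPOINT_PATH, map_location='cpu')

weights = loaded['weights']  # assumed checkpoint key
```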
@lucidrains here ya go 😃 https://github.com/lucidrains/DALLE-pytorch/pull/118
thanks @awilson9