RuntimeError: CUDA error: misaligned address on 2080ti when using ViTL14
I'm aware a 2080ti is not optimal for that model, but I'm not getting a CUDA out-of-memory error, except on the more demanding ViTL models. RN50x16 runs fine, but regardless of which other models are loaded, I get a misaligned address error whenever ViTL14 is used. Apart from trying different model combinations and reinstalling PyTorch, I haven't tried anything else.
```
All image(s) finished.
Traceback (most recent call last):
File "prd.py", line 2809, in <module>
do_run(batch_image)
File "prd.py", line 1583, in do_run
for j, sample in enumerate(samples):
File "q:\progrockdiffusion\progrockdiffusion\guided-diffusion\guided_diffusion\gaussian_diffusion.py", line 900, in ddim_sample_loop_progressive
eta=eta,
File "q:\progrockdiffusion\progrockdiffusion\guided-diffusion\guided_diffusion\gaussian_diffusion.py", line 674, in ddim_sample
out = self.condition_score(cond_fn, out_orig, x, t, model_kwargs=model_kwargs)
File "q:\progrockdiffusion\progrockdiffusion\guided-diffusion\guided_diffusion\respace.py", line 102, in condition_score
return super().condition_score(self._wrap_model(cond_fn), *args, **kwargs)
File "q:\progrockdiffusion\progrockdiffusion\guided-diffusion\guided_diffusion\gaussian_diffusion.py", line 400, in condition_score
x, self._scale_timesteps(t), **model_kwargs
File "q:\progrockdiffusion\progrockdiffusion\guided-diffusion\guided_diffusion\respace.py", line 128, in __call__
return self.model(x, new_ts, **kwargs)
File "prd.py", line 1483, in cond_fn
prompt_grad = torch.autograd.grad(clip_losses.sum() * args.clip_guidance_scale[1000 - t_int], x_in)[0] / args.cutn_batches[1000 - t_int]
File "Q:\Users\my_user_name\anaconda3\envs\progrockdiffusion\lib\site-packages\torch\autograd\__init__.py", line 236, in grad
inputs, allow_unused, accumulate_grad=False)
RuntimeError: CUDA error: misaligned address
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "prd.py", line 2944, in <module>
torch.cuda.empty_cache()
File "Q:\Users\my_user_name\anaconda3\envs\progrockdiffusion\lib\site-packages\torch\cuda\memory.py", line 114, in empty_cache
torch._C._cuda_emptyCache()
RuntimeError: CUDA error: misaligned address
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
```
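As the error message notes, CUDA kernel errors are reported asynchronously, so the traceback above may not point at the call that actually faulted. A minimal debugging sketch, following the message's own suggestion (the `prd.py` invocation is illustrative, taken from the traceback above):

```shell
# CUDA kernels launch asynchronously, so a kernel error often surfaces at a
# later, unrelated API call (here it resurfaced inside torch.cuda.empty_cache).
# Forcing synchronous launches makes the Python traceback point at the kernel
# that actually faulted. Slow; use for debugging only.
export CUDA_LAUNCH_BLOCKING=1
echo "CUDA_LAUNCH_BLOCKING=$CUDA_LAUNCH_BLOCKING"

# Then re-run the failing script, e.g.:
# python prd.py
```

With synchronous launches, the re-run should fail with a traceback whose last frame is the operation that triggered the misaligned access, which narrows the problem down to a specific model or tensor.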
Issue Analytics
- State:
- Created a year ago
- Comments: 11 (6 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Excellent!
This finally got it running for me as well. Thanks!!