Training v2 text encoder on free colab gives an error
I'm training v2 DreamBooth at 768x (but the error also occurs at 512x).
After the first step of text encoder training it gives this:
Sorry for the format; it disappears fast.
I tried again about 20 times, but got the same error every time.
Off topic:
When I try to train TI with v2 at 768x, I get this:
Training at rate of 0.005 until step 500
Preparing dataset...
100% 20/20 [00:17<00:00, 1.11it/s]
0% 0/500 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 328, in train_embedding
    loss = shared.sd_model(x, c)[0] / gradient_step
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 846, in forward
    return self.p_losses(x, c, t, *args, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 903, in p_losses
    logvar_t = self.logvar[t].to(self.device)
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
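For reference, the crash happens because `self.logvar` in `ldm/models/diffusion/ddpm.py` is a CPU tensor, while the timestep index `t` lives on the GPU, and recent PyTorch versions refuse to index across devices. A minimal sketch of the mismatch and the usual style of workaround (this is an illustration, not the exact patch used by the webui or in the linked fix):

```python
# Sketch of the device mismatch behind the RuntimeError above.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

logvar = torch.zeros(1000)                          # kept on the CPU, as in ddpm.py
t = torch.randint(0, 1000, (4,), device=device)     # timesteps sampled on the GPU

# On newer PyTorch, `logvar[t]` raises
# "indices should be either on cpu or on the same device as the indexed tensor"
# whenever t is a CUDA tensor and logvar is a CPU tensor.

# Typical workaround: index with a CPU copy of t (or move logvar to the GPU once),
# then put the result on the device the model expects.
logvar_t = logvar[t.cpu()].to(device)
print(logvar_t.shape)  # torch.Size([4])
```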
Maybe this will help you:
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/5523#issuecomment-1343041303
The TI problem is an Automatic1111 problem.
The CUDA error is probably something Ben can solve.