
NameError: name 'str2optimizer8bit_blockwise' is not defined

See original GitHub issue

Describe the bug

Trying to migrate the Colab scripts to RunPod. I followed along and then ran into this error (at the Run Training section).

Reproduction

Running the Colab .ipynb scripts in JupyterLab. First it said num_processes was undefined, so I specified num_processes=1 in the Run Training section (see the sketch below); then it returned this error.
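
For reference, the Run Training cell after that change looks roughly like this (a sketch; training_function, text_encoder, vae, and unet come from earlier notebook cells, as echoed in the traceback below):

#@title Run training
import accelerate

# Launch training from inside the notebook; num_processes must be passed
# explicitly here (1 = single GPU on this RunPod instance).
accelerate.notebook_launcher(
    training_function,               # defined in an earlier cell
    args=(text_encoder, vae, unet),  # models prepared in earlier cells
    num_processes=1,
)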

Logs

--------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
/tmp/ipykernel_10686/1781923440.py in <module>
      1 #@title Run training
      2 import accelerate
----> 3 accelerate.notebook_launcher(training_function, num_processes=1, args=(text_encoder, vae, unet))
      4 with torch.no_grad():
      5     torch.cuda.empty_cache()

/opt/conda/lib/python3.7/site-packages/accelerate/launchers.py in notebook_launcher(function, args, num_processes, use_fp16, mixed_precision, use_port)
    132                 print("Launching training on CPU.")
    133             with patch_environment(use_mps_device=use_mps_device):
--> 134                 function(*args)
    135 
    136 

/tmp/ipykernel_10686/1900933013.py in training_function(text_encoder, vae, unet)
    134                 if accelerator.sync_gradients:
    135                     accelerator.clip_grad_norm_(unet.parameters(), args.max_grad_norm)
--> 136                 optimizer.step()
    137                 optimizer.zero_grad()
    138 

/opt/conda/lib/python3.7/site-packages/accelerate/optimizer.py in step(self, closure)
    138                 self._is_overflow = scale_after < scale_before
    139             else:
--> 140                 self.optimizer.step(closure)
    141 
    142     def _switch_parameters(self, parameters_map):

/opt/conda/lib/python3.7/site-packages/torch/optim/optimizer.py in wrapper(*args, **kwargs)
    107                 profile_name = "Optimizer.step#{}.step".format(obj.__class__.__name__)
    108                 with torch.autograd.profiler.record_function(profile_name):
--> 109                     return func(*args, **kwargs)
    110             return wrapper
    111 

/opt/conda/lib/python3.7/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
     25         def decorate_context(*args, **kwargs):
     26             with self.clone():
---> 27                 return func(*args, **kwargs)
     28         return cast(F, decorate_context)
     29 

/opt/conda/lib/python3.7/site-packages/bitsandbytes/optim/optimizer.py in step(self, closure)
    263                     self.init_state(group, p, gindex, pindex)
    264 
--> 265                 self.update_step(group, p, gindex, pindex)
    266 
    267         return loss

/opt/conda/lib/python3.7/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
     25         def decorate_context(*args, **kwargs):
     26             with self.clone():
---> 27                 return func(*args, **kwargs)
     28         return cast(F, decorate_context)
     29 

/opt/conda/lib/python3.7/site-packages/bitsandbytes/optim/optimizer.py in update_step(self, group, p, gindex, pindex)
    521                 config["weight_decay"],
    522                 gnorm_scale=gnorm_scale,
--> 523                 skip_zeros=config["skip_zeros"],
    524             )
    525 

/opt/conda/lib/python3.7/site-packages/bitsandbytes/functional.py in optimizer_update_8bit_blockwise(optimizer_name, g, p, state1, state2, beta1, beta2, eps, step, lr, qmap1, qmap2, absmax1, absmax2, weight_decay, gnorm_scale, skip_zeros)
    856 
    857     if g.dtype == torch.float32 and state1.dtype == torch.uint8:
--> 858         str2optimizer8bit_blockwise[optimizer_name][0](
    859             get_ptr(p),
    860             get_ptr(g),

NameError: name 'str2optimizer8bit_blockwise' is not defined
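
A note on the failure mode: in the bitsandbytes releases from this period, str2optimizer8bit_blockwise is only defined in functional.py when the compiled CUDA library loads at import time, roughly like the paraphrased sketch below (not the verbatim source):

# Paraphrased shape of the guard in bitsandbytes/functional.py:
# the optimizer lookup table only exists if the native CUDA library
# was found and loaded when bitsandbytes was imported.
if COMPILED_WITH_CUDA:
    str2optimizer8bit_blockwise = {
        "adam": (lib.cadam_8bit_blockwise_fp32, lib.cadam_8bit_blockwise_fp16),
        # ... other optimizers ...
    }

So a broken CUDA setup does not fail at import; it surfaces later as this NameError on the first optimizer.step().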

System Info

Ubuntu 20.04 in a Docker container on RunPod

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 8 (5 by maintainers)

Top GitHub Comments

1 reaction
rexroth0619 commented on Nov 1, 2022

Hi team, thanks for chiming in.

I ended up figuring it out myself. Since I’m running a Docker container on RunPod, bitsandbytes wasn’t able to find CUDA under the usual install directory (/usr), because it’s installed in the base conda environment (under /opt/conda/lib). Pointing it to that folder let the code run.
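
To spell that out: bitsandbytes needs to be able to locate libcudart.so before it is imported. A minimal sketch of the workaround, assuming the conda environment ships the CUDA runtime under /opt/conda/lib (that path is an assumption; adjust it for your image):

import os

# Prepend the conda lib directory so bitsandbytes' CUDA setup can locate
# libcudart.so. This must run before the first `import bitsandbytes`;
# restart the kernel if bitsandbytes was already imported.
os.environ["LD_LIBRARY_PATH"] = (
    "/opt/conda/lib:" + os.environ.get("LD_LIBRARY_PATH", "")
)

import bitsandbytes  # CUDA setup should now report success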

0 reactions
sandys commented on Dec 12, 2022

Fixed this. In case anyone else has the same issue, they can refer to my notebook: https://github.com/sandys/SimpleDiffuserDreambooth/blob/main/DreamBooth_Stable_Diffusion.ipynb

It works on vast.ai and RunPod.

Read more comments on GitHub.
