
torch.nn.Module classes cannot be used in Pipeline

See original GitHub issue

I tried to add color jittering augmentation to the ImageNet training by inserting the line torchvision.transforms.ColorJitter(.4, .4, .4) right after RandomHorizontalFlip, but got this error:

numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
Failed in nopython mode pipeline (step: nopython frontend)
Untyped global name 'self': Cannot determine Numba type of <class 'ffcv.transforms.module.ModuleWrapper'>

File "../ffcv/ffcv/transforms/module.py", line 25:
        def apply_module(inp, _):
            res = self.module(inp)
            ^

During: resolving callee type: type(CPUDispatcher(<function ModuleWrapper.generate_code.<locals>.apply_module at 0x7f921d4c98b0>))
During: typing of call at  (2)



File "/home/chengxuz/ffcv-imagenet", line 2:
<source missing, REPL/exec in use?>


Any idea on what’s happening here and how to fix this?
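For context, the change that triggers the error looks roughly like the sketch below (a hypothetical reconstruction from the report and traceback, not the exact training script; the loader setup and the other pipeline stages are elided). Because ColorJitter is a torch.nn.Module rather than an FFCV operation, FFCV wraps it in ModuleWrapper, and the generated apply_module closes over `self` — a Python object that Numba's nopython mode cannot type, which is exactly what the TypingError says.

```python
# Hypothetical sketch of the failing pipeline (not the exact script).
import torchvision

image_pipeline = [
    ...,  # FFCV decoder and FFCV-native augmentations
    RandomHorizontalFlip(),
    torchvision.transforms.ColorJitter(.4, .4, .4),  # <- wrapped in ModuleWrapper; triggers the TypingError
    ...,  # tensor conversion / normalization stages
]
```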

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 14 (4 by maintainers)

Top GitHub Comments

1 reaction
andrewilyas commented, Jan 26, 2022

@vturrisi FFCV also has per-image randomness in its augmentations (so I think the only augmentations that don’t support this are the torchvision ones).

Since it looks like all the FFCV-related problems here are solved, I’ll close this issue for now—feel free to re-open if there’s anything we missed!
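To illustrate what "per-image randomness" means here, the sketch below (plain numpy, not FFCV's actual API) draws an independent random factor for every image in a batch rather than one factor for the whole batch — the behavior FFCV's native augmentations provide and batch-level torchvision wrapping does not. The function name and parameters are illustrative assumptions.

```python
import numpy as np

def per_image_brightness(batch, low=0.6, high=1.4, rng=None):
    """Illustrative sketch: scale each image in an NHWC batch by its own
    independently drawn brightness factor."""
    rng = np.random.default_rng() if rng is None else rng
    n = batch.shape[0]
    # One factor per image, broadcast over the H, W, C axes.
    factors = rng.uniform(low, high, size=(n, 1, 1, 1)).astype(batch.dtype)
    return batch * factors

batch = np.ones((4, 8, 8, 3), dtype=np.float32)
out = per_image_brightness(batch, rng=np.random.default_rng(0))
```

Each image ends up scaled by a different factor, whereas a single batch-level draw would apply the same factor everywhere.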

1 reaction
andrewilyas commented, Jan 22, 2022

Memory is only pre-allocated for FFCV transforms, so the torchvision transforms there are probably allocating memory at each iteration. Rewriting the torchvision transform as an FFCV one will fix this!
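The allocate-once pattern being described can be sketched as follows (plain numpy, not FFCV's real Operation API; the class and method names are assumptions). The destination buffer is created a single time when the operation is built, and every call writes into it in place — in contrast to a typical torchvision transform, which allocates a fresh output array on every batch.

```python
import numpy as np

class PreallocatedScale:
    """Illustrative sketch of a pre-allocated transform: the output buffer
    is created once and reused for every batch."""

    def __init__(self, batch_shape, dtype=np.float32):
        self.dst = np.empty(batch_shape, dtype=dtype)  # allocated exactly once

    def __call__(self, batch, factor):
        # Write the result into the existing buffer; no per-call allocation.
        np.multiply(batch, factor, out=self.dst)
        return self.dst

op = PreallocatedScale((4, 8, 8, 3))
batch = np.ones((4, 8, 8, 3), dtype=np.float32)
out1 = op(batch, 2.0)
out2 = op(batch, 3.0)  # same buffer as out1, overwritten in place
```

Because successive calls return the same buffer, a consumer must copy the result before the next call if it needs to keep it — the trade-off FFCV makes to avoid per-iteration allocations.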
