torch.nn.Module classes cannot be used in Pipeline
See original GitHub issue.

I tried to add a color-jittering augmentation to ImageNet training by inserting the line torchvision.transforms.ColorJitter(.4, .4, .4) right after RandomHorizontalFlip, but hit this error:
numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
Failed in nopython mode pipeline (step: nopython frontend)
Untyped global name 'self': Cannot determine Numba type of <class 'ffcv.transforms.module.ModuleWrapper'>
File "../ffcv/ffcv/transforms/module.py", line 25:
def apply_module(inp, _):
res = self.module(inp)
^
During: resolving callee type: type(CPUDispatcher(<function ModuleWrapper.generate_code.<locals>.apply_module at 0x7f921d4c98b0>))
During: typing of call at (2)
During: resolving callee type: type(CPUDispatcher(<function ModuleWrapper.generate_code.<locals>.apply_module at 0x7f921d4c98b0>))
During: typing of call at (2)
File "/home/chengxuz/ffcv-imagenet", line 2:
<source missing, REPL/exec in use?>
Any idea on what’s happening here and how to fix this?
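For context, this is the pattern Numba's nopython mode rejects: FFCV compiles each transform's generated function with Numba, and a function that closes over `self` (an arbitrary Python object, here ffcv's ModuleWrapper) cannot be typed. Below is a minimal sketch of the failing closure shape versus the Numba-friendly alternative used by FFCV's own transforms, which hoists attributes into plain locals before defining the inner function. The class names are illustrative, not FFCV's, and Numba itself is not imported here; the failure mode is noted in comments.

```python
import numpy as np

class BadWrapper:
    # Mirrors the shape of ffcv's ModuleWrapper: the generated
    # function references `self`, so Numba's nopython frontend
    # cannot infer a type for it ("Untyped global name 'self'").
    def __init__(self, scale):
        self.scale = scale

    def generate_code(self):
        def apply(inp, _):
            return inp * self.scale   # closes over `self` -> TypingError under @njit
        return apply

class GoodTransform:
    # Numba-friendly pattern: copy attributes into plain locals
    # first, so the inner function closes over simple values
    # (floats, ints, arrays) that Numba can type.
    def __init__(self, scale):
        self.scale = scale

    def generate_code(self):
        scale = self.scale            # plain float, typable by Numba
        def apply(inp, _):
            return inp * scale
        return apply

fn = GoodTransform(2.0).generate_code()
out = fn(np.ones((2, 2), dtype=np.float32), None)
```

Note that hoisting alone would not rescue ColorJitter: even without the `self` reference, Numba cannot compile calls into a torch.nn.Module, which is why the maintainer comments suggest either moving torch modules after ToTensor or rewriting the transform with NumPy.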
Issue Analytics
- Created 2 years ago
- Comments: 14 (4 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@vturrisi FFCV also has per-image randomness in its augmentations (so I think the only augmentations that don’t support this are the torchvision ones).
Since it looks like all the FFCV-related problems here are solved, I’ll close this issue for now—feel free to re-open if there’s anything we missed!
Memory is only pre-allocated for FFCV transforms, so the torchvision transforms there are probably allocating memory at each iteration. Rewriting the torchvision transform as an FFCV one will fix this!
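A hedged sketch of that suggestion: implement the jitter as an FFCV-style transform whose generated function uses only NumPy and plain locals, writes into pre-allocated memory, and draws per-image randomness inside the function. The class below only follows the shape of FFCV's generate_code interface and is a standalone stand-in so it runs without ffcv installed; in real code you would subclass ffcv.pipeline.operation.Operation, implement its memory-declaration method, and let the pipeline Numba-compile the inner function.

```python
import numpy as np

class RandomBrightness:
    """Stand-in for an FFCV transform: per-image brightness jitter.

    Illustrative only; the real FFCV base class and memory
    declaration are omitted so this sketch is self-contained.
    """
    def __init__(self, magnitude=0.4):
        self.magnitude = magnitude

    def generate_code(self):
        # Hoist the attribute into a plain local: the inner function
        # must not reference `self`, or Numba's typing fails.
        mag = self.magnitude

        def brightness(images, dst):
            # images: (N, H, W, C) uint8 batch; dst: pre-allocated output
            for i in range(images.shape[0]):
                # Per-image random factor in [1 - mag, 1 + mag]
                factor = 1.0 + mag * (2.0 * np.random.rand() - 1.0)
                scaled = images[i].astype(np.float32) * factor
                dst[i] = np.clip(scaled, 0, 255).astype(np.uint8)
            return dst

        return brightness

# Usage: apply to a dummy batch with a pre-allocated destination,
# mirroring FFCV's pattern of reusing buffers instead of allocating
# fresh memory on every iteration.
batch = np.full((4, 8, 8, 3), 128, dtype=np.uint8)
dst = np.empty_like(batch)
out = RandomBrightness(0.4).generate_code()(batch, dst)
```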