
Error when using nn.UninitializedParameter


Describe the bug
A ValueError is raised when trying to call nelement(), an operation that is unavailable on an UninitializedParameter.

The summary function iterates over all modules in the model and tries to count each one's parameters, but that is not possible for an UninitializedParameter.

To Reproduce

import torch
import torch.nn as nn
import torchinfo


class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # Placeholder parameter; only materialized later in forward()
        self.param = nn.UninitializedParameter()

    def init_param(self):
        self.param = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        self.init_param()
        return x

net = Net()
torchinfo.summary(net, input_size=(1, 1))

Output
First part of the stack trace:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
~\miniconda3\envs\cudalab\lib\site-packages\torchinfo\torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs)
    260             if isinstance(x, (list, tuple)):
--> 261                 _ = model.to(device)(*x, **kwargs)
    262             elif isinstance(x, dict):

~\miniconda3\envs\cudalab\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
   1108             for hook in (*_global_forward_pre_hooks.values(), *self._forward_pre_hooks.values()):
-> 1109                 result = hook(self, input)
   1110                 if result is not None:

~\miniconda3\envs\cudalab\lib\site-packages\torchinfo\torchinfo.py in pre_hook(***failed resolving arguments***)
    457         info = LayerInfo(var_name, module, curr_depth, idx[curr_depth], parent_info)
--> 458         info.calculate_num_params()
    459         info.check_recursive(summary_list)

~\miniconda3\envs\cudalab\lib\site-packages\torchinfo\layer_info.py in calculate_num_params(self)
    125         for name, param in self.module.named_parameters():
--> 126             self.num_params += param.nelement()
    127             if param.requires_grad:

~\miniconda3\envs\cudalab\lib\site-packages\torch\nn\parameter.py in __torch_function__(cls, func, types, args, kwargs)
    120             return super().__torch_function__(func, types, args, kwargs)
--> 121         raise ValueError(
    122             'Attempted to use an uninitialized parameter in {}. '

ValueError: Attempted to use an uninitialized parameter in <method 'numel' of 'torch._C._TensorBase' objects>. This error happens when you are using a `LazyModule` or explicitly manipulating `torch.nn.parameter.UninitializedParameter` objects. When using LazyModules Call `forward` with a dummy batch to initialize the parameters before calling torch functions
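
The error message also points at LazyModules. PyTorch's built-in lazy modules (e.g. nn.LazyLinear) hold UninitializedParameters until their first forward pass, so the same pre-hook failure presumably occurs for them too. A minimal sketch, not verified against torchinfo 1.6.0:

import torch.nn as nn
import torchinfo

# nn.LazyLinear's weight and bias are UninitializedParameters until forward() runs,
# so torchinfo's parameter-counting pre-hook would hit the same ValueError.
lazy_net = nn.Sequential(nn.LazyLinear(10))
torchinfo.summary(lazy_net, input_size=(1, 4))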

Expected behavior
torchinfo should check whether a parameter is an instance of UninitializedParameter and skip calls to operations that are unavailable on it.

It would still be nice to indicate this somehow in the printed summary table (maybe as 'uninitialized'?).
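
A minimal sketch of the kind of guard being requested here; the loop mirrors calculate_num_params from the traceback, but the surrounding torchinfo internals are simplified and the helper name is made up for illustration:

import torch.nn as nn

def count_params(module: nn.Module) -> int:
    # Hypothetical helper: skip parameters that are still uninitialized,
    # since calling nelement() on them raises a ValueError.
    total = 0
    for _, param in module.named_parameters():
        if isinstance(param, nn.UninitializedParameter):
            continue  # could instead be reported as 'uninitialized' in the summary table
        total += param.nelement()
    return total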

Context:

  • OS: Windows 10
  • Python: 3.9.7
  • PyTorch: 1.10.1 (py3.9_cpu_0)
  • torchinfo: 1.6.0

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 11 (8 by maintainers)

Top GitHub Comments

1 reaction
notjedi commented, Mar 15, 2022

It's on me. I'll do a PR if it's okay with you, @TylerYep? Btw, do let me know if you know of any other nn class that would cause the same issue.

I guess I can probably do a hasattr check before calling nelement(). What do you think?

EDIT: on second thought, I guess that would not work, because it seems like the class does have an nelement method (haven't checked, but that's most probably the case). Can we do an isinstance check? Or do you have any other ideas?
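
For illustration only (not code from the thread): the attribute lookup does succeed while the call fails, so a hasattr check would not help, whereas an isinstance check does distinguish the uninitialized case:

import torch.nn as nn

p = nn.UninitializedParameter()
print(hasattr(p, "nelement"))                    # True: the method is inherited from the Tensor base class
print(isinstance(p, nn.UninitializedParameter))  # True: reliably identifies the uninitialized case
# p.nelement()  # raises ValueError: "Attempted to use an uninitialized parameter ..."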

0 reactions
TylerYep commented, May 28, 2022

I added a new commit 8cf0ab26d503f0786b3733be4caad021defc7236 that fills in the parameters for lazy modules (using PyTorch's is_lazy function). Thank you @notjedi for contributing the first fix and @vladvrabie for submitting the issue!

This change will be included in torchinfo v1.7.0.
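
For reference, a small sketch of what the is_lazy helper reports before and after initialization; the actual torchinfo commit goes further and fills in the parameter counts once they are known, which is not reproduced here:

import torch
import torch.nn as nn
from torch.nn.parameter import is_lazy

layer = nn.LazyLinear(10)                           # weight and bias start uninitialized
print(any(is_lazy(p) for p in layer.parameters()))  # True before the first forward pass
layer(torch.zeros(1, 4))                            # a dummy batch materializes the parameters
print(any(is_lazy(p) for p in layer.parameters()))  # False afterwards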
