Lazy initialization of MelScale.fb throws when loading module
Lazy initialization refers to the declaration and the later in-place initialization of MelScale.fb (both linked in the original issue). When loading a model saved with torch.save(), this causes a problem:
size mismatch for reduction.mel_scale.fb: copying a param with shape torch.Size([41, 16]) from checkpoint, the shape in current model is torch.Size([0]).
The problem can be sidestepped by running the model at least once before loading the weights, but I’d rather not rely on such a trick.
Why is this being lazily initialized anyway?
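For concreteness, here is a minimal sketch of the failure and the warm-up workaround. The shapes (41 frequency bins, 16 mel bands) match the error above; the checkpoint path and the commented load calls are hypothetical, and the lazy behavior shown reflects the torchaudio version discussed in this issue.

```python
import torch
from torchaudio.transforms import MelScale

# Without n_stft, fb is registered as an empty buffer and only gets its
# real shape on the first forward pass.
model = MelScale(n_mels=16)
print(model.fb.shape)  # torch.Size([0])

# Loading a checkpoint saved from an already-initialized instance fails
# here, because the saved fb has shape [41, 16] while this model's fb is
# still torch.Size([0]):
#   model.load_state_dict(torch.load("melscale.pt"))  # hypothetical path

# Workaround: run the transform once so fb is materialized, then load.
dummy = torch.rand(1, 41, 10)  # (channel, n_stft, time)
_ = model(dummy)
print(model.fb.shape)  # torch.Size([41, 16]) -- load_state_dict now succeeds
```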
Lazy evaluation means that we do not have to specify any dimensions when defining MelScale; they are inferred at evaluation time from the input the transform is applied to. Is that right? That seems like a nice feature for MelScale to have.
However, for MelSpectrogram we already know the size of the matrix at initialization, since n_fft determines the number of frequency bins, so we could build it at that time. Is that correct? If so, we could make that the default behavior, since waiting gives us no additional information.
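A sketch of the distinction, using the n_stft constructor argument that MelScale accepts (exact defaults may vary by torchaudio version): passing the known spectrogram size at construction builds fb eagerly, while omitting it leaves the buffer empty until the first forward call.

```python
from torchaudio.transforms import MelScale

# Eager: the number of frequency bins is known up front, so fb is built at
# construction and state_dict shapes match without a warm-up pass. This is
# the situation MelSpectrogram is in, since n_fft fixes n_stft = n_fft//2 + 1.
eager = MelScale(n_mels=16, n_stft=41)
print(eager.fb.shape)  # torch.Size([41, 16])

# Lazy: fb stays empty until the transform sees its first input.
lazy = MelScale(n_mels=16)
print(lazy.fb.shape)  # torch.Size([0])
```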
Closing this: as of #246, we can load MelSpectrogram from a state_dict, as shown in this code snippet: https://github.com/pytorch/audio/issues/245#issuecomment-522602035
We can also load MelScale from a state_dict (see test_melscale_load_save and test_melspectrogram_load_save); a round trip along those lines is sketched below.
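The linked snippet is not reproduced here, but the save/load round trip that #246 enables looks roughly like this; the file name is hypothetical. Because MelSpectrogram knows n_fft at construction, fb is built eagerly and the saved and freshly constructed shapes agree.

```python
import torch
from torchaudio.transforms import MelSpectrogram

# n_fft=80 gives n_stft = 80 // 2 + 1 = 41 frequency bins, as in the error above.
original = MelSpectrogram(n_fft=80, n_mels=16)
torch.save(original.state_dict(), "melspec.pt")  # hypothetical path

restored = MelSpectrogram(n_fft=80, n_mels=16)
restored.load_state_dict(torch.load("melspec.pt"))  # shapes match, no error
```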
We concluded that inferring the matrix size at runtime for MelScale (i.e., lazy initialization) is a nice property to have, as long as it does not impede other use cases. Feel free to reopen if anything remains unaddressed.