[BUG] profiler crashes when alpha argument is set in torch.add()
Describe the bug
The profiler crashes when the alpha argument is set in torch.add().
To Reproduce
import torch
from deepspeed.profiling.flops_profiler import FlopsProfiler

class MyModel(torch.nn.Module):
    def forward(self, input):
        return torch.add(input, input, alpha=5)

model = MyModel()
prof = FlopsProfiler(model)
prof.start_profile()
model(torch.ones([10]))
flops = prof.get_total_flops(as_string=True)
Expected behavior
A correct FLOPs profile, with no crash.
Issue Analytics
- State:
- Created 2 years ago
- Comments: 5 (2 by maintainers)
Top GitHub Comments
Up to you, of course. The big issue was the crash; a little inaccuracy in the estimation doesn't make a big difference. But I would think the calculation would be pretty simple: if alpha == 1, count element_wise; else, count element_wise + size(second tensor), since it would be multiplying the second tensor by alpha and then adding the results.
@sbrody18, the flops computation is an estimation, and we prefer to underestimate rather than overcount. Since the default value of alpha is 1, I think we can keep the counting the same as for other element-wise operators such as mul.
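The counting rule proposed in the comments above can be sketched as a standalone helper. This is a hypothetical illustration, not DeepSpeed's actual hook: the function name `add_flops` and its signature are assumptions, and it only mirrors the "element-wise, plus one multiply per element of the second tensor when alpha != 1" estimate discussed here.

```python
import math
import torch

def add_flops(x, y, alpha=1):
    """Hypothetical flop estimate for torch.add(x, y, alpha=alpha).

    One add per element of the broadcasted output; a non-default
    alpha first scales the second operand, costing one extra
    multiply per element of that tensor.
    """
    out_numel = math.prod(torch.broadcast_shapes(x.shape, y.shape))
    flops = out_numel
    if alpha != 1:
        flops += y.numel()
    return flops

# For the repro above: torch.add(input, input, alpha=5) on a
# 10-element tensor would count 10 adds + 10 multiplies = 20.
print(add_flops(torch.ones([10]), torch.ones([10]), alpha=5))
```

With alpha left at its default of 1 the helper degenerates to the plain element-wise count, matching the maintainer's choice to treat add like mul in that case.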