
torch.nn.Sequential throws exception from test project

See original GitHub issue

With a simple call to Sequential:

var a = torch.nn.Sequential();

it works when called from a unit test, but when called from the test project it throws an exception.
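
For context, here is a minimal sketch of the failing call site as it can be reconstructed from the stack trace below; the WPF window class, handler name, and namespace are assumptions taken from the trace, not the reporter's actual code.

using System.Windows;
using TorchSharp;

namespace TorchSharpTest
{
    public partial class MainWindow : Window
    {
        // Handler name taken from the stack trace (button3_Click); the body is
        // just the single Sequential call that reportedly throws inside the WPF
        // test project but succeeds when run from a unit test.
        private void button3_Click(object sender, RoutedEventArgs e)
        {
            var a = torch.nn.Sequential();
        }
    }
}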

System.OverflowException: Arithmetic operation resulted in an overflow.
   at System.IntPtr.op_Explicit(IntPtr value)
   at TorchSharp.PinnedArray`1.CreateArray(IntPtr length)
   at TorchSharp.PInvoke.LibTorchSharp.THSNN_Module_get_named_parameters(HType module, AllocatePinnedArray allocator1, AllocatePinnedArray allocator2)
   at TorchSharp.torch.nn.Module._named_parameters()
   at TorchSharp.torch.nn.Module..ctor(IntPtr handle, Nullable`1 boxedHandle, Boolean ownsHandle)
   at TorchSharp.torch.nn.Module`2..ctor(IntPtr handle, IntPtr boxedHandle)
   at TorchSharp.Modules.Sequential..ctor(IntPtr handle)
   at TorchSharp.torch.nn.Sequential()
   at TorchSharpTest.MainWindow.button3_Click(Object sender, RoutedEventArgs e) in D:\Code\TorchSharpTest\TorchSharpTest\TorchSharpTest\MainWindow.xaml.cs:line 157
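
The overflow surfaces in the explicit IntPtr-to-Int32 conversion visible in the trace (System.IntPtr.op_Explicit inside PinnedArray`1.CreateArray). The following standalone sketch reproduces that failure mode, assuming the native callback hands back a length that does not fit in 32 bits; the value used here is made up, not the one actually observed.

using System;

class OverflowSketch
{
    static void Main()
    {
        // A 64-bit length that cannot fit in Int32, standing in for a corrupted
        // size_t coming back across the P/Invoke boundary (hypothetical value).
        IntPtr bogusLength = new IntPtr(long.MaxValue);

        // On a 64-bit process, the explicit IntPtr -> int conversion
        // (System.IntPtr.op_Explicit in the stack trace) throws
        // System.OverflowException for values outside the Int32 range.
        int count = (int)bogusLength;
        Console.WriteLine(count);
    }
}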

Trying to access the variable in the native function still gets an exception:

void THSNN_Module_get_named_parameters(const NNModule module, Tensor* (*allocator1)(size_t length), const char** (*allocator2)(size_t length))
{
    // allocator1/allocator2 are callbacks into managed code that allocate and
    // pin the output arrays for the tensor handles and parameter names.
    auto parameters = (*module)->named_parameters();
    auto size = parameters.size(); // try it here
    Tensor* result1 = allocator1(parameters.size());
    const char** result2 = allocator2(parameters.size());

    for (size_t i = 0; i < parameters.size(); i++)
    {
        result1[i] = ResultTensor(parameters[i].value());
        result2[i] = make_sharable_string(parameters[i].key());
    }
}
System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
   at TorchSharp.PInvoke.LibTorchSharp.THSNN_Module_get_named_parameters(HType module, AllocatePinnedArray allocator1, AllocatePinnedArray allocator2)
   at TorchSharp.torch.nn.Module._named_parameters()
   at TorchSharp.torch.nn.Module..ctor(IntPtr handle, Nullable`1 boxedHandle, Boolean ownsHandle)
   at TorchSharp.torch.nn.Module`2..ctor(IntPtr handle, IntPtr boxedHandle)
   at TorchSharp.Modules.Sequential..ctor(IntPtr handle)
   at TorchSharp.torch.nn.Sequential()
   at TorchSharpTest.MainWindow.button3_Click(Object sender, RoutedEventArgs e) in D:\Code\TorchSharpTest\TorchSharpTest\TorchSharpTest\MainWindow.xaml.cs:line 157
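
For reference, here is a rough sketch of the pinned-allocator callback pattern that THSNN_Module_get_named_parameters depends on. The type and method names are illustrative, not TorchSharp's actual PinnedArray implementation, and the length guard is only an assumption about where a corrupted size_t would first become visible on the managed side.

using System;
using System.Runtime.InteropServices;

// Illustrative delegate matching the native allocator signature
// Tensor* (*allocator)(size_t length): managed code allocates a buffer,
// pins it, and returns its address to the native caller.
delegate IntPtr AllocatePinnedArraySketch(IntPtr length);

sealed class PinnedArraySketch : IDisposable
{
    private GCHandle _handle;

    public IntPtr[] Array { get; private set; } = System.Array.Empty<IntPtr>();

    // Invoked from native code with the number of entries to allocate.
    public IntPtr Allocate(IntPtr length)
    {
        long n = length.ToInt64();

        // Hypothetical guard: a garbage length coming from a corrupted module
        // handle would show up here, before any cast to Int32 can overflow.
        if (n < 0 || n > int.MaxValue)
            throw new OverflowException($"Suspicious allocation length: {n}");

        Array = new IntPtr[n];
        _handle = GCHandle.Alloc(Array, GCHandleType.Pinned);
        return _handle.AddrOfPinnedObject();
    }

    public void Dispose()
    {
        if (_handle.IsAllocated) _handle.Free();
    }
}

Whether the bad value originates in the module handle itself or in the marshalling of these callbacks is exactly what the traces above leave open.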

Issue Analytics

  • State: open
  • Created 8 months ago
  • Comments: 8 (8 by maintainers)

Top GitHub Comments

1 reaction
xhuan8 commented, Jan 19, 2023

I tried version 99.2 and the error is gone; maybe my local code was corrupt.

0 reactions
xhuan8 commented, Jan 28, 2023

THSNN_Module_get_named_parameters failed to retrieve parameters from the module; it seems the pointer points to some invalid address.

System.OverflowException: Arithmetic operation resulted in an overflow.
   at System.IntPtr.op_Explicit(IntPtr value)
   at TorchSharp.PinnedArray`1.CreateArray(IntPtr length)
   at TorchSharp.PInvoke.LibTorchSharp.THSNN_Module_get_named_parameters(HType module, AllocatePinnedArray allocator1, AllocatePinnedArray allocator2)
   at TorchSharp.torch.nn.Module._named_parameters()
   at TorchSharp.torch.nn.Module..ctor(IntPtr handle, Nullable`1 boxedHandle, Boolean ownsHandle)
   at TorchSharp.torch.nn.Module`2..ctor(IntPtr handle, IntPtr boxedHandle)
   at TorchSharp.Modules.Linear..ctor(IntPtr handle, IntPtr boxedHandle)
   at TorchSharp.torch.nn.Linear(Int64 inputSize, Int64 outputSize, Boolean hasBias, Device device, Nullable`1 dtype)
   at TorchSharpTest.MainWindow.button3_Click(Object sender, RoutedEventArgs e) in D:\Code\TorchSharpTest\TorchSharpTest\TorchSharpTest\MainWindow.xaml.cs
Read more comments on GitHub >

Top Results From Across the Web

[JIT] nn.Sequential of nn.Module with input type List ...
Bug I'm trying to convert a nn.Sequential model that consists of some nn.Module that had input type List[torch.Tensor] to Scripted model.

Tanh throws exception, ReLU does not, why? - vision
The issue is that when using nn.Tanh activation and then using "in place addition", the following error is thrown.

Exception When Using torch::nn::Sequential - C++
I registered a sequential variable and saved the result into a member variable called toPatchEmbedding.

torch - Issue with running a single prediction with PyTorch
It seems like your model is not nn.Sequential (pytorch Sequential), but rather torch.legacy.nn.Sequential (a legacy lua torch model).

Guide 3: Debugging in PyTorch
When you start learning PyTorch, it is expected that you hit bugs and errors. To help you debug your code, we will summarize...
