
_cudnn_rnn_backward is not implemented

See original GitHub issue

Hi, I get the following error in my code, even though I am using torch 1.3: RuntimeError: derivative for _cudnn_rnn_backward is not implemented. I know that it is a PyTorch-related error. I am wondering in which version of PyTorch it has been resolved.
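For context, a minimal sketch of the kind of code that can trigger this error (the model and shapes here are hypothetical, not from the original issue): it typically appears when a second backward pass is taken through a cuDNN-backed RNN.

import torch

device = "cuda"  # the cuDNN path is only taken on a CUDA device
rnn = torch.nn.LSTM(input_size=4, hidden_size=8).to(device)
x = torch.randn(5, 1, 4, device=device, requires_grad=True)

out, _ = rnn(x)
loss = out.sum()

# First-order gradient, kept in the graph so it can be differentiated again.
(grad_x,) = torch.autograd.grad(loss, x, create_graph=True)

# The second backward pass through the cuDNN RNN kernel raises:
# RuntimeError: derivative for _cudnn_rnn_backward is not implemented
grad_x.norm().backward()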

Solved by using the following context manager: with torch.backends.cudnn.flags(enabled=False):
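A minimal sketch of that fix applied to the hypothetical repro above (same rnn and x): running the forward pass and both backward passes inside the context makes PyTorch fall back to its native RNN implementation, which does support double backward.

with torch.backends.cudnn.flags(enabled=False):
    out, _ = rnn(x)
    loss = out.sum()
    (grad_x,) = torch.autograd.grad(loss, x, create_graph=True)
    grad_x.norm().backward()  # no longer dispatches to the cuDNN kernel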

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Reactions: 4
  • Comments: 8 (2 by maintainers)

Top GitHub Comments

tengerye commented, Jan 18, 2020 (3 reactions)

Hi, I figured it out. We should enter torch.backends.cudnn.flags(enabled=False) before the creation of the model, and the context should stay open until after the higher-order derivatives have been computed.
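A sketch of that placement, reusing the hypothetical setup from above: the context is entered before the model is constructed and stays open until after the higher-order derivative has been taken.

with torch.backends.cudnn.flags(enabled=False):
    model = torch.nn.LSTM(input_size=4, hidden_size=8).to(device)
    out, _ = model(x)
    first = torch.autograd.grad(out.sum(), x, create_graph=True)[0]
    second = torch.autograd.grad(first.sum(), x)[0]  # higher-order derivative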

tengerye commented, Jan 17, 2020 (0 reactions)

@nooralahzadeh Yes, I did. The complete function is as follows:

import torch

def Rop(y, x, v):
    """
    Computes the Jacobian-vector product (dy_i/dx_j) v_j (the R-operator)
    via the double-backward trick: differentiate a throwaway
    vector-Jacobian product with respect to its dummy vector.
    """
    # Dummy variables the first backward pass is taken against.
    if isinstance(y, tuple):
        ws = [torch.zeros_like(y_i).requires_grad_(True) for y_i in y]
    else:
        ws = torch.zeros_like(y).requires_grad_(True)

    # cuDNN RNN kernels have no double-backward, so fall back to the
    # native implementation inside this context.
    with torch.backends.cudnn.flags(enabled=False):
        # First pass: J^T ws, kept differentiable via create_graph=True.
        jacobian = torch.autograd.grad(
            y, x, grad_outputs=ws, create_graph=True)

        # Second pass: differentiating J^T ws w.r.t. ws, weighted by v,
        # yields J v. Flatten each tensor in v so the shapes always line up.
        Jv = torch.autograd.grad(
            torch.cat([var.flatten() for var in jacobian]),
            ws,
            grad_outputs=torch.cat([u.flatten() for u in v]),
            retain_graph=True)

    return tuple(j.detach() for j in Jv)
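A hypothetical usage of Rop with an RNN (names and shapes are illustrative, not from the original thread). Note that, per the Jan 18 comment above, you may also need to construct the model inside the torch.backends.cudnn.flags(enabled=False) context.

device = "cuda" if torch.cuda.is_available() else "cpu"
rnn = torch.nn.LSTM(input_size=4, hidden_size=8).to(device)
x = torch.randn(5, 1, 4, device=device, requires_grad=True)
y, _ = rnn(x)                # y has shape (seq_len, batch, hidden_size)
v = (torch.randn_like(x),)   # direction to multiply the Jacobian by
jvp = Rop(y, (x,), v)        # tuple holding (dy/dx) v, shaped like y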

Top Results From Across the Web

derivative for _cudnn_rnn_backward is not implemented
I have no idea how to solve this problem. I implemented torch.autograd.grad to get the gradient penalty loss, but this error just shows up again ...
cudnn RNN backward can only be called in training mode - Stack Overflow
RuntimeError: cudnn RNN backward can only be called in training mode ...
Use of cuDNN RNN
Do you confirm cuDNN already implements stacked RNNs when num_layer > 1? (No need to call the forward/backward methods num_layer times.) ...
Recurrent Neural Networks (RNN) with Keras
CuDNN is only available at the layer level, and not at the cell level. This means LSTM(units) will use the CuDNN...
PyTorch pitfall: RuntimeError: cudnn RNN backward can only ...
When running PyTorch, training works fine, but if you switch to eval() mode and then resume training, you get the error: RuntimeError: cudnn RNN backward can only be called in ...
