
Custom Loss function

See original GitHub issue

From now on, we recommend using our discussion forum (https://github.com/rusty1s/pytorch_geometric/discussions) for general questions.

❓ Questions & Help

I would like to optimize the following self-supervised loss function (given as an equation image in the original issue), where h denotes the node features and u and v are two neighboring nodes.
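
The equation image itself is not preserved in this mirror. Judging from the code below, the intended objective appears to be a sum of dot products over edges (an assumption, not confirmed by the original image):

    \mathcal{L} = \sum_{(u,v) \in E} h_u^{\top} h_v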

For this purpose, I created the following class:

import torch
import torch.nn as nn

class SelfSupervLoss(nn.Module):
    def __init__(self):
        super(SelfSupervLoss, self).__init__()

    def forward(self, data):
        # Loop over all edges (s, d) and take the dot product of the
        # endpoint features; note t1 is overwritten on every iteration,
        # so only the last edge reaches the sum below.
        for s, d in data.edge_index.t().tolist():
            t1 = torch.dot(data.x[s], data.x[d]).detach().clone().requires_grad_(True)
        return torch.sum(t1)

But during training, the loss remains constant. This may be due to an incorrect implementation of the loss function. Could you briefly check?

Another possible reason for the unchanging loss could be the optimizer. Perhaps when using the self-supervised loss function we need to explicitly register h_u and h_v as parameters to optimize? If so, how could we do that? Currently I use the Adam optimizer in the form optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4).

Thank you!
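
Regarding the optimizer question above: in a typical PyG setup the node embeddings are produced by the model itself, so model.parameters() already covers everything the loss depends on; h_u and h_v do not need to be registered separately. A minimal sketch (the two-layer GCN encoder here is hypothetical, not the original poster's model):

import torch
from torch_geometric.nn import GCNConv

class Encoder(torch.nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, hid_dim)

    def forward(self, x, edge_index):
        # h_u and h_v are rows of the returned matrix; they are functions
        # of the conv weights, so optimizing model.parameters() is enough.
        h = self.conv1(x, edge_index).relu()
        return self.conv2(h, edge_index)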

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 6 (2 by maintainers)

Top GitHub Comments

1 reaction
rusty1s commented, Aug 22, 2021

The for-loop is unnecessary as well:

def forward(self, x, edge_index):
    src, dst = edge_index
    # Vectorized over all edges at once; no detach(), so gradients flow back to x.
    return (x[src] * x[dst]).mean()
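
Assembled into a self-contained check, this could look as follows (a sketch; the linear encoder and random data here are made up for illustration):

import torch
import torch.nn as nn

class SelfSupervLoss(nn.Module):
    def forward(self, x, edge_index):
        src, dst = edge_index
        # Mean of the elementwise products h_u * h_v over all edges and
        # feature dimensions; the result stays in the autograd graph.
        return (x[src] * x[dst]).mean()

# Toy usage (encoder and data are hypothetical):
encoder = nn.Linear(16, 8)
x = torch.randn(10, 16)
edge_index = torch.randint(0, 10, (2, 40))

criterion = SelfSupervLoss()
optimizer = torch.optim.Adam(encoder.parameters(), lr=0.01, weight_decay=5e-4)

loss = criterion(encoder(x), edge_index)
optimizer.zero_grad()
loss.backward()
optimizer.step()  # the loss now changes across steps
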
1 reaction
Km3888 commented, Aug 22, 2021

The issue seems to be that you’re calling .detach() in the forward pass, which prevents gradient computation.
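
A quick standalone sketch of the effect (illustrative, not from the original thread):

import torch

x = torch.randn(4, 8, requires_grad=True)

# With .detach(): the result is cut from the autograd graph. Calling
# .requires_grad_(True) afterwards starts a *new* graph at t, so
# gradients never reach x and training cannot change the embeddings.
t = torch.dot(x[0], x[1]).detach().clone().requires_grad_(True)
t.backward()
print(x.grad)  # None -- nothing flowed back to x

# Without .detach(): gradients reach x as expected.
loss = torch.dot(x[0], x[1])
loss.backward()
print(x.grad is not None)  # True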
