Custom Loss function
From now on, we recommend using our discussion forum (https://github.com/rusty1s/pytorch_geometric/discussions) for general questions.
❓ Questions & Help
I would like to optimize the following self-supervised loss function

L = Σ_{(u,v) ∈ E} h_u · h_v

where h_u denotes the feature vector of node u, and u and v are two neighboring nodes (E is the edge set).
For this purpose, I created the following class:
```python
class SelfSupervLoss(nn.Module):
    def __init__(self):
        super(SelfSupervLoss, self).__init__()

    def forward(self, data):
        for s, d in data.edge_index.t().tolist():
            t1 = torch.dot(data.x[s], data.x[d]).detach().clone().requires_grad_(True)
        return torch.sum(t1)
```
But during training, the loss remains constant. This may be due to an incorrect implementation of the loss function. Could you briefly check?

Another reason for the non-changing loss could be the optimizer. Maybe when using the self-supervised loss function, we need to explicitly register h_u and h_v as parameters to optimize in the optimizer? If yes, how could we do it?
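One common pattern (a sketch with hypothetical sizes, not taken from this issue) is to hold the node features in an `nn.Embedding`, whose weight matrix is an ordinary parameter that the optimizer will update:

```python
import torch
import torch.nn as nn

# Hypothetical sizes for illustration only.
num_nodes, dim = 100, 16

# Node features h as a learnable embedding table, so h_u and h_v
# are rows of a parameter tensor the optimizer can update directly.
emb = nn.Embedding(num_nodes, dim)
optimizer = torch.optim.Adam(emb.parameters(), lr=0.01, weight_decay=5e-4)
```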
Currently, I use the Adam optimizer in the form

```python
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
```
Thank you!
Issue Analytics
- State:
- Created 2 years ago
- Comments: 6 (2 by maintainers)
Top GitHub Comments
The issue seems to be that you’re calling `.detach()` in the forward pass, which prevents gradient computation.

The for-loop is unnecessary as well.