Implementation of knowledge graph attention
Note: From now on, we recommend using our discussion forum (https://github.com/rusty1s/pytorch_geometric/discussions) for general questions.
❓ Questions & Help
Hi, I’d like to implement the function as follows:
That is, I need to calculate an attention score alpha between the head entity and its relations, and an attention score beta between each relation and the tail entity.
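The formula itself was attached as an image that did not survive; based on the description, the intended two-level attention is presumably of roughly this form (the score functions $f$, $g$ and the neighborhood notation here are illustrative choices, not the original notation):

$$
\alpha_{h,r} = \frac{\exp\big(f(h, r)\big)}{\sum_{r' \in \mathcal{N}_R(h)} \exp\big(f(h, r')\big)},
\qquad
\beta_{r,t} = \frac{\exp\big(g(r, t)\big)}{\sum_{t' \in \mathcal{N}_T(r)} \exp\big(g(r, t')\big)}
$$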
So, how can I use PyG to compute an attention score over only a part of the neighbors, rather than all of them?
Note: I can compute the numerator, but it's not clear how to compute the denominator, i.e. the normalization over just the selected subset.
The above refers to the dimensions of the tensors passed to the softmax function, where num_edge is the total number of edges in edge_index.
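For illustration, here is a minimal sketch of restricting PyG's grouped softmax to a subset of edges; `scores`, `dst`, and `subset` are hypothetical tensors, not from the original post:

```python
import torch
from torch_geometric.utils import softmax

# Hypothetical setup: one unnormalized score per edge, the target node
# of each edge (edge_index[1]), and a boolean mask marking the subset
# of neighbors that the normalization should run over.
scores = torch.randn(8)                                  # [num_edge]
dst    = torch.tensor([0, 0, 0, 1, 1, 2, 2, 2])          # [num_edge]
subset = torch.tensor([1, 1, 0, 1, 0, 1, 1, 0], dtype=torch.bool)

# Softmax only over the selected edges: the denominator sums exp(score)
# over the masked neighbors of each target node, not over all of them.
attn = torch.zeros_like(scores)
attn[subset] = softmax(scores[subset], dst[subset], num_nodes=3)
```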
Looking forward to your help! Thanks a lot!
Top GitHub Comments
Thanks a lot for your patience. I have tried a trick that addresses it. The main idea is to control the indices so that the softmax function can still be used: we just need to prepare some indices in advance, which does not affect training speed. I also made a custom adjustment to your softmax function:
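(The adjusted snippet was attached as an image that is not preserved here. A minimal sketch of what such a scatter-based softmax over precomputed groups could look like follows; `grouped_softmax` and its arguments are illustrative names, not the original code.)

```python
import torch
from torch_scatter import scatter

def grouped_softmax(src, index, num_groups):
    """Numerically stable softmax over arbitrary, precomputed groups.

    Each entry of `index` assigns the matching entry of `src` to a
    group, so the denominator only sums over entries that share the
    same index value (e.g. the selected neighbors of one head entity).
    """
    src_max = scatter(src, index, dim=0, dim_size=num_groups, reduce='max')
    out = (src - src_max.index_select(0, index)).exp()
    denom = scatter(out, index, dim=0, dim_size=num_groups, reduce='sum')
    return out / (denom.index_select(0, index) + 1e-16)
```

By remapping `index` in advance so that only the neighbors of interest share a group, the denominator is restricted to exactly that subset.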
The detailed attention computation procedure is as follows:
Focusing on the example in the figure, the final result is:
Overall, thanks a lot for your patience and for sharing!
Sorry to reply so late. However, I failed to reproduce it…