Why is the adjacency matrix passed into each graph convolutional layer? And self-loops
Hi!
I am building a graph convolutional network which will be used in conjunction with a merged layer for a reinforcement learning task.
I have a technical question about the convolutional layer itself that is slightly confusing to me: why is the adjacency matrix passed into each conv layer, and not ONLY the first one? My code is as follows:
import networkx as nx
import numpy as np
import tensorflow as tf
from tensorflow.keras import Model
from spektral.layers import GCNConv, GlobalAvgPool
from spektral.utils import normalized_adjacency
from spektral.utils.sparse import sp_matrix_to_sp_tensor

adj = nx.to_numpy_array(graph)

node_features = []  # just the degree of the graph nodes, normalised by graph size
node_degree = nx.degree(graph)
for i in dict(node_degree).values():
    node_features.append(i / len(graph))
node_features_final = np.array(node_features).reshape(-1, 1)

adj_normalised = normalized_adjacency(adj)
adj_normalised = sp_matrix_to_sp_tensor(adj_normalised)

node_feature_shape = 1
nodefeature_input = tf.keras.layers.Input(shape=(node_feature_shape,), name='node_features_input')
adjacency_input = tf.keras.layers.Input(shape=(None,), name='adjacency_input', sparse=True)

# the adjacency Input is fed to *every* GCNConv layer, not only the first
# (adj_normalised is then passed to the model at call time via this Input)
conv_layer_one = GCNConv(64, activation='relu')([nodefeature_input, adjacency_input])
conv_layer_one = tf.keras.layers.Dropout(0.2)(conv_layer_one)
conv_layer_two = GCNConv(32, activation='relu')([conv_layer_one, adjacency_input])
conv_layer_pool = GlobalAvgPool()(conv_layer_two)
dense_layer_graph = tf.keras.layers.Dense(128, activation='relu')(conv_layer_pool)

input_action_vector = tf.keras.layers.Input(shape=(action_vector,), name='action_vec_input')
action_vector_dense = tf.keras.layers.Dense(128, activation='relu', name='action_layer_dense')(input_action_vector)
merged_layer = tf.keras.layers.Concatenate()([dense_layer_graph, action_vector_dense])
# output_layer... etc
model = Model([nodefeature_input, adjacency_input, input_action_vector], [output_layer])
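For intuition on the first question: a GCN layer computes (roughly) ReLU(A_hat @ H @ W), where A_hat is the normalised adjacency, H is the incoming node-feature matrix, and W is that layer's weights. Only H is transformed and passed forward; A_hat is a fixed operator that every layer must re-apply to aggregate each node's neighbourhood, which is why it is an input to every conv layer. A minimal dense numpy sketch (not Spektral's actual implementation, which also handles sparse tensors and data modes; shapes and names here are hypothetical):

import numpy as np

def gcn_layer(a_hat, h, w):
    # one round of neighbourhood aggregation: ReLU(A_hat @ H @ W)
    return np.maximum(a_hat @ h @ w, 0.0)

n = 5
a_hat = np.random.rand(n, n)   # stands in for the normalised adjacency
h0 = np.random.rand(n, 1)      # node features (here: one degree feature per node)
w1 = np.random.rand(1, 64)     # hypothetical first-layer weights
w2 = np.random.rand(64, 32)    # hypothetical second-layer weights

h1 = gcn_layer(a_hat, h0, w1)  # aggregates 1-hop neighbourhoods
h2 = gcn_layer(a_hat, h1, w2)  # 2-hop: a_hat is re-used, only h flows forward

Stacking k such layers aggregates information from k-hop neighbourhoods, but the graph structure itself never changes, so the same A_hat is supplied to each layer.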
And my second question is about normalized_adjacency: it does not add self-loops. Should self-loops be added before or after normalising the matrix? (See the sketch after this question.)
thank you!
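For context on the second question: in the standard GCN formulation (Kipf & Welling), self-loops are added before normalising, i.e. A_hat = D~^(-1/2) (A + I) D~^(-1/2), where D~ is the degree matrix recomputed from A + I. In Spektral, GCNConv.preprocess applies this filter in one call (see the comments below). A minimal sketch, assuming adj is the dense adjacency from above:

import numpy as np

a_tilde = adj + np.eye(adj.shape[0])       # add self-loops FIRST
deg = a_tilde.sum(axis=1)                  # degrees recomputed after adding loops
d_inv_sqrt = np.diag(deg ** -0.5)          # D~^(-1/2)
a_hat = d_inv_sqrt @ a_tilde @ d_inv_sqrt  # symmetrically normalised adjacency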
Top GitHub Comments
perfect, thank you very much!! I appreciate all your time 😃
sorry @danielegrattarola - yes, you are right, it is as in the snippet below. Without the last line it was giving me a datatype error, so sp_matrix_to_sp_tensor is necessary after the GCNConv.preprocess line!
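The code block in that comment was dropped from this capture; from the surrounding text it was presumably along these lines (a reconstruction, not the verbatim original):

adj = GCNConv.preprocess(adj)      # adds self-loops and applies the GCN normalisation
adj = sp_matrix_to_sp_tensor(adj)  # convert to a SparseTensor; omitting this caused the datatype error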