
Why is the adjacency matrix passed into each graph convolutional layer, and how should self-loops be handled?

See original GitHub issue

Hi!

I am building a graph convolutional network which will be used in conjunction with a merged layer for a reinforcement learning task.

I have a technical question about the convolutional layer itself that is slightly confusing to me: why is the adjacency matrix passed into every conv layer, and not only the first one? My code is as follows:


import networkx as nx
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Model
from spektral.layers import GCNConv, GlobalAvgPool
from spektral.utils import normalized_adjacency
from spektral.utils.sparse import sp_matrix_to_sp_tensor

adj = nx.to_numpy_array(damage_graph)

# Node features: just the normalised degree of each node
node_features = []
node_degree = nx.degree(damage_graph)
for i in dict(node_degree).values():
    node_features.append(i / len(damage_graph))

node_features_final = np.array(node_features).reshape(-1, 1)

adj_normalised = normalized_adjacency(adj)
adj_normalised = sp_matrix_to_sp_tensor(adj_normalised)
node_feature_shape = 1

nodefeature_input = tf.keras.layers.Input(shape=(node_feature_shape,), name='node_features_input')
adjacency_input = tf.keras.layers.Input(shape=(None,), name='adjacency_input', sparse=True)

# The same normalised adjacency is passed to BOTH conv layers
conv_layer_one = GCNConv(64, activation='relu')([nodefeature_input, adj_normalised])
conv_layer_one = tf.keras.layers.Dropout(0.2)(conv_layer_one)
conv_layer_two = GCNConv(32, activation='relu')([conv_layer_one, adj_normalised])
conv_layer_pool = GlobalAvgPool()(conv_layer_two)
dense_layer_graph = tf.keras.layers.Dense(128, activation='relu')(conv_layer_pool)

input_action_vector = tf.keras.layers.Input(shape=(action_vector,), name='action_vec_input')
action_vector_dense = tf.keras.layers.Dense(128, activation='relu', name='action_layer_dense')(input_action_vector)

merged_layer = tf.keras.layers.Concatenate()([dense_layer_graph, action_vector_dense])
#output_layer... etc
model = Model([nodefeature_input, adjacency_input], [output_layer])

And my second question is about normalized_adjacency: it does not add self-loops. Should self-loops be added before or after normalising the matrix?
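For context on both questions: a GCN layer computes H' = A_hat @ H @ W, where A_hat is the normalised adjacency with self-loops. A_hat never changes between layers, but it multiplies the (changing) node features at every layer, which is why every conv layer takes it as input; and in the standard GCN formulation, self-loops are added before normalising. A minimal NumPy sketch on a toy 3-node path graph (illustration only, not Spektral's actual implementation):

```python
import numpy as np

# Toy 3-node undirected path graph 0-1-2 (made up for illustration)
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])

# Add self-loops BEFORE normalising, so each node's own features
# survive the neighbourhood averaging step
A_tilde = A + np.eye(3)

# Symmetric normalisation: D^{-1/2} (A + I) D^{-1/2}
d = A_tilde.sum(axis=1)
D_inv_sqrt = np.diag(d ** -0.5)
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt

# One GCN layer: H' = A_hat @ H @ W. The SAME A_hat is reused at
# every layer; only H and W change between layers, which is why
# each conv layer needs the adjacency as an input.
H = np.random.randn(3, 4)   # node features
W = np.random.randn(4, 8)   # layer weights
H_next = A_hat @ H @ W
print(H_next.shape)  # (3, 8)
```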

thank you!

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 10 (5 by maintainers)

Top GitHub Comments

1 reaction
amjass12 commented, May 5, 2022

perfect, thank you very much!! I appreciate all your time 😃

0 reactions
amjass12 commented, May 5, 2022

Sorry @danielegrattarola, yes, you are right. It is:

adj = nx.adjacency_matrix(damage_graph)
adj_preprocessed = GCNConv.preprocess(adj)
adj_preprocessed = sp_matrix_to_sp_tensor(adj_preprocessed)

Without the last line it was giving me a datatype error, so sp_matrix_to_sp_tensor is necessary after the GCNConv.preprocess line!
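For anyone landing here: as far as I understand it, GCNConv.preprocess applies the GCN filter from Kipf & Welling, i.e. self-loops are added first and the result is then symmetrically normalised. A rough standalone sketch with SciPy (the helper name is mine, for illustration only):

```python
import numpy as np
import scipy.sparse as sp

def gcn_preprocess_sketch(A):
    """Rough equivalent of the GCN filter: add self-loops first,
    then symmetrically normalise."""
    A = sp.csr_matrix(A, dtype=np.float64)
    A_tilde = A + sp.eye(A.shape[0], format="csr")   # self-loops first
    d = np.asarray(A_tilde.sum(axis=1)).ravel()      # degrees incl. loops
    D_inv_sqrt = sp.diags(d ** -0.5)
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt         # D^{-1/2} (A+I) D^{-1/2}

# Two connected nodes: A_tilde is all-ones, degrees are [2, 2],
# so every entry normalises to 0.5
A = sp.csr_matrix(np.array([[0., 1.],
                            [1., 0.]]))
A_hat = gcn_preprocess_sketch(A)
print(A_hat.toarray())  # [[0.5, 0.5], [0.5, 0.5]]
```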
