Questions about input layers (model summary)
Hi, I'm new to graph CNNs and working through the StellarGraph tutorials. I ran the GraphSAGE Cora node classification example, graphsage-cora-example.py.
This is part of the model summary.
__________________________________________________________________________________
Layer (type)            Output Shape           Param #    Connected to
==================================================================================
input_2 (InputLayer)    [(None, 20, 1433)]     0
__________________________________________________________________________________
input_3 (InputLayer)    [(None, 200, 1433)]    0
__________________________________________________________________________________
input_1 (InputLayer)    [(None, 1, 1433)]      0
__________________________________________________________________________________
I have two questions.
- Why are there multiple input layers?
- What do the numbers in these output shapes indicate? I know the number 1433 comes from the unique words in the Cora dataset (right?)
I read the original GraphSAGE paper, but I still don't understand…
Issue Analytics
- Created 4 years ago
- Comments: 9 (5 by maintainers)
Top GitHub Comments
@mimisen-boop yes, there can be an arbitrary number of layers in the model. However, adding more layers means increasing the “expressive power” of the resulting model (as each layer comes with its own learnable weights), and might lead to overfitting and worse generalisation power - e.g., see Figure 5 in the GCN paper. In our demos, we just chose 2-layer models as “good enough” models for demo purposes. But in general you are right - the number of layers, dimensionality of their outputs, etc. are model hyperparameters, and should be tuned to the dataset and the problem being solved, using proper hyperparameter tuning protocols.
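To connect this back to the original question: in GraphSAGE, a k-layer model takes k+1 input tensors, one per hop of the sampled neighbourhood, which is why the summary shows multiple InputLayers. The following is a minimal sketch, assuming the StellarGraph 1.x API and a 2-layer model with neighbour samples [20, 10] (consistent with the shapes in the summary above; the layer sizes here are illustrative, not necessarily the demo's exact values):

```python
# Sketch of a 2-layer GraphSAGE node model (StellarGraph 1.x API assumed).
from stellargraph import datasets
from stellargraph.mapper import GraphSAGENodeGenerator
from stellargraph.layer import GraphSAGE
from tensorflow.keras import Model

G, node_subjects = datasets.Cora().load()  # each node has 1433 bag-of-words features

# Two GraphSAGE layers: sample 20 first-hop and 10 second-hop neighbours per node.
generator = GraphSAGENodeGenerator(G, batch_size=50, num_samples=[20, 10])
graphsage = GraphSAGE(layer_sizes=[32, 32], generator=generator, dropout=0.5)

x_inp, x_out = graphsage.in_out_tensors()  # x_inp is a *list* of Input tensors
Model(inputs=x_inp, outputs=x_out).summary()
# One InputLayer per hop:
#   (None,   1, 1433) -- the target nodes themselves (hop 0)
#   (None,  20, 1433) -- 20 sampled 1-hop neighbours per target
#   (None, 200, 1433) -- 20 * 10 sampled 2-hop neighbours per target
```

In general, `len(num_samples)` equals the number of GraphSAGE layers, and the product of the sample counts up to each hop gives the neighbourhood size at that depth.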
@mimisen-boop to change the number of layers in that example, you'd need to change the `--neighbour_samples` and `--layer_size` arguments. E.g., to have a 3-layer model, one can set them to something like `--layer_size 20 20 20` and `--neighbour_samples 20 10 5`, or whatever (these values are hyperparameters too). The notebook `demos/node-classification/graphsage/graphsage-cora-node-classification-example.ipynb` might be clearer on that.
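Concretely, a 3-layer run of the script would look something like this (flag names as given in the comment above; the sample counts are illustrative hyperparameters):

```
python graphsage-cora-example.py --layer_size 20 20 20 --neighbour_samples 20 10 5
```

Following the hop pattern described earlier, this should yield four InputLayers in the summary (hops 0 through 3), with the deepest input covering 20 * 10 * 5 = 1000 sampled neighbours per target node.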