
[question] Why can't I build a CNN policy that behaves like an MLP policy?

See original GitHub issue

Hi, I hope this is the right place for this question. I’m creating a custom policy for a project, and for several reasons I wanted to build a convolutional neural network that is, on paper, the same as an MLP with three hidden layers [128, 64, 64].

My MLP policy works fine, but I can’t reproduce its results with a CNN policy, even though I’ve dug into the functions I use and it should behave exactly like an MLP.

Here is my custom CNN policy (n_arrays is 1 for now; this parameter exists because the reason I wanted to build a CNN extractor was to mimic multiple MLP extractors when my observation space is multiple arrays):

import numpy as np
import tensorflow as tf
# in stable-baselines 2.x these helpers live in a2c.utils (common.tf_layers in later versions)
from stable_baselines.a2c.utils import conv, conv_to_fc

def custom_cnn(scaled_images, **kwargs):
    activ = tf.nn.relu
    init_scale = np.sqrt(2)  # was undefined in the original snippet; sqrt(2) matches nature_cnn
    n_arrays = scaled_images.shape[1]
    filter_width = scaled_images.shape[2]

    # each (1, width) kernel spans a full row, so every filter acts like
    # one unit of a fully connected layer applied to that row
    layer_1 = activ(conv(scaled_images, 'c1', n_filters=128,
                         filter_size=(1, filter_width), stride=1,
                         init_scale=init_scale, **kwargs))
    layer_1 = tf.reshape(layer_1, [-1, n_arrays, 128, 1])

    layer_2 = activ(conv(layer_1, 'c2', n_filters=64, filter_size=(1, 128),
                         stride=1, init_scale=init_scale, **kwargs))
    layer_2 = tf.reshape(layer_2, [-1, n_arrays, 64, 1])

    layer_3 = activ(conv(layer_2, 'c3', n_filters=64, filter_size=(1, 64),
                         stride=1, init_scale=init_scale, **kwargs))
    layer_3 = tf.reshape(layer_3, [-1, n_arrays, 64, 1])
    # flatten the (n_arrays, 64, 1) feature map into the latent vector
    return conv_to_fc(layer_3)

So basically, each time I do a convolution, it is over an “image” of shape (1, width) with a kernel of shape (1, width) and n filters, which should be equivalent to a fully connected layer of size n.
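A quick standalone check of this equivalence (illustrative, not from the thread): a (1, W) “valid” convolution over a (1, W) input evaluates each filter at exactly one position, which is the same linear map as a dense layer whose weight matrix is the stacked kernels.

import numpy as np

rng = np.random.default_rng(0)
W, n_filters = 5, 3
x = rng.standard_normal((1, W))                # one "row image"
kernels = rng.standard_normal((n_filters, W))  # each filter spans the full width

# each filter fits at a single position, so the conv output is one dot product per filter
conv_out = np.array([np.sum(x[0] * k) for k in kernels])
fc_out = (x @ kernels.T)[0]                    # the equivalent fully connected layer

assert np.allclose(conv_out, fc_out)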

However, I get terrible results with such a policy compared to the MLP one. What have I got wrong? I’m positive I haven’t made a stupid mistake with the shapes of my arrays, so why do these two implementations behave so differently during training?

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 8

Top GitHub Comments

1 reaction
vbelus commented, Aug 29, 2019

Thank you, creating a custom FeedForwardPolicy class to set scale to False indeed gives me results similar to the MLP.
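For readers landing here, a minimal sketch of what such a class might look like. This is not code from the thread: it assumes stable-baselines 2.x, where FeedForwardPolicy hard-codes scale=(feature_extraction == "cnn"), so one way to get scale=False is to subclass ActorCriticPolicy directly and mirror FeedForwardPolicy’s CNN branch. The class name and import paths are assumptions; it reuses custom_cnn from above.

import tensorflow as tf
from stable_baselines.common.policies import ActorCriticPolicy
from stable_baselines.a2c.utils import linear  # common.tf_layers in newer versions

class NoScaleCnnPolicy(ActorCriticPolicy):
    # Hypothetical sketch: a CNN policy that skips the automatic image scaling
    # by passing scale=False to the base class.
    def __init__(self, sess, ob_space, ac_space, n_env, n_steps, n_batch,
                 reuse=False, **kwargs):
        super(NoScaleCnnPolicy, self).__init__(sess, ob_space, ac_space, n_env,
                                               n_steps, n_batch, reuse=reuse,
                                               scale=False)  # the key difference
        with tf.variable_scope("model", reuse=reuse):
            pi_latent = vf_latent = custom_cnn(self.processed_obs, **kwargs)
            self._value_fn = linear(vf_latent, 'vf', 1)
            self._proba_distribution, self._policy, self._q_value = \
                self.pdtype.proba_distribution_from_latent(pi_latent, vf_latent,
                                                           init_scale=0.01)
        self._setup_init()

    def step(self, obs, state=None, mask=None, deterministic=False):
        action_op = self.deterministic_action if deterministic else self.action
        action, value, neglogp = self.sess.run(
            [action_op, self.value_flat, self.neglogp], {self.obs_ph: obs})
        return action, value, self.initial_state, neglogp

    def proba_step(self, obs, state=None, mask=None):
        return self.sess.run(self.policy_proba, {self.obs_ph: obs})

    def value(self, obs, state=None, mask=None):
        return self.sess.run(self.value_flat, {self.obs_ph: obs})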

1 reaction
araffin commented, Aug 28, 2019

So do you mean it is enough to just provide the float input data and set scale=False, and then it should work?

That depends on what you mean by “it works”. Will you avoid doing a second normalization? Yes. Will it succeed in solving the task? Maybe not (you may need hyperparameter tuning). Also, a CNN assumes some locality property in your input data (as in images). It seems the data @vbelus is working on is not really an image (it is a 1D vector), and an MLP usually works fine on that type of data.

Do I need to rewrite the whole FeedForwardPolicy class?

It seems you need to either write a custom FeedForwardPolicy class (this should not be too hard) or make sure the data you provide looks like images, so that the normalization that is applied does not break the learning.
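To make the “break the learning” point concrete (an illustrative snippet, not from the thread): with scale=True, stable-baselines rescales observations using the Box bounds, roughly (obs - low) / (high - low). For a [0, 255] image space that is just obs / 255, but for unit-scale float features in a wide Box the same formula squashes everything into a tiny band around a constant:

import numpy as np

low, high = -100.0, 100.0                 # a wide float Box, hypothetical bounds
obs = np.array([0.7, -1.2, 0.05])         # typical unit-scale features
print((obs - low) / (high - low))         # ~[0.5035, 0.4940, 0.5003]: signal nearly gone

pixels = np.array([0.0, 128.0, 255.0])    # the uint8 image case the scaling targets
print((pixels - 0.0) / (255.0 - 0.0))     # [0.0, 0.502, 1.0]: sensible normalization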

As fully convolutional networks are useful for many more applications than image analysis

Fully convolutional networks for RL? You mean convolutions? I agree for convolutions that are not 2D.

The flag is set automatically because the CnnPolicy only uses 2D conv layers afterwards, and in most RL use cases this corresponds to images. However, it could be good to add a 1D convolution (or another convolution type) as a feature extractor (and maybe add an example of a CNN using 1D convolutions to the documentation); this would also have the effect of disabling the normalization. See here for what I’m talking about: https://github.com/hill-a/stable-baselines/blob/7048a63841a3808a14ae5bc8643ee8a83ae64a21/stable_baselines/common/policies.py#L559
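A hedged sketch of what such a 1D-convolution feature extractor could look like (not code from the thread or from the library; the function name, shapes, and layer sizes are illustrative, assuming TF 1.x):

import tensorflow as tf
from stable_baselines.a2c.utils import conv_to_fc  # common.tf_layers in newer versions

def custom_cnn_1d(processed_obs, **kwargs):
    # Illustrative 1D extractor: assumes processed_obs is (batch, length);
    # tf.layers.conv1d expects (batch, length, channels), hence the expand_dims.
    activ = tf.nn.relu
    obs_3d = tf.expand_dims(processed_obs, axis=-1)
    layer_1 = activ(tf.layers.conv1d(obs_3d, filters=64, kernel_size=3,
                                     strides=1, padding='valid', name='c1'))
    layer_2 = activ(tf.layers.conv1d(layer_1, filters=64, kernel_size=3,
                                     strides=1, padding='valid', name='c2'))
    return conv_to_fc(layer_2)  # flatten to the latent vector the policy head expects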

Read more comments on GitHub >

