
Problem when using timeDistributed with Convolution1D layer

See original GitHub issue

I’m trying to use the TimeDistributed layer wrapper to combine CNN and LSTM layers. I followed the example here: https://github.com/jamesmf/mnistCRNN, but with Convolution1D instead of Convolution2D layers. However, I get an error when adding the TimeDistributed-wrapped convolution layer on top of the Embedding layer (shown below). Could you help me check what is wrong here? Thanks!

model_aa = Sequential()
model_aa.add(Embedding(input_dim=1000, output_dim=100, input_length=maxlen, dropout=0.2))
model_aa.add(TimeDistributed(Convolution1D(nb_filter=20, filter_length=5, border_mode='valid', activation='relu', subsample_length=1, init='glorot_normal')))
model_aa.add(TimeDistributed(MaxPooling1D(pool_length=3, border_mode='valid')))
model_aa.add(TimeDistributed(Convolution1D(nb_filter=10, filter_length=5, border_mode='valid', activation='relu', subsample_length=1, init='glorot_normal')))
model_aa.add(TimeDistributed(MaxPooling1D(pool_length=3, border_mode='valid')))
model_aa.add(TimeDistributed(Convolution1D(nb_filter=10, filter_length=3, border_mode='valid', activation='relu', subsample_length=1, init='glorot_normal')))
model_aa.add(TimeDistributed(MaxPooling1D(pool_length=3, border_mode='valid')))
model_aa.add(TimeDistributed(Convolution1D(nb_filter=10, filter_length=3, border_mode='valid', activation='relu', subsample_length=1, init='glorot_normal')))
model_aa.add(Bidirectional(LSTM(15, dropout_W=0.2, dropout_U=0.2, init='glorot_normal')))

IndexError                                Traceback (most recent call last)
<ipython-input-94-70c7027ba53e> in <module>()
      1 model_aa = Sequential()
      2 model_aa.add(Embedding(input_dim=1000, output_dim=100, input_length=maxlen, dropout=0.2))
----> 3 model_aa.add(TimeDistributed(Convolution1D(nb_filter=20, filter_length=5, border_mode='valid', activation='relu', subsample_length=1, init='glorot_normal')))
      4 model_aa.add(TimeDistributed(MaxPooling1D(pool_length=3, border_mode='valid')))
      5 model_aa.add(TimeDistributed(Convolution1D(nb_filter=10, filter_length=5, border_mode='valid', activation='relu', subsample_length=1, init='glorot_normal')))

/home/wjin/anaconda/envs/wjin/lib/python2.7/site-packages/keras/models.pyc in add(self, layer)
    310                               output_shapes=[self.outputs[0]._keras_shape])
    311         else:
--> 312             output_tensor = layer(self.outputs[0])
    313         if type(output_tensor) is list:
    314             raise Exception('All layers in a Sequential model '

/home/wjin/anaconda/envs/wjin/lib/python2.7/site-packages/keras/engine/topology.pyc in __call__(self, x, mask)
    485                                 'layer.build(batch_input_shape)')
    486             if len(input_shapes) == 1:
--> 487                 self.build(input_shapes[0])
    488             else:
    489                 self.build(input_shapes)

/home/wjin/anaconda/envs/wjin/lib/python2.7/site-packages/keras/layers/wrappers.pyc in build(self, input_shape)
     96         child_input_shape = (input_shape[0],) + input_shape[2:]
     97         if not self.layer.built:
---> 98             self.layer.build(child_input_shape)
     99             self.layer.built = True
    100         super(TimeDistributed, self).build()

/home/wjin/anaconda/envs/wjin/lib/python2.7/site-packages/keras/layers/convolutional.pyc in build(self, input_shape)
    113
    114     def build(self, input_shape):
--> 115         input_dim = input_shape[2]
    116         self.W_shape = (self.filter_length, 1, input_dim, self.nb_filter)
    117         self.W = self.init(self.W_shape, name='{}_W'.format(self.name))

IndexError: tuple index out of range
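For context, the failure mode in the traceback can be traced by hand: in Keras 1.x, TimeDistributed.build drops the time axis before building the wrapped layer, so the Convolution1D only sees a two-entry shape, and its build then indexes input_shape[2] and fails. A minimal sketch of that shape arithmetic (maxlen=400 is a hypothetical value; the shapes, not the actual library code, are reproduced here):

```python
# Embedding output shape: (batch, maxlen, output_dim); maxlen=400 is hypothetical
embedding_output_shape = (None, 400, 100)

# TimeDistributed.build (wrappers.pyc line 96 above) strips the time axis:
child_input_shape = (embedding_output_shape[0],) + embedding_output_shape[2:]
print(child_input_shape)  # (None, 100) -- only two entries left

# Convolution1D.build (convolutional.pyc line 115 above) then reads index 2:
try:
    child_input_shape[2]
except IndexError as err:
    print("IndexError:", err)  # tuple index out of range
```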

Please make sure that the boxes below are checked before you submit your issue. Thank you!

  • Check that you are up-to-date with the master branch of Keras. You can update with: pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps

  • If running on Theano, check that you are up-to-date with the master branch of Theano. You can update with: pip install git+git://github.com/Theano/Theano.git --upgrade --no-deps

  • Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short).

Issue Analytics

  • State: closed
  • Created: 7 years ago
  • Comments: 15

Top GitHub Comments

4 reactions
thomasmooon commented, Jun 6, 2018

In my case I solved this by reshaping the input. As described in the Keras API docs, the input to a Conv1D layer must have shape (batch_size, steps, input_dim). To wrap it with the TimeDistributed layer, one needs to add a dimension, because the combined TimeDistributed(Conv1D(...)) expects a tensor of shape (batch_size, sequence, steps, input_dim). Hence one needs to reshape the tensor from 3D to 4D. This behaviour is described implicitly in the examples of the TimeDistributed documentation.

Example: my input tensor (batch_size, steps, input_dim) has shape (?, 12, 1000):

  • 1000 features
  • 12 observations per feature = 12 timepoints

The goal is to apply a 1D convolution over the 12 timepoints with weights shared between features (hence, time-distributed).

Approach: reshape the (?, 12, 1000) tensor to a (?, 1000, 12, 1) tensor. After applying a 1D convolution with 3 kernels of width 5, the output tensor has shape (?, 1000, 8, 3). More precisely, given a np.array X with shape e.g. (1189, 12, 1000), one retrieves a tensor of shape (1189, 1000, 12, 1) with X.reshape((-1, 1000, 12, 1)).
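The shape bookkeeping above can be checked with plain NumPy (the sizes 1189, 12, and 1000 are taken from the example; the 'valid' output length follows steps - kernel_width + 1):

```python
import numpy as np

# Example batch from above: 1189 samples, 12 timepoints, 1000 features
X = np.zeros((1189, 12, 1000))

# Add the extra dimension so TimeDistributed(Conv1D(...)) sees 4D input
X4 = X.reshape((-1, 1000, 12, 1))
print(X4.shape)  # (1189, 1000, 12, 1)

# Expected Conv1D output length for border_mode='valid': steps - width + 1
steps, width, n_kernels = 12, 5, 3
out_steps = steps - width + 1
print((X4.shape[0], X4.shape[1], out_steps, n_kernels))  # (1189, 1000, 8, 3)
```

Note that reshape here only relabels axes; if each feature's 12 timepoints must actually be contiguous in memory, one would use np.transpose(X, (0, 2, 1))[..., None] instead — the resulting shape is the same either way.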

3 reactions
iamjli commented, May 11, 2017

Ditto


