
slow training of LSTM

See original GitHub issue

Hi, I just started using Keras. Awesome work! I tried to use an LSTM with the following code:

model = Sequential()
model.add(LSTM(4096, 512, return_sequences=True))
model.add(TimeDistributedDense(512, 4096))
model.add(Activation('time_distributed_softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

And I compared the efficiency with char-rnn, and found that the implementation in Keras is about 4 times slower than Karpathy's (with the same batch size). Am I doing something wrong? I've attached the Theano profile result. Thank you!

Issue Analytics

  • State: closed
  • Created: 8 years ago
  • Comments: 16 (9 by maintainers)

Top GitHub Comments

2 reactions
fchollet commented, Jul 20, 2015

Training time is heavily dependent on network size and batch size. Is it even the same network (size included) at all?

Also, time_distributed_softmax is deprecated now, use softmax instead.
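As a rough illustration of the size dependence fchollet mentions, here is a back-of-envelope parameter count for the layer in the question (input dimension 4096, hidden size 512). This assumes the standard LSTM formulation with four gates; `lstm_param_count` is a hypothetical helper written for this sketch, not part of the Keras API.

```python
def lstm_param_count(input_dim, hidden_dim):
    """Parameters in a standard LSTM layer: four gates (input, forget,
    cell, output), each with an input->hidden projection, a recurrent
    hidden->hidden projection, and a bias vector."""
    per_gate = input_dim * hidden_dim + hidden_dim * hidden_dim + hidden_dim
    return 4 * per_gate

# The layer from the issue: LSTM(4096, 512, ...)
print(lstm_param_count(4096, 512))  # 9439232, i.e. ~9.4M parameters
```

With an input dimension as large as 4096, the input projection dominates the cost, so a seemingly modest hidden size of 512 still yields a heavy layer; comparing wall-clock time against char-rnn is only meaningful if the network sizes and batch sizes actually match.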

0 reactions
kylemcdonald commented, Jul 25, 2015

aha, great! i read through #98 hoping to be able to help, but it seems that implementing stateful training & prediction requires a deeper understanding of theano than i have right now. so i’ll step back and watch from the sidelines 😃


Top Results From Across the Web

  • deep learning - Why does my LSTM take so much time to train?
    The main problem is that training is awfully slow: each iteration of training takes about half a day. Since training usually takes...
  • Training LSTM: 100x Slower on M1 GPU vs. CPU
    Summary: Training an LSTM on M1 GPU vs CPU shows an astounding 168x slower training per epoch. This is based on a relatively...
  • Very slow training on GPU for LSTM NLP multiclass ...
    Hi. The training step of LSTM NN consumes 15+ min just for the first epoch. It seems I made a mistake somewhere.
  • What are some useful tips for training LSTM networks? - Quora
    First of all, training LSTMs can be inherently slow if you have too many neurons in the hidden layer. The inputs to the...
  • Why is Keras LSTM on CPU three times faster than GPU?
    I had started training of neural network and I saw that it is too slow. It is almost three times slower than CPU...
