LSTMTimeStep can't forecast a simple array of numbers...
See original GitHub issue

What is wrong?
LSTMTimeStep cannot correctly predict a simple array of numbers
Where does it happen?
When I call forecast()
How do we replicate the issue?
Just use my code, it’s simple:
const brain = require('brain.js'); // import was missing from the original snippet

const net = new brain.recurrent.LSTMTimeStep();

const trainingConfig = {
  // Default values --> expected validation
  iterations: 10000,          // the maximum times to iterate the training data --> number greater than 0
  errorThresh: 0.00000000001, // the acceptable error percentage from training data --> number between 0 and 1
  log: false,                 // true to use console.log; when a function is supplied it is used --> either true or a function
  logPeriod: 100,             // iterations between logging out --> number greater than 0
  learningRate: 0.3,          // scales with delta to affect training rate --> number between 0 and 1
  momentum: 0.1,              // scales with next layer's change value --> number between 0 and 1
  callback: null,             // a periodic callback that can be triggered while training --> null or function
  callbackPeriod: 10,         // the number of iterations through the training data between callback calls --> number greater than 0
  timeout: Infinity           // the max number of milliseconds to train for --> number greater than 0
};

const training = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
net.train([training], trainingConfig);

const output = net.forecast(training);
console.log(output);
How important is this (1-5)?
2
Expected behavior (i.e. solution)
It should output 11… (±0.5)
Other Comments
When I use forecast(input), it gives me different results depending on the iteration count (first number is iterations, second is the output, which should be 11):
- 500: 13.98
- 1000: 9.5
- 2000: 12.87
- 3000: 10.74
- 10000: 9.84
Issue Analytics
- Created 5 years ago
- Comments: 9 (6 by maintainers)
Top Results From Across the Web

How to Use Timesteps in LSTM Networks for Time Series ...
A model will be used to make a forecast for the time step, ... The specified number of time steps defines the number...

javascript - brain.js - predicting next 10 values - Stack Overflow
I did an example and it seems to work: const net = new brain.recurrent.LSTMTimeStep({ inputSize: 3, hiddenLayers: [10], outputSize: 3 }); //Same ...

brain-js/Lobby - Gitter
I have tried including all files, but it's not happy: Uncaught TypeError: Cannot read property 'LSTMTimeStep' of undefined. Robert Plummer.

What is the definition of LSTM timestep? - Quora
Timestep = the length of the input sequence. For example, if you want to give an LSTM a sentence as an input, your timesteps...

github.com-BrainJS-brain.js_-_2018-12-27_13-15-52
For training with RNNTimeStep, LSTMTimeStep and GRUTimeStep ... Example using an array of arrays of numbers: const net = new ...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@mubaidr: You are using RNN in your first example, which will suffer from exploding or vanishing gradients; that is why LSTM was invented (not by me, but in 1997 by Sepp Hochreiter and Jürgen Schmidhuber), with GRU following later.
The difference between GRU and LSTM is simply that GRU tries to be a bit more performant, but it will always be very close to LSTM. LSTM, though, has been considered the more stable industry standard for recurrent neural networks.
The reason I can say this with confidence is that all three use the same underlying class, with only the formula differing.
Now to the original issue for @fipsi03:
Consider: if you never told a child about "11", only ever helped them count to "10", and then one day asked them what comes next, what would they say?
The answer: they wouldn't know, much as your net does not.
The problem you are running into is simply that you are not giving the net enough information to know what comes next, so it is overfitting to the training data. To get better results, add more, and more meaningful, training data.
On a more technical note: the recurrent, time-step, and feed-forward neural networks that exist in brain.js are all supervised learning algorithms. There are plans to add new unsupervised, reinforcement-style APIs, and we welcome suggestions and help building them, but as it stands they are not here yet.
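One way to act on the "add more training data" advice (a sketch, not from the thread; slidingWindows is a hypothetical helper, not a brain.js API) is to train on many overlapping windows of a longer sequence, so the net sees repeated examples of what follows what instead of a single array:

```javascript
// Build overlapping training windows from a longer sequence, so the net
// sees many examples of "what comes next" rather than one lone array.
// slidingWindows is an illustrative helper name, not part of brain.js.
function slidingWindows(sequence, size) {
  const windows = [];
  for (let i = 0; i + size <= sequence.length; i++) {
    windows.push(sequence.slice(i, i + size));
  }
  return windows;
}

const longSequence = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15];
const trainingData = slidingWindows(longSequence, 5);
// trainingData[0] is [1, 2, 3, 4, 5], trainingData[1] is [2, 3, 4, 5, 6], ...
// These windows could then be passed to net.train(trainingData, trainingConfig).
console.log(trainingData.length); // 11 windows
```

The windows give the net many (input, next-value) relationships drawn from the same sequence, which is what the supervised time-step nets need to generalise rather than memorise.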
Normalised training data works fine: http://jsfiddle.net/Theraga/z5b2ghc4/53/
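For context on what "normalised" means here: a common scheme is min-max scaling into [0, 1] before training and inverting the scaling on the forecast. A minimal sketch with hypothetical helper names (normalize/denormalize are not brain.js APIs, and the linked fiddle may do this differently):

```javascript
// Min-max scale a sequence into [0, 1], remembering min/max so the
// net's output can be mapped back to the original range afterwards.
// normalize/denormalize are illustrative helper names, not brain.js APIs.
function normalize(sequence) {
  const min = Math.min(...sequence);
  const max = Math.max(...sequence);
  return {
    scaled: sequence.map(v => (v - min) / (max - min)),
    min,
    max,
  };
}

function denormalize(value, min, max) {
  return value * (max - min) + min;
}

const { scaled, min, max } = normalize([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]);
// Train the net on `scaled` instead of the raw numbers, then map each
// forecast value back with denormalize(prediction, min, max).
console.log(denormalize(scaled[scaled.length - 1], min, max)); // 10
```

Keeping inputs in [0, 1] matches the range the recurrent nets' activations work best in, which is plausibly why the normalised fiddle behaves better than raw integers.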