Resuming training on Neural Network
Hi, the Neural Network gives me weird results. Maybe I'm missing something. Please help.
What is wrong?
Neural Network result is incorrect.
Where does it happen?
I’m running 2.0.0-alpha.12 on Node 13 on a Mac.
How do we replicate the issue?
First, I tried:
const brain = require('brain.js');

const net = new brain.NeuralNetwork();

const trainingData = {
  input: { a: 0, b: 1 },
  output: { good: 1 },
};
net.train(trainingData);

const trainingData2 = {
  input: { a: 1, b: 0 },
  output: { bad: 1 },
};
net.train(trainingData2);

const result = net.run({ a: 0, b: 1 });
console.log(result);
// Output: { good: 0.060428500175476074 }
// I expect: { good: 1 }
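For comparison, the usual brain.js pattern is to pass every example to a single train() call as an array, so the output layer is built from all output keys at once. A minimal sketch of that pattern (illustrative only, not the reporter's original code):

const brain = require('brain.js');

const net = new brain.NeuralNetwork();

// Train on every example in one call; train() accepts an array of
// { input, output } pairs, so both output keys ('good' and 'bad')
// are known to the network from the start.
net.train([
  { input: { a: 0, b: 1 }, output: { good: 1 } },
  { input: { a: 1, b: 0 }, output: { bad: 1 } },
]);

console.log(net.run({ a: 0, b: 1 })); // should favour 'good'
console.log(net.run({ a: 1, b: 0 })); // should favour 'bad'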
Then I tried serialising as advised, but it still doesn't work:
const brain = require('brain.js');

// first training
const trainingData = {
  input: { a: 0, b: 1 },
  output: { good: 1 },
};
const net = new brain.NeuralNetwork();
net.train(trainingData);
const stringifiedNet = JSON.stringify(net.toJSON());

// second training
const trainingData2 = {
  input: { a: 1, b: 0 },
  output: { bad: 1 },
};
const net2 = new brain.NeuralNetwork();
net2.fromJSON(JSON.parse(stringifiedNet));
net2.train(trainingData2);
const stringifiedNet2 = JSON.stringify(net2.toJSON());

// prediction
const net3 = new brain.NeuralNetwork();
net3.fromJSON(JSON.parse(stringifiedNet2));

const result = net3.run({ a: 0, b: 1 });
console.log(result);
// Output: { good: 0.07364676147699356 }
// I expect: { bad: 0, good: 1 }

const result2 = net3.run({ a: 1, b: 0 });
console.log(result2);
// Output: { good: 0.07043380290269852 }
// I expect: { bad: 1, good: 0 }
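As a sanity check that serialisation itself is not the problem, a round trip through toJSON()/fromJSON() with no retraining in between should leave predictions unchanged. A minimal sketch using the same API calls as above:

const brain = require('brain.js');

const net = new brain.NeuralNetwork();
net.train([{ input: { a: 0, b: 1 }, output: { good: 1 } }]);

// Serialise, then restore into a fresh network without calling train() again.
const restored = new brain.NeuralNetwork();
restored.fromJSON(JSON.parse(JSON.stringify(net.toJSON())));

// Both networks should produce the same result, since no further training happened.
console.log(net.run({ a: 0, b: 1 }));
console.log(restored.run({ a: 0, b: 1 }));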
How important is this (1-5)?
4
Expected behavior (i.e. solution)
I expect the NN to return the results described in the code comments.
Other Comments
You guys do an awesome job building brain.js!
EDIT: I've seen a comment on StackOverflow saying that keepNetworkIntact has been renamed to reinforce, but I can't see this property in INeuralNetworkTrainingOptions.
EDIT2: I read somewhere that I should pass all the data every time I run train. Is there any way around that? I train the NN every day with new daily statistics; it seems insane to have to recalculate them for all the previous days (i.e. the past 2 years) every time.
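One possible workaround, sketched below under the assumption that the raw daily statistics can be stored, is to persist the accumulated training data itself (not just the network), append each day's new examples, and retrain on the full set. The file name and data shape here are hypothetical:

const fs = require('fs');
const brain = require('brain.js');

// Hypothetical file holding every { input, output } pair collected so far.
const DATA_FILE = './training-data.json';

const history = fs.existsSync(DATA_FILE)
  ? JSON.parse(fs.readFileSync(DATA_FILE, 'utf8'))
  : [];

// Append today's example(s) and save the growing dataset back to disk.
history.push({ input: { a: 1, b: 0 }, output: { bad: 1 } });
fs.writeFileSync(DATA_FILE, JSON.stringify(history));

// Retrain from scratch on the whole history rather than only today's data.
const net = new brain.NeuralNetwork();
net.train(history);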
Issue Analytics
- State:
- Created 4 years ago
- Comments: 12 (6 by maintainers)
Top Results From Across the Web

Keras: Starting, stopping, and resuming training
In this tutorial, you will learn how to use Keras to train a neural network, stop training, update your learning rate, and then...

How to resume training in neural networks properly?
I'm working on training a network to identify different kinds of cells. For each experimental batch, I would take my previous model weight,...

Effective Model Saving and Resuming Training in PyTorch
In this tutorial, we will be taking a look at how to train and save deep learning neural network models effectively.

Resume Training from Checkpoint Network - MathWorks
This example shows how to save checkpoint networks while training a deep learning network and resume training from a previously saved network.

Resuming Training and Checkpoints in Python TensorFlow ...
In this video, I show how to halt training and continue with Keras. ... of Applications of Deep Neural Networks for TensorFlow and...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@GELight This is supported, but if new neurons (in this case, new object keys) are added, this breaks the network’s design.
Are all the keys of the object you are feeding in known in advance?
To achieve a network that can train to learn new things:
net.fromJSON(json)
If you find this functionality not working, then there is a bug, and if you could include the scripting used to find that bug, I can get a quick fix in.
It is a very high priority issue, and I plan on looking at it tomorrow morning.
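Based on that comment, a hedged sketch of the continue-training flow it describes: declare every input and output key in the very first training run (even with zero values), serialise, and later restore with fromJSON() before training again, so no new neurons need to be added. Whether this actually preserves the earlier weights in 2.0.0-alpha.12 is exactly what this issue is tracking:

const brain = require('brain.js');

// First run: include every key the network will ever see, even if some are 0.
const net = new brain.NeuralNetwork();
net.train([
  { input: { a: 0, b: 1 }, output: { good: 1, bad: 0 } },
  { input: { a: 1, b: 0 }, output: { good: 0, bad: 1 } },
]);
const saved = net.toJSON();

// Later run: restore the previous weights, then train on new data
// that uses only the keys already known to the network.
const net2 = new brain.NeuralNetwork();
net2.fromJSON(saved);
net2.train([
  { input: { a: 1, b: 1 }, output: { good: 0, bad: 1 } },
]);

console.log(net2.run({ a: 0, b: 1 }));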