
result greater than 1 and false! (Digit recognition)

See original GitHub issue


I’m getting started with Brain.js, and I’d like to do handwritten digit recognition. I have a list of 40K images of 28x28 pixels, which makes an input of 784 values, like this:

{ input : { pixel0: 0, pixel1: 0, pixel2: 255, pixel3: 255, pixel4: 0, … } }

The value of each pixel ranges from 0 to 255, white to black. I also tried with an array; there is no difference in the result:

{ input : [0, 0, 255, 0, 0…] }

What is wrong?

The output result is greater than 1 and totally wrong. However, the error rate during the training step is 0.9%.

{ N1: 0.008969820104539394,
  N0: 3.631080502941586e-8,
  N4: 1.5748662463010987e-7,
  N7: 0.0687410980463028,
  N3: 0.0002567432529758662,
  N5: 0.00001275186241400661,
  N8: 0.00001598958988324739,
  N9: 4.807806703865936e-7,
  N2: 0.035766009241342545,
  N6: 0.0015280867228284478 } 
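None of these values is actually greater than 1; entries such as 3.631080502941586e-8 are scientific notation for very small numbers, as the maintainer explains below. As a minimal sketch, the predicted digit can be read off an output object of this shape like so (predictedDigit is a hypothetical helper, not from the issue):

// Illustrative helper (not from the issue): pick the label with the
// highest likelihood from a brain.js-style output object.
function predictedDigit(output) {
  let best = null;
  for (const key of Object.keys(output)) {
    if (best === null || output[key] > output[best]) {
      best = key;
    }
  }
  return best; // for the object above this returns "N7" (0.0687...)
}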

How do we replicate the issue?

Code: https://gist.github.com/lucaspojo/8244dc4d733d5a053cb92b4f3bc63773
Sample of training data: https://gist.github.com/lucaspojo/9a10e4af4f48e1bc1bdae668458c5755
The entire file is 70 MB; if necessary, I can share it.

How important is this (1-5)?

3

Expected behavior (i.e. solution)

EDIT: I think I totally forgot one fundamental thing. The input values must be between 0 and 1; in my example I gave each pixel a value between 0 and 255.

I corrected my code, and the error rate during training went from 0.9% to 0.002%.

[13:34:03] iterations: 1, training error: 0.06892831949742094
[13:34:04] iterations: 2, training error: 0.04353368171559561
[13:34:05] iterations: 3, training error: 0.03535015746493033
[13:34:07] iterations: 4, training error: 0.030196516320904053
[13:34:08] iterations: 5, training error: 0.026896815414783184
[13:34:10] iterations: 6, training error: 0.023825500711139782
[13:34:11] iterations: 7, training error: 0.021304108273752783
[13:34:12] iterations: 8, training error: 0.01930223130633866
[13:34:14] iterations: 9, training error: 0.017322111269441335
[13:34:15] iterations: 10, training error: 0.015641154504538003
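For reference, the correction presumably comes down to dividing each pixel by 255 before training. A minimal sketch, assuming the standard brain.js NeuralNetwork API; rawImages, pixels, label, and the hidden-layer size are illustrative assumptions, not from the issue:

const brain = require('brain.js');

// Illustrative data: in practice this would be the 40K-image dataset.
const rawImages = [
  { label: 7, pixels: new Array(784).fill(0) }
];

// Scale every pixel from 0-255 down to 0-1 before training.
const trainingData = rawImages.map(image => ({
  input: image.pixels.map(p => p / 255),  // 784 values in [0, 1]
  output: { ['N' + image.label]: 1 }      // e.g. { N7: 1 } for a "7"
}));

const net = new brain.NeuralNetwork({ hiddenLayers: [64] });
net.train(trainingData, {
  iterations: 10,
  log: details => console.log(details)    // per-iteration training error
});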

But when tested, the result is always greater than 1 and is wrong.

Other Comments

More information about the dataset: https://www.kaggle.com/c/digit-recognizer/data

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Reactions: 1
  • Comments: 5 (3 by maintainers)

Top GitHub Comments

2 reactions
robertleeplummerjr commented, Oct 13, 2019

It seems the two items in question are:

{ N1: 1.6050071272033506e-9,
  N5: 6.426664640457602e-7,

These contain an e, which is scientific notation for “a very tiny number”. If you run them through JavaScript you’ll see this, for example:

6.426664640457602e-7 > 0.001 -> false
1.6050071272033506e-9 > 0.001 -> false

You can use .toFixed(n) to get a better look at them if you aren’t used to scientific notation.

(6.426664640457602e-7).toFixed(10) -> "0.0000006427"
(1.6050071272033506e-9).toFixed(10) -> "0.0000000016"

2 reactions
robertleeplummerjr commented, Oct 13, 2019

You need to normalize from 0-255 down to between 0 and 1. Alternatively, you may be able to use a different activation than the default, “sigmoid”. But normalizing will let everything train more easily, period; it is simply standard practice with neural networks.

Here would be an applicable normalizer:

function normalize(value) {
  // Scale each 0-255 pixel value down to the 0-1 range
  const normalized = new Float32Array(value.length);
  for (let i = 0; i < value.length; i++) {
    normalized[i] = value[i] / 255;
  }
  return normalized;
}

And an appropriate de-normalizer:

function denormalize(value) {
  // Map 0-1 values back to the original 0-255 range
  const denormalized = new Float32Array(value.length);
  for (let i = 0; i < value.length; i++) {
    denormalized[i] = value[i] * 255;
  }
  return denormalized;
}
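For instance, these helpers might be wired into the training data like this (a sketch; rawSamples and its fields are illustrative names, not from the issue):

// Hypothetical usage: build brain.js training samples from raw
// 0-255 pixel arrays using the normalize helper above.
const trainingData = rawSamples.map(sample => ({
  input: normalize(sample.pixels),     // Float32Array of values in [0, 1]
  output: { ['N' + sample.label]: 1 }  // one-hot style label, e.g. { N7: 1 }
}));

A plain array of numbers would work just as well; Float32Array simply keeps each 784-value vector compact.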

Top Results From Across the Web

Going beyond 99% — MNIST Handwritten Digits Recognition
However, the model suffers from both high variance and high bias problems with a test set accuracy lower than 98.74%. Let us tackle...

Handwritten Digit Recognition using Machine Learning
This article presents recognizing the handwritten digits (0 to 9) from the famous MNIST dataset, comparing classifiers like KNN, PSVM, NN and ...

Digit Recognition using CNN (99% Accuracy) - Kaggle
In this kernel, I have created a Deep Convolutional Neural Network (CNN) model to recognize different handwritten digits and classify them. The dataset...

Digit Recognition in Curvature Space
As for digit 1, we postulate that this curved shape comes from the fact that it is an average over different orientations of...

wrong contours and wrong output of handwritten digit ...
1 Answer 1 · I tried doing this but it messed up the contours and now it is not predicting even a single...
