
Loss of precision of parameters in a model file

See original GitHub issue

Double-precision floating-point parameters in svm-train, such as gamma, are written to the model file with the %g format by svm_save_model in svm.cpp. %g keeps only six significant digits, so precision is lost and the kernel function evaluated by svm-predict differs slightly from the one used in svm-train. The effect on prediction accuracy may be small, especially when Qfloat is defined as float, but I think the parameters should be written in a format like %.16g.

Issue Analytics

  • State: open
  • Created 5 years ago
  • Comments: 7 (6 by maintainers)

Top GitHub Comments

2 reactions
cjlin1 commented, Jul 17, 2018

FYI, in the libsvm 3.23 released 2 days ago this has been corrected.

Tavian Barnes writes:

%.16 should be enough due to the normalized representation
(i.e., 17 digits in total)

%.16g is only 16 digits total. Here’s an example that shows it’s not enough:

#include <stdio.h>

int main() {
    double a = 18014398509481982.0;
    double b = 18014398509481980.0;
    printf("%.16g\n%.16g\n%s\n", a, b, a == b ? "true" : "false");
    return 0;
}

Output:

1.801439850948198e+16
1.801439850948198e+16
false

From the famous “What Every Computer Scientist Should Know About Floating-Point Arithmetic” paper, Theorem 15:

... The same argument applied to double precision shows
that 17 decimal digits are required to recover a double
precision number.


0 reactions
cjlin1 commented, Apr 6, 2018

I think you are right… We will change the code later.
