Inconsistent documentation for C parameter in SVM estimators
Description
The current description of the parameter C for sklearn.svm.LinearSVR
is given as:
C : float, optional (default=1.0)
- Penalty parameter C of the error term. The penalty is a squared l2 penalty. The bigger this parameter, the less regularization is used.
which is more verbose than the description given for sklearn.svm.{SVR, SVC, LinearSVC}:
C : float, optional (default=1.0)
- Penalty parameter C of the error term.
Is it time to update the other estimators so that their descriptions match this more verbose one?
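For context, here is a minimal sketch, not taken from the issue itself, of the behavior the verbose description documents: increasing C weakens the regularization, so the fitted coefficients are shrunk less. The toy make_regression data and the printed coefficient norms are illustrative assumptions.

```python
from sklearn.datasets import make_regression
from sklearn.svm import LinearSVR
import numpy as np

# Toy regression problem (illustrative assumption, not from the issue).
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

for C in (0.01, 1.0, 100.0):
    model = LinearSVR(C=C, random_state=0, max_iter=10000).fit(X, y)
    # Larger C -> less regularization -> coefficients are typically shrunk less.
    print(f"C={C:>6}: ||coef||_2 = {np.linalg.norm(model.coef_):.3f}")
```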
Versions
scikit-learn 0.19.2 (latest stable version)
Issue Analytics
- Created 5 years ago
- Comments: 9 (8 by maintainers)
Picking up on this issue with @nahsin
That’s one reason we refer to them as “hinge” and “squared hinge” rather than “l1” and “l2” loss.
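To make that naming concrete, here is a minimal sketch (the toy make_classification data is an illustrative assumption) of the two loss spellings LinearSVC accepts; the older loss="l1"/"l2" aliases were deprecated precisely because they were easy to confuse with the penalty parameter.

```python
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# Toy classification problem (illustrative assumption, not from the issue).
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# "hinge" and "squared_hinge" are the accepted loss names, not "l1"/"l2".
for loss in ("hinge", "squared_hinge"):
    clf = LinearSVC(loss=loss, C=1.0, random_state=0, max_iter=10000).fit(X, y)
    print(f"loss={loss!r}: training accuracy = {clf.score(X, y):.3f}")
```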