Standardizing the documentation for `alpha` in `Ridge` and `Lasso`
Describe the issue linked to the documentation
I’m helping to teach a machine learning course using sklearn. The documentation for alpha in Ridge is as follows:
Regularization strength; must be a positive float. Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization. Alpha corresponds to 1 / (2C) in other linear models such as LogisticRegression or sklearn.svm.LinearSVC. If an array is passed, penalties are assumed to be specific to the targets. Hence they must correspond in number.
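To illustrate the phrase "larger values specify stronger regularization", here is a minimal sketch (assuming scikit-learn and NumPy are installed; the toy data is invented for demonstration) showing that the coefficient norm shrinks as alpha grows:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy regression data (made up for this example).
rng = np.random.RandomState(0)
X = rng.randn(50, 3)
y = X @ np.array([1.0, -2.0, 3.0]) + 0.1 * rng.randn(50)

# Larger alpha -> stronger L2 penalty -> smaller coefficient norm.
norms = [np.linalg.norm(Ridge(alpha=a).fit(X, y).coef_)
         for a in (0.1, 10.0, 1000.0)]
```

For ridge regression the coefficient norm is non-increasing in alpha, so `norms` comes out strictly decreasing on this data.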
The documentation for alpha in Lasso is as follows:
Constant that multiplies the L1 term. Defaults to 1.0. alpha = 0 is equivalent to an ordinary least square, solved by the LinearRegression object. For numerical reasons, using alpha = 0 with the Lasso object is not advised. Given this, you should use the LinearRegression object.
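The "alpha = 0 is equivalent to an ordinary least square" claim can be checked numerically. A minimal sketch (assuming scikit-learn and NumPy; since alpha = 0 itself is discouraged, a very small alpha is used instead to approach the OLS solution):

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

# Toy regression data (made up for this example).
rng = np.random.RandomState(0)
X = rng.randn(50, 3)
y = X @ np.array([1.0, -2.0, 3.0]) + 0.1 * rng.randn(50)

ols = LinearRegression().fit(X, y)
# A tiny alpha approximates alpha = 0 without the numerical issues
# the docstring warns about.
lasso = Lasso(alpha=1e-8, max_iter=100_000).fit(X, y)
```

On this data the two coefficient vectors agree to within numerical tolerance, which is exactly why the docstring recommends LinearRegression for the unpenalized case.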
Between the differing documentation for alpha in these models, plus C being the inverse regularization strength in the closely related LogisticRegression, there has understandably been some confusion for me and some of my students.
Suggest a potential alternative/fix
The two models are closely related, and I think standardizing the description of alpha to be the same in Ridge and Lasso (making the appropriate L2/L1 changes, respectively) would go a long way toward preventing the confusion students have been encountering. The documentation for alpha in Lasso seems the most straightforward to me, so I would propose using that description in both Lasso and Ridge.
Issue Analytics
- State:
- Created 3 years ago
- Comments: 8 (7 by maintainers)
The API page for each class is automatically generated from the docstring: https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_ridge.py#L603-L617 https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_coordinate_descent.py#L904-L915
@Dpananos please take a look at the PR. Thanks!