Un-deprecate randomized_l1?
The users are revolting at #8995 😃
Do we want to:
- keep it deprecated?
- remove the deprecation (until further notice)?
- re-implement it with a different interface, which determines feature_importances_ and can be wrapped in SelectFromModel (see the sketch below)?
- review potential improvements to the method?
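For the third option, here is a minimal sketch of what such an interface could look like. StabilitySelection is a hypothetical name, and the resampling scheme (row subsampling only, with illustrative alpha/n_resampling/sample_fraction defaults) is an assumption rather than the deprecated implementation; the point is only that any estimator exposing feature_importances_ plugs straight into SelectFromModel.

```python
import numpy as np
from sklearn.base import BaseEstimator
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso


class StabilitySelection(BaseEstimator):
    """Hypothetical sketch: refit a Lasso on random subsamples and expose
    the selection frequency of each feature as feature_importances_."""

    def __init__(self, alpha=1.0, n_resampling=50, sample_fraction=0.75,
                 random_state=None):
        self.alpha = alpha
        self.n_resampling = n_resampling
        self.sample_fraction = sample_fraction
        self.random_state = random_state

    def fit(self, X, y):
        rng = np.random.RandomState(self.random_state)
        n_samples, n_features = X.shape
        counts = np.zeros(n_features)
        for _ in range(self.n_resampling):
            # Refit on a random subset of the samples and record which
            # coefficients are non-zero.
            idx = rng.choice(n_samples, int(self.sample_fraction * n_samples),
                             replace=False)
            coef = Lasso(alpha=self.alpha).fit(X[idx], y[idx]).coef_
            counts += coef != 0
        self.feature_importances_ = counts / self.n_resampling
        return self


X, y = make_regression(n_samples=200, n_features=50, n_informative=5,
                       random_state=0)
# Keep features that are selected in at least half of the resampled fits.
selector = SelectFromModel(StabilitySelection(alpha=0.1, random_state=0),
                           threshold=0.5)
X_reduced = selector.fit_transform(X, y)
print(X_reduced.shape)
```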
Issue Analytics
- State:
- Created 6 years ago
- Comments: 10 (7 by maintainers)
Top GitHub Comments
You mean: 3 users are complaining. Sure. Every time we change something, some people will find that the change does not suit them.
I still think that we should deprecate. If people really want it, they can do a contrib package.
The randomized l1 models are just not reliable. They are interesting from a theoretical perspective, but they are unstable and have too many hyperparameters. They are a cost to us, and not a strong benefit to users.
As I mentioned in #8995, I just don’t know of any replacement to do feature selection in high dimensions. This is just something that is not solved. The fact that there are so many papers but that none of them is used reliably in applications tells us something. And before anyone mentions genomics, let me stress that to my knowledge, careful evaluation of sparse methods in genomics [1] has shown that a simple ANOVA (aka t-test) works better.
[1] http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0028210
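For reference, the simple ANOVA baseline mentioned above corresponds to univariate F-test selection in scikit-learn; a minimal sketch on synthetic data (the wide p >> n problem and k=20 are arbitrary illustrative choices):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# Synthetic stand-in for a wide (p >> n) problem.
X, y = make_classification(n_samples=100, n_features=500, n_informative=10,
                           random_state=0)
# Score each feature independently with the ANOVA F-test and keep the top k.
X_reduced = SelectKBest(f_classif, k=20).fit_transform(X, y)
print(X_reduced.shape)  # (100, 20)
```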
@GaelVaroquaux In my understanding, the reference paper only applies randomization across the samples, while the randomized lasso also scales the columns randomly. So the paper doesn’t exactly show that the randomized lasso is inferior.
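To make the distinction concrete, here is a minimal sketch of a randomized-lasso-style selection frequency that combines both kinds of randomization; the {scaling, 1} column weights are an illustrative assumption, not the exact reweighting scheme of the deprecated implementation.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso


def randomized_lasso_support(X, y, alpha=0.1, scaling=0.5, n_resampling=50,
                             sample_fraction=0.75, random_state=0):
    """Return the fraction of resampled fits in which each feature is selected."""
    rng = np.random.RandomState(random_state)
    n_samples, n_features = X.shape
    counts = np.zeros(n_features)
    for _ in range(n_resampling):
        # Row randomization: refit on a random subset of the samples.
        idx = rng.choice(n_samples, int(sample_fraction * n_samples),
                         replace=False)
        # Column randomization: rescale each feature by a random weight
        # (here drawn from {scaling, 1}), the extra perturbation the
        # randomized lasso adds on top of plain subsampling.
        weights = np.where(rng.rand(n_features) < 0.5, scaling, 1.0)
        coef = Lasso(alpha=alpha).fit(X[idx] * weights, y[idx]).coef_
        counts += np.abs(coef) > 1e-10
    return counts / n_resampling


X, y = make_regression(n_samples=200, n_features=50, n_informative=5,
                       random_state=0)
print(randomized_lasso_support(X, y).round(2))
```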