
KBinsDiscretizer produces wrong bins with repeated small values

See original GitHub issue

KBinsDiscretizer fails to produce a quantile discretization when most of the input entries share the same value and that value falls into the first bin:

import numpy as np
from sklearn.preprocessing import KBinsDiscretizer
kb = KBinsDiscretizer(n_bins=3, encode="ordinal", strategy="quantile")
a = np.array([0,0,0,0,0,0,0,0,1,2]).reshape((-1,1))
kb.fit(a).transform(a)
/home/david/anaconda3/lib/python3.7/site-packages/sklearn/preprocessing/_discretization.py:222: UserWarning: Bins whose width are too small (i.e., <= 1e-8) in feature 0 are removed. Consider decreasing the number of bins.
  'decreasing the number of bins.' % jj)
array([[0.],
       [0.],
       [0.],
       [0.],
       [0.],
       [0.],
       [0.],
       [0.],
       [0.],
       [0.]])

In this case, there’s no reason why the last two elements could not be assigned to the highest bucket instead of being grouped with the rest.
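
To see why everything collapses into a single bin, it helps to recompute the quantile edges directly. The sketch below uses np.quantile as an illustration of the mechanism (it is not the library's exact code path): the 1/3 and 2/3 quantiles of this sample are both 0, so the two inner edges coincide with the first edge, get discarded as zero-width bins, and only one bin survives.

import numpy as np

a = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 2])

# 80% of the sample is 0, so the 1/3 and 2/3 quantiles land on 0 as well
# and the inner bin edges collapse onto the first one.
edges = np.quantile(a, np.linspace(0, 1, 3 + 1))
print(edges)  # [0. 0. 0. 2.]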

Issue Analytics

  • State: open
  • Created 3 years ago
  • Comments: 7 (7 by maintainers)

Top GitHub Comments

1 reaction
david-cortes commented, Feb 11, 2021

But usually the reason one wants to use the quantile strategy is that the distribution being transformed is skewed or multi-modal, in which case the uniform strategy is not useful.
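
As a rough illustration of that point (the exponential sample below is invented for the example and is not from the issue), the uniform strategy piles a skewed sample into its first bin, while quantile splits it roughly evenly:

import numpy as np
from sklearn.preprocessing import KBinsDiscretizer

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=1000).reshape(-1, 1)  # right-skewed data

for strategy in ("uniform", "quantile"):
    kb = KBinsDiscretizer(n_bins=3, encode="ordinal", strategy=strategy)
    codes = kb.fit_transform(x).ravel().astype(int)
    # "uniform" puts most points in bin 0; "quantile" gives three bins of
    # roughly equal size.
    print(strategy, np.bincount(codes, minlength=3))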

Also in that example the right output (assuming quantiles) would be to assign the last two observations to the same bucket, since values greater than zero constitute less than a third of the input.
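
For reference, the proportion in question is easy to check on the array from the issue:

import numpy as np

a = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 2])
print(np.mean(a > 0))  # 0.2 -- well under one third of the sample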

0 reactions
azihna commented, Feb 18, 2021

I just wanted to clear up one thing I got wrong above: during transform, the outermost bin edges are actually replaced with -inf and inf, as seen here. Sorry for any confusion about that.
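
A quick way to see the effect of those infinite outer edges (a small illustration with made-up data, not code from the thread): values far outside the fitted range are still mapped into the first or last bin rather than rejected.

import numpy as np
from sklearn.preprocessing import KBinsDiscretizer

kb = KBinsDiscretizer(n_bins=3, encode="ordinal", strategy="quantile")
x = np.arange(9, dtype=float).reshape(-1, 1)  # 0..8 gives three well-separated bins
kb.fit(x)
print(kb.bin_edges_[0])  # roughly [0. 2.67 5.33 8.]
# Out-of-range values do not raise: the outer edges behave like -inf / +inf.
print(kb.transform([[-100.0], [100.0]]).ravel())  # [0. 2.]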

Read more comments on GitHub >

Top Results From Across the Web

sklearn.KBinsDiscretizer return 0 for all bins - Stack Overflow
Trying to create bins using KBinsDiscretizer, but it gives me back only zero-annotated bins. # transform the dataset with KBinsDiscretizer ...

sklearn.preprocessing.KBinsDiscretizer
Parameters: n_bins : int or array-like of shape (n_features,), default=5. The number of bins to produce. Raises ValueError if n_bins < 2.

Preprocessing with sklearn: a complete and comprehensive ...
To give our code some meaning, we'll create a very small data set with three features and five samples. The data contains obvious...

Intuition for Binning, KBinsDiscretizer - 16: Scikit-learn 13
The video discusses the intuition behind binning and KBinsDiscretizer in Scikit-learn in Python. Timeline (Python 3.8): 00:00 - Outline of ...

Feature Engineering in Snowflake
KBinsDiscretizer. Bin continuous data into intervals. There are a couple of choices to make here in the scikit-learn function. The encoder ...
