Suggestion: Fill with feature mean instead of 0 for VarianceThreshold.inverse_transform
Today, the sklearn.feature_selection.VarianceThreshold.inverse_transform method fills in zeros for features that were removed for having too-small variance.
This is certainly predictable, easy to implement and easy to explain.
However, filling in zeros without regard to the data passed to fit means that the reconstruction error can become arbitrarily large. For example, suppose that one of the features in your data always takes the value 10**6. This clearly has zero variance, since it always takes the same value; however, filling in zeros for that feature when the data is round-tripped through transform and inverse_transform will produce an output which dramatically differs from the input.
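For concreteness, a minimal sketch of the behavior described above (the constant value and the exact numbers are purely illustrative):

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold

rng = np.random.default_rng(0)
# Second column is constant with a large magnitude.
X = np.column_stack([rng.normal(size=100), np.full(100, 1e6)])

vt = VarianceThreshold()          # default threshold drops zero-variance columns
Z = vt.fit_transform(X)           # the constant column is removed
X_back = vt.inverse_transform(Z)  # the removed column comes back as all zeros

print(np.abs(X - X_back).mean())  # roughly 5e5, driven entirely by the zero-filled column
```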
Instead, I think it would make sense to fill in the column means when using inverse_transform. The means can be computed and stored from the data passed to fit. This will make the reconstruction of the transformed data via inverse_transform more closely reflect the data that was passed to fit, because any column removed for having variance less than the threshold must, by definition, be tightly grouped around its sample mean.
Naturally, in the special case where the removed features already have zero sample mean in the data passed to fit, the proposed inverse_transform behaves exactly as it does today, since it would fill in zeros for those features.
In terms of code, this just means keeping an array of column means in addition to the indices of the removed columns.
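A rough sketch of what this could look like, written here as a small subclass rather than as a patch to scikit-learn itself; the class name and the `means_` attribute are made up for illustration, and it assumes dense input:

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold


class MeanFillVarianceThreshold(VarianceThreshold):
    """Illustrative sketch: fill removed columns with training means on inverse_transform."""

    def fit(self, X, y=None):
        super().fit(X, y)
        # Store the column means of the training data (hypothetical attribute name).
        self.means_ = np.asarray(X).mean(axis=0)
        return self

    def inverse_transform(self, X):
        X_back = super().inverse_transform(X)      # zeros in the dropped columns
        dropped = ~self.get_support()              # mask of removed features
        X_back[:, dropped] = self.means_[dropped]  # fill with training means instead
        return X_back
```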
Of course, it’s possible that I have missed an important subtlety, or there is a competing concern which outweighs the argument that I’ve outlined here. If that’s the case, I’d like to know what I’ve missed!
So I’m not sure if this change will be beneficial. Why should VarianceThreshold have different behavior from other feature selection methods? You could argue the same for other feature selectors, right?
I’m also not sure I understand the semantics of looking at reconstruction error in feature selection. @Sycor4x can you explain why you’re doing this or what the interpretation of this is?
@jnothman did you mean adding a parameter to the method or the class? Do we want to later deprecate the parameter or keep it?
Somehow I missed the notifications for this. My recollection from several years ago is that I expected `transform` and `inverse_transform` to be inverses. To be explicit, if `Z = obj.fit_transform(X)`, then my expectation is that `np.abs(X - obj.inverse_transform(Z)).mean()` should be small, for instance.
The reason I cared, as far as I can recall, is that if I'm applying a sequence of transformations to `X` (e.g. a `Pipeline`) and I want to assess how much information is discarded end-to-end, this has a sharp corner: "un-doing" the zero-variance screen can introduce a surprisingly large error, and that error has more to do with how the back-transformation is performed than with the data or the `Pipeline`.
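As a purely illustrative sketch of that kind of end-to-end check (the pipeline steps and data here are only an example, and it assumes every step implements `inverse_transform`):

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import VarianceThreshold

rng = np.random.default_rng(0)
# Constant, high-magnitude column alongside an informative one.
X = np.column_stack([rng.normal(size=200, scale=3.0), np.full(200, 1e6)])

pipe = Pipeline([("screen", VarianceThreshold()),
                 ("scale", StandardScaler())])

Z = pipe.fit_transform(X)
X_back = pipe.inverse_transform(Z)

# With zero-filling, this error is dominated by the dropped constant column.
print(np.abs(X - X_back).mean())
```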