Add `n_samples` argument to FeaturePermutation as it exists in ShapleyValueSampling
See original GitHub issue
🚀 Feature
~Add `n_samples` argument for all perturbation-based methods as it exists in ShapleyValueSampling.~
Add `n_samples` argument to FeaturePermutation as it exists in ShapleyValueSampling.
Motivation
~Perturbation-based algorithms~ FeaturePermutation computes feature attribution by perturbing input features, so the attribution varies depending on how the input is permuted. To make the estimate more robust, the permutation should be repeated several times; for example, scikit-learn's permutation importance function repeats the permutation 5 times by default.
Finally, it would also be consistent with ShapleyValueSampling.
Pitch
~Implement an `n_samples` argument for the other perturbation algorithms in addition to ShapleyValueSampling.~
Implement an `n_samples` argument for FeaturePermutation.
Alternatives
Do it manually by subclassing every algorithm (FeaturePermutation, ~FeatureAblation and Occlusion~) and overriding the `attribute` method. The new attribution method would then call the base implementation several times and average the results.
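The repeat-and-average idea can be sketched in a library-agnostic way. This is not Captum's API; the `permutation_importance` helper and the toy model below are made up for illustration, following the scheme scikit-learn uses (shuffle a column, measure the metric drop, repeat `n_samples` times, average):

```python
import numpy as np

def permutation_importance(predict, X, y, n_samples=5, seed=0):
    """Averaged permutation feature importance (illustrative sketch).

    For each feature, shuffle its column, measure the drop in accuracy,
    and average the drop over n_samples independent permutations.
    """
    rng = np.random.default_rng(seed)
    base = np.mean(predict(X) == y)  # baseline accuracy on unperturbed data
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_samples):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # permute one feature column in place
            drops.append(base - np.mean(predict(Xp) == y))
        importances[j] = np.mean(drops)  # average over the repeats
    return importances

# Toy model: predicts from feature 0 only; feature 1 is ignored noise.
predict = lambda X: (X[:, 0] > 0).astype(int)
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)
imp = permutation_importance(predict, X, y, n_samples=10)
```

With a single permutation the estimate for feature 0 would be noisy; averaging over `n_samples` repeats stabilizes it, which is exactly what the requested argument would do inside FeaturePermutation.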
Additional context
EDIT: changed the `n_samples` feature request to cover only FeaturePermutation, as it doesn't make sense for the FeatureAblation and Occlusion algorithms.
Issue Analytics
- State:
- Created 3 years ago
- Comments: 7 (3 by maintainers)
@vivekmig Rereading FeatureAblation and Occlusion again, I think you are right and an `n_samples` argument doesn't make sense there. I mainly used FeaturePermutation. Since FeatureAblation and Occlusion are part of the same family (perturbation algorithms), I jumped to conclusions and assumed they'd also benefit from `n_samples`. Sorry for the misunderstanding.
That’s nice!
EDIT: tagged the right person, sorry vishwakftw!
@hal-314, you seem to have tagged the wrong person. 😃