Question on example for adding a recourse method
Hi there!
I have a few (3) questions about how to add a new recourse method.
- In your notebook example, there is:

```python
def get_counterfactuals(self, factuals: pd.DataFrame):
    # this method is responsible for generating and outputting
    # encoded and scaled counterfactual examples
    # as pandas DataFrames
    return counterfactual_examples
```

I wonder what you mean exactly by "encoded and scaled". Does that mean the counterfactuals should follow the same encoding as the `factuals`? Moreover, should there be exactly one counterfactual example per given factual? (I assume that `factuals` is a collection of points for which a counterfactual is needed.) A sketch of the expected return shape is given after the questions below.
- I see you test the counterfactuals according to 4 distance functions. Is there any info on which distance function is used when `get_counterfactuals` is called? I can imagine that you'd want your recourse method to optimize for the distance that is ultimately used for evaluation.
- Is there a way for the recourse method to know the range of variability of a feature? E.g., the min and max of numerical features based on the training set, and the possible categories of categorical features. Otherwise, I can imagine the black-box model could be given an invalid input while searching for the counterfactuals (a number that is too high or too low, or a category that does not exist).
Forgive me if this info is explained somewhere else and I missed it, in which case I’d kindly ask you to point me to it.
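To make the expected return format concrete, here is a minimal sketch of a custom recourse method. It assumes the training data is reachable through the wrapped model as an already encoded and scaled DataFrame (`mlmodel.data.df` is a hypothetical attribute name used for illustration, not a confirmed API), and it only illustrates the contract: one counterfactual row per factual, in the same encoded/scaled feature space as the factuals, with candidates clipped to the range observed in the training set.

```python
import pandas as pd


class MyRecourseMethod:
    """Hypothetical custom recourse method; not a confirmed library interface."""

    def __init__(self, mlmodel):
        self._mlmodel = mlmodel
        # Assumption: the training data is available as an already encoded and
        # scaled DataFrame; use it to derive per-feature valid ranges.
        train_df = mlmodel.data.df
        self._feature_min = train_df.min()
        self._feature_max = train_df.max()

    def get_counterfactuals(self, factuals: pd.DataFrame) -> pd.DataFrame:
        # Return one counterfactual per factual, in the same encoded and
        # scaled feature space (same columns, same order) as the factuals.
        counterfactuals = factuals.copy()
        # ... the actual search would go here; as a placeholder, clip every
        # candidate back into the range seen in the training data so the
        # black-box model never receives an out-of-range input.
        lower = self._feature_min.reindex(counterfactuals.columns)
        upper = self._feature_max.reindex(counterfactuals.columns)
        counterfactuals = counterfactuals.clip(lower=lower, upper=upper, axis=1)
        return counterfactuals
```

If categorical features are one-hot encoded in the same DataFrame, clipping to the observed [0, 1] range also keeps them within valid values; the search logic itself is deliberately elided.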
Top GitHub Comments
I will close this issue now, but feel free to re-open it if the issue is not fixed.
The idea is to keep the recourse method and the evaluation method separate; having a dynamically changed metric would go against this. It is possible to define a recourse method with L1, evaluate it with L1, and repeat for L2. Having the metric depend on the recourse method would force you to do this, and would make it more difficult to, e.g., evaluate an L1 method using an L2 metric. So basically we don't want to decide for the user how exactly they should do evaluation, but rather provide options to do so. As far as I know, the option you described above is fairly easy to do by hand right now.
I don’t know if this is a satisfactory answer, and sorry for the late reply.
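To illustrate the "by hand" evaluation mentioned above, here is a small sketch that scores the same factual/counterfactual pairs under both an L1 and an L2 metric using plain numpy and pandas. It assumes the two DataFrames are row-aligned and share the same encoded/scaled columns, and it does not rely on any library-specific evaluation API.

```python
import numpy as np
import pandas as pd


def l1_distance(factuals: pd.DataFrame, counterfactuals: pd.DataFrame) -> pd.Series:
    # Sum of absolute per-feature differences, one value per pair of rows.
    diff = counterfactuals.to_numpy() - factuals.to_numpy()
    return pd.Series(np.abs(diff).sum(axis=1), index=factuals.index)


def l2_distance(factuals: pd.DataFrame, counterfactuals: pd.DataFrame) -> pd.Series:
    # Euclidean distance per pair of rows.
    diff = counterfactuals.to_numpy() - factuals.to_numpy()
    return pd.Series(np.sqrt((diff ** 2).sum(axis=1)), index=factuals.index)


# Assumed usage: `factuals` and `counterfactuals` are row-aligned DataFrames
# with identical (encoded and scaled) columns, e.g. as returned by
# get_counterfactuals above.
# l1 = l1_distance(factuals, counterfactuals)
# l2 = l2_distance(factuals, counterfactuals)
```

The same counterfactual set can then be compared under either metric without touching the recourse method itself, which is the separation described in the comment above.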