Why not scale values when attribution values are smaller than 1e-5?
When displaying the attribution, you normalise and scale the values.
However, why do you skip normalising when the scaling factor (the maximum value after removing outliers) is below 1e-5?
def _normalize_scale(attr: ndarray, scale_factor: float):
    if abs(scale_factor) < 1e-5:
        warnings.warn(
            "Attempting to normalize by value approximately 0, skipping normalization."
            "This likely means that attribution values are all close to 0."
        )
    ....
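For reference, here is a minimal, illustrative sketch of how such a scale-and-clip normalization behaves on near-zero attributions. This is not Captum's actual implementation; the outlier-percentile scale factor and the skip-and-clip behaviour are assumptions based on the description above.

import warnings
import numpy as np

def normalize_scale_sketch(attr: np.ndarray, outlier_perc: float = 2.0) -> np.ndarray:
    # Assumed behaviour: the scale factor is the largest absolute attribution
    # after dropping the top `outlier_perc` percent of values.
    scale_factor = np.percentile(np.abs(attr), 100 - outlier_perc)
    if abs(scale_factor) < 1e-5:
        warnings.warn(
            "Attempting to normalize by value approximately 0, skipping normalization. "
            "This likely means that attribution values are all close to 0."
        )
        # Normalization is skipped: the tiny values are returned (almost) unchanged,
        # so a downstream heat map renders as essentially blank.
        return np.clip(attr, -1, 1)
    return np.clip(attr / scale_factor, -1, 1)

# Attributions around 1e-6 (as with SmoothGrad on a robust model) trip the check:
tiny_attr = np.random.randn(224, 224) * 1e-6
print(np.abs(normalize_scale_sketch(tiny_attr)).max())  # still ~1e-6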
Use-case: My use-case is interpreting robust models, i.e. models trained with adversarial training on adversarial inputs [1].
On robust models, the gradients with respect to the input are very small (see the picture below, where the x axis represents the attributions before rescaling). Notice that the range is around 1e-3. With SmoothGrad, the gradients are around 1e-5 to 1e-6, which creates issues with Captum.
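A quick way to confirm this on a given model is to compare the raw attribution magnitudes against the 1e-5 threshold before plotting. In the sketch below, `model`, `input_img`, and `target=0` are placeholders, and the parameter names follow recent Captum releases.

from captum.attr import Saliency, NoiseTunnel

# `model` is assumed to be an adversarially trained classifier in eval mode,
# `input_img` a preprocessed batch of shape (1, 3, H, W).
saliency = Saliency(model)
smoothgrad = NoiseTunnel(saliency)

plain = saliency.attribute(input_img, target=0)
smooth = smoothgrad.attribute(input_img, nt_type="smoothgrad", nt_samples=25, target=0)

for name, attr in [("saliency", plain), ("smoothgrad", smooth)]:
    max_abs = attr.abs().max().item()
    print(f"{name}: max |attribution| = {max_abs:.1e} (below 1e-5: {max_abs < 1e-5})")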
Issue with the current warning: For people investigating interpretability of robust models, it is essential to be able to plot the attributions despite potential floating-point issues.
In Jupyter this warning wasn't printed, and it took me hours of digging into Captum to understand why the saliency map was essentially white (because the attributions weren't scaled).
Potential solution: It would be good to allow power-users to bypass this warning (e.g. through a parameter), or simply to disable the check.
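Until such an option exists, one workaround is to rescale the attributions manually before handing them to the visualization helper, so the scale factor seen inside Captum stays well above 1e-5. A rough sketch follows; the percentile choice and the `attr_np` / `img_np` names are assumptions.

import numpy as np
from captum.attr import visualization as viz

def prescale(attr: np.ndarray, outlier_perc: float = 2.0) -> np.ndarray:
    # Rescale so the outlier-clipped maximum becomes 1.0, keeping the internal
    # scale factor well above the 1e-5 threshold.
    scale = np.percentile(np.abs(attr), 100 - outlier_perc)
    return attr / scale if scale > 0 else attr

# `attr_np` (H, W, C attributions) and `img_np` (the input image) are placeholders.
fig, ax = viz.visualize_image_attr(
    prescale(attr_np), img_np, method="blended_heat_map", sign="absolute_value"
)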
[1] Madry et al., Towards Deep Learning Models Resistant to Adversarial Attacks. https://arxiv.org/pdf/1706.06083.pdf
Thank you for the heads up! @vivekmig, please go ahead as you initially proposed and plan this change for a future release 😃