iminuit v2.4.0 breaks test_optim
Description
iminuit v2.4.0 (released today, 2021-02-10) is breaking the 32b and 64b minuit cases of the test_minimize tests.
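For context, a minimal standalone reproduction (a sketch, assuming the same pyhf API the failing test uses and an environment with iminuit v2.4.0 installed) is just an MLE fit with the minuit optimizer; the numbers in the comments are taken from the do_grad-minuit-jax-64b failure output below.

```python
# Hypothetical minimal reproducer, assuming the pyhf API used in the failing test
# and an environment with iminuit==2.4.0 installed.
import pyhf

pyhf.set_backend(pyhf.tensor.jax_backend(precision='64b'), pyhf.optimize.minuit_optimizer())

model = pyhf.simplemodels.hepdata_like([50.0], [100.0], [10.0])
data = pyhf.tensorlib.astensor([125.0] + model.config.auxdata)

# With iminuit v2.3.x the stored reference is ~[0.50027, 0.99963]; with v2.4.0 the
# fit stops earlier and returns ~[0.50005, 1.00000], outside the test's rtol=2e-06.
best_fit = pyhf.infer.mle.fit(data, model, do_grad=True)
print(pyhf.tensorlib.tolist(best_fit))
```

The full pytest failure output: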
_______________ test_minimize[do_grad-minuit-jax-64b-do_stitch] ________________
tensorlib = <class 'pyhf.tensor.jax_backend.jax_backend'>, precision = '64b'
optimizer = <class 'pyhf.optimize.minuit_optimizer'>, do_grad = True
do_stitch = True
@pytest.mark.parametrize('do_stitch', [False, True], ids=['no_stitch', 'do_stitch'])
@pytest.mark.parametrize('precision', ['32b', '64b'], ids=['32b', '64b'])
@pytest.mark.parametrize(
    'tensorlib',
    [
        pyhf.tensor.numpy_backend,
        pyhf.tensor.pytorch_backend,
        pyhf.tensor.tensorflow_backend,
        pyhf.tensor.jax_backend,
    ],
    ids=['numpy', 'pytorch', 'tensorflow', 'jax'],
)
@pytest.mark.parametrize(
    'optimizer',
    [pyhf.optimize.scipy_optimizer, pyhf.optimize.minuit_optimizer],
    ids=['scipy', 'minuit'],
)
@pytest.mark.parametrize('do_grad', [False, True], ids=['no_grad', 'do_grad'])
def test_minimize(tensorlib, precision, optimizer, do_grad, do_stitch):
    pyhf.set_backend(tensorlib(precision=precision), optimizer())
    m = pyhf.simplemodels.hepdata_like([50.0], [100.0], [10.0])
    data = pyhf.tensorlib.astensor([125.0] + m.config.auxdata)
    # numpy does not support grad
    if pyhf.tensorlib.name == 'numpy' and do_grad:
        with pytest.raises(pyhf.exceptions.Unsupported):
            pyhf.infer.mle.fit(data, m, do_grad=do_grad)
    else:
        identifier = f'{"do_grad" if do_grad else "no_grad"}-{pyhf.optimizer.name}-{pyhf.tensorlib.name}-{pyhf.tensorlib.precision}'
        expected = {
            # numpy does not do grad
            'do_grad-scipy-numpy-32b': None,
            'do_grad-scipy-numpy-64b': None,
            'do_grad-minuit-numpy-32b': None,
            'do_grad-minuit-numpy-64b': None,
            # no grad, scipy, 32b - never works
            'no_grad-scipy-numpy-32b': [1.0, 1.0],
            'no_grad-scipy-pytorch-32b': [1.0, 1.0],
            'no_grad-scipy-tensorflow-32b': [1.0, 1.0],
            'no_grad-scipy-jax-32b': [1.0, 1.0],
            # no grad, scipy, 64b
            'no_grad-scipy-numpy-64b': [0.49998815367220306, 0.9999696999038924],
            'no_grad-scipy-pytorch-64b': [0.49998815367220306, 0.9999696999038924],
            'no_grad-scipy-tensorflow-64b': [0.49998865164653106, 0.9999696533705097],
            'no_grad-scipy-jax-64b': [0.4999880886490433, 0.9999696971774877],
            # do grad, scipy, 32b
            'do_grad-scipy-pytorch-32b': [0.49993881583213806, 1.0001085996627808],
            'do_grad-scipy-tensorflow-32b': [0.4999384582042694, 1.0001084804534912],
            'do_grad-scipy-jax-32b': [0.4999389052391052, 1.0001085996627808],
            # do grad, scipy, 64b
            'do_grad-scipy-pytorch-64b': [0.49998837853531425, 0.9999696648069287],
            'do_grad-scipy-tensorflow-64b': [0.4999883785353142, 0.9999696648069278],
            'do_grad-scipy-jax-64b': [0.49998837853531414, 0.9999696648069285],
            # no grad, minuit, 32b - not very consistent for pytorch
            'no_grad-minuit-numpy-32b': [0.49622172117233276, 1.0007264614105225],
            # nb: macos gives different numerics than CI
            # 'no_grad-minuit-pytorch-32b': [0.7465415000915527, 0.8796938061714172],
            'no_grad-minuit-pytorch-32b': [0.9684963226318359, 0.9171305894851685],
            'no_grad-minuit-tensorflow-32b': [0.5284154415130615, 0.9911751747131348],
            # 'no_grad-minuit-jax-32b': [0.5144518613815308, 0.9927923679351807],
            'no_grad-minuit-jax-32b': [0.49620240926742554, 1.0018986463546753],
            # no grad, minuit, 64b - quite consistent
            'no_grad-minuit-numpy-64b': [0.5000493563629738, 1.0000043833598724],
            'no_grad-minuit-pytorch-64b': [0.5000493563758468, 1.0000043833508256],
            'no_grad-minuit-tensorflow-64b': [0.5000493563645547, 1.0000043833598657],
            'no_grad-minuit-jax-64b': [0.5000493563528641, 1.0000043833614634],
            # do grad, minuit, 32b
            'do_grad-minuit-pytorch-32b': [0.5017611384391785, 0.9997190237045288],
            'do_grad-minuit-tensorflow-32b': [0.5012885928153992, 1.0000673532485962],
            # 'do_grad-minuit-jax-32b': [0.5029529333114624, 0.9991086721420288],
            'do_grad-minuit-jax-32b': [0.5007095336914062, 0.9999282360076904],
            # do grad, minuit, 64b
            'do_grad-minuit-pytorch-64b': [0.500273961181471, 0.9996310135736226],
            'do_grad-minuit-tensorflow-64b': [0.500273961167223, 0.9996310135864218],
            'do_grad-minuit-jax-64b': [0.5002739611532436, 0.9996310135970794],
        }[identifier]
        result = pyhf.infer.mle.fit(data, m, do_grad=do_grad, do_stitch=do_stitch)
        rtol = 2e-06
        # handle cases where macos and ubuntu provide very different results numerical
        if 'no_grad-minuit-tensorflow-32b' in identifier:
            # not a very large difference, so we bump the relative difference down
            rtol = 3e-02
        if 'no_grad-minuit-pytorch-32b' in identifier:
            # quite a large difference
            rtol = 3e-01
        if 'do_grad-minuit-pytorch-32b' in identifier:
            # a small difference
            rtol = 7e-05
        if 'no_grad-minuit-jax-32b' in identifier:
            rtol = 4e-02
        if 'do_grad-minuit-jax-32b' in identifier:
            rtol = 5e-03
        # check fitted parameters
>       assert pytest.approx(expected, rel=rtol) == pyhf.tensorlib.tolist(
            result
        ), f"{identifier} = {pyhf.tensorlib.tolist(result)}"
E AssertionError: do_grad-minuit-jax-64b = [0.500049321731032, 1.0000044174002167]
E assert approx([0.5002739611532436 ± 1.0e-06, 0.9996310135970794 ± 2.0e-06]) == [0.500049321731032, 1.0000044174002167]
E + where approx([0.5002739611532436 ± 1.0e-06, 0.9996310135970794 ± 2.0e-06]) = <function approx at 0x7fb30c6b6e50>([0.5002739611532436, 0.9996310135970794], rel=2e-06)
E + where <function approx at 0x7fb30c6b6e50> = pytest.approx
E + and [0.500049321731032, 1.0000044174002167] = <bound method jax_backend.tolist of <pyhf.tensor.jax_backend.jax_backend object at 0x7fb210064b00>>(DeviceArray([0.50004932, 1.00000442], dtype=float64))
E + where <bound method jax_backend.tolist of <pyhf.tensor.jax_backend.jax_backend object at 0x7fb210064b00>> = <pyhf.tensor.jax_backend.jax_backend object at 0x7fb210064b00>.tolist
E + where <pyhf.tensor.jax_backend.jax_backend object at 0x7fb210064b00> = pyhf.tensorlib
tests/test_optim.py:126: AssertionError
The same failure appears in test_minuit_strategy_do_grad, as well as in the test_minuit_strategy_global tests:
__________________ test_minuit_strategy_global[tensorflow-1] ___________________
self = <pyhf.optimize.minuit_optimizer object at 0x7fb2107be700>
func = <function wrap_objective.<locals>.func at 0x7fb228255a60>
x0 = [1.0, 1.0], do_grad = True, bounds = [(0, 10), (1e-10, 10.0)]
fixed_vals = [], options = {}
    def _internal_minimize(
        self, func, x0, do_grad=False, bounds=None, fixed_vals=None, options={}
    ):
        minimizer = self._get_minimizer(
            func, x0, bounds, fixed_vals=fixed_vals, do_grad=do_grad
        )
        result = self._minimize(
            minimizer,
            func,
            x0,
            do_grad=do_grad,
            bounds=bounds,
            fixed_vals=fixed_vals,
            options=options,
        )
        try:
>           assert result.success
E           AssertionError
src/pyhf/optimize/mixins.py:49: AssertionError
During handling of the above exception, another exception occurred:
mocker = <pytest_mock.plugin.MockerFixture object at 0x7fb1e3d59370>
backend = (<pyhf.tensor.tensorflow_backend.tensorflow_backend object at 0x7fb2433ed080>, None)
strategy = 1
@pytest.mark.parametrize('strategy', [0, 1])
def test_minuit_strategy_global(mocker, backend, strategy):
    pyhf.set_backend(pyhf.tensorlib, pyhf.optimize.minuit_optimizer(strategy=strategy))
    spy = mocker.spy(pyhf.optimize.minuit_optimizer, '_minimize')
    m = pyhf.simplemodels.hepdata_like([50.0], [100.0], [10.0])
    data = pyhf.tensorlib.astensor([125.0] + m.config.auxdata)
    do_grad = pyhf.tensorlib.default_do_grad
>   pyhf.infer.mle.fit(data, m)
tests/test_optim.py:217:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/pyhf/infer/mle.py:122: in fit
return opt.minimize(
src/pyhf/optimize/mixins.py:157: in minimize
result = self._internal_minimize(**minimizer_kwargs, options=kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <pyhf.optimize.minuit_optimizer object at 0x7fb2107be700>
func = <function wrap_objective.<locals>.func at 0x7fb228255a60>
x0 = [1.0, 1.0], do_grad = True, bounds = [(0, 10), (1e-10, 10.0)]
fixed_vals = [], options = {}
    def _internal_minimize(
        self, func, x0, do_grad=False, bounds=None, fixed_vals=None, options={}
    ):
        minimizer = self._get_minimizer(
            func, x0, bounds, fixed_vals=fixed_vals, do_grad=do_grad
        )
        result = self._minimize(
            minimizer,
            func,
            x0,
            do_grad=do_grad,
            bounds=bounds,
            fixed_vals=fixed_vals,
            options=options,
        )
        try:
            assert result.success
        except AssertionError:
            log.error(result, exc_info=True)
>           raise exceptions.FailedMinimization(result)
E pyhf.exceptions.FailedMinimization: Optimization failed. Estimated distance to minimum too large.
We will need to investigate what’s up and perhaps loosen tolerances for iminuit.
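If loosening tolerances turns out to be the right call, a quick self-contained check (using the numbers from the do_grad-minuit-jax-64b failure above; the rtol of 1e-03 is illustrative, not a vetted value) suggests the iminuit v2.4.0 results differ from the stored 64b references at the few-1e-04 level:

```python
# Illustrative check: how far is the iminuit v2.4.0 result from the old reference?
import pytest

expected_v23 = [0.5002739611532436, 0.9996310135970794]  # stored reference in the test
observed_v24 = [0.500049321731032, 1.0000044174002167]   # value reported in the failure

assert observed_v24 != pytest.approx(expected_v23, rel=2e-06)  # current tolerance fails
assert observed_v24 == pytest.approx(expected_v23, rel=1e-03)  # a loosened tolerance passes
```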

Remember that Minuit's tolerance and convergence criterion stop the iteration earlier than other minimisers do: the iteration stops once the change in the parameter values becomes negligible compared to the parameter errors. This makes sense for statistical fits; "small" here means roughly 1e-2 to 1e-3 of the error.
The change in the seed changed the fit results a bit, which also required some updates to my unit tests. You need to either loosen the tolerances in your unit tests or reduce Minuit.tol to get a more accurate minimum.
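For concreteness, here is a sketch of the second option using iminuit directly; the quadratic objective and the mu/gamma parameter names are placeholders standing in for pyhf's twice-NLL objective, not pyhf's actual code, and the exact convergence behaviour depends on the iminuit version.

```python
# Sketch: tightening MIGRAD's convergence by lowering Minuit.tol.
from iminuit import Minuit


def objective(mu, gamma):
    # placeholder quadratic with its minimum at (0.5, 1.0)
    return (mu - 0.5) ** 2 + (gamma - 1.0) ** 2


minuit = Minuit(objective, mu=1.0, gamma=1.0)
minuit.errordef = 1  # pyhf's minuit optimizer uses errordef=1 for its twice-NLL objective
minuit.tol = 0.01    # default is 0.1; a smaller tol lowers the EDM threshold MIGRAD stops at
minuit.migrad()
print(minuit.values, minuit.fmin.edm)
```

Reducing tol makes MIGRAD iterate further before declaring convergence, which is the "more accurate minimum" route; loosening the unit-test tolerances is the alternative if the earlier stop is acceptable.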