
hypotest with poi_test < 1. returns obs CLs = nan (ATLAS-SUSY-2018-06)

See original GitHub issue

Description

Using pyhf.infer.hypotest with poi_test < 1. for ATLAS-SUSY-2018-06 returns a NaN as the observed CLs. Moreover, for poi_test > 1. the observed and expected CLs values seem to be very close to each other.

Expected Behavior

Observed CLs shouldn’t be a NaN for poi_test = 0.999

Actual Behavior

Observed CLs is a NaN for poi_test = 0.999

Steps to Reproduce

Running the following code with pyhf 0.6.0 (with the background-only JSON file from ATLAS-SUSY-2018-06):

import pyhf
pyhf.set_backend(b"pytorch")  # use the pytorch backend (see P.S. below for numpy)
import json
import jsonpatch

# ATLAS-SUSY-2018-06 json file
with open("./SUSY-2018-06_likelihoods/BkgOnly.json", "r") as f:
    bkg = json.load(f)
# A sample patch produced by SModelS
patch = [
    {
        "op": "add",
        "path": "/channels/2/samples/0",
        "value": {
            "data": [
                3.155344731368271
            ],
            "modifiers": [
                {
                    "data": None,
                    "type": "normfactor",
                    "name": "mu_SIG"
                },
                {
                    "data": None,
                    "type": "lumi",
                    "name": "lumi"
                }
            ],
            "name": "bsm"
        }
    },
    {
        "op": "add",
        "path": "/channels/3/samples/0",
        "value": {
            "data": [
                21.84465526863173
            ],
            "modifiers": [
                {
                    "data": None,
                    "type": "normfactor",
                    "name": "mu_SIG"
                },
                {
                    "data": None,
                    "type": "lumi",
                    "name": "lumi"
                }
            ],
            "name": "bsm"
        }
    },
    {
        "op": "remove",
        "path": "/channels/1"
    },
    {
        "op": "remove",
        "path": "/channels/0"
    }
]
# Trying to reproduce the issue: apply the signal patch, build the model, and run hypotest
llhdSpec = jsonpatch.apply_patch(bkg, patch)
msettings = {'normsys': {'interpcode': 'code4'}, 'histosys': {'interpcode': 'code4p'}}
workspace = pyhf.Workspace(llhdSpec)
model = workspace.model(modifier_settings=msettings)
result = pyhf.infer.hypotest(
    0.9999, workspace.data(model), model, test_stat="qtilde", return_expected=True
)
print(result)

This returns (tensor(nan), tensor(1.)). Increasing poi_test above 1.0 gives observed and expected CLs values that are very close to each other (a short scan sketch follows the values below):

mu = 1: (tensor(0.9845), tensor(0.9847))
mu = 2: (tensor(0.0070), tensor(0.0070))
mu = 3: (tensor(1.1504e-07), tensor(1.1509e-07))
mu = 4: (tensor(1.0611e-13), tensor(1.0617e-13))
mu = 5: (tensor(1.8170e-20), tensor(1.8179e-20))
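
For reference, a scan like the one above can be produced with a short loop. This is a minimal sketch, not the code used for the original numbers: it assumes the workspace and model objects from the reproduction snippet above, and the grid of poi_test values is illustrative.

# Sketch: scan poi_test and print observed/expected CLs for each value.
# Assumes `pyhf`, `workspace`, and `model` from the reproduction snippet above.
for mu in [0.5, 0.9999, 1.0, 2.0, 3.0, 4.0, 5.0]:
    cls_obs, cls_exp = pyhf.infer.hypotest(
        mu, workspace.data(model), model, test_stat="qtilde", return_expected=True
    )
    print(f"mu = {mu}: CLs_obs = {cls_obs}, CLs_exp = {cls_exp}")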

P.S.: with numpy as the backend, the problems seem to appear already near poi_test = 0.6.
P.P.S.: similar issues seem to appear with ATLAS-SUSY-2018-22.
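
Switching to the numpy backend for the cross-check mentioned in the P.S. only takes one extra call. A minimal sketch under the same assumptions as above (llhdSpec and msettings from the reproduction snippet; the poi_test values are illustrative):

# Sketch: repeat the test with the numpy backend, where the NaN reportedly
# appears already near poi_test = 0.6. The model is rebuilt after the switch.
pyhf.set_backend("numpy")
workspace = pyhf.Workspace(llhdSpec)
model = workspace.model(modifier_settings=msettings)
for mu in [0.5, 0.6, 0.7]:
    result = pyhf.infer.hypotest(
        mu, workspace.data(model), model, test_stat="qtilde", return_expected=True
    )
    print(f"mu = {mu}: {result}")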

Checklist

  • Run git fetch to get the most up to date version of master (updated to 0.6.0)
  • Searched through existing Issues to confirm this is not a duplicate issue
  • Filled out the Description, Expected Behavior, Actual Behavior, and Steps to Reproduce sections above or have edited/removed them in a way that fully describes the issue
  • Filled out a requirements.txt

Issue Analytics

  • State: open
  • Created: 3 years ago
  • Comments: 32 (13 by maintainers)

Top GitHub Comments

1 reaction
matthewfeickert commented, Sep 24, 2021

> Hi, is anybody looking into this from the pyhf side, or should we contact the ATLAS conveners about that JSON file giving strange results? Thanks, Sabine

I can try to get to this over the weekend or on Monday — I’m sorry that we’ve been slow in responding to all the Issues that people have opened up over the last 2 weeks but the dev team is just a bit backlogged across other work. 😕

Thank you @jackaraz and @sabinekraml for following up on this though. Having people ping on Issues like this helps keep them visible so please don’t stop!

1 reaction
kratsg commented, Apr 29, 2021

If everything looks ok, please feel free to close the issue. Specifically, scipy/scipy#13009 is one of the big improvements we see in SciPy 1.6.0.
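
For anyone landing here later, a quick way to confirm that the installed SciPy already contains the release referenced above (a small sketch, not part of the original comment; it only checks the version number):

# Sketch: verify that SciPy is at least 1.6.0, the release said to include
# the scipy/scipy#13009 improvement; otherwise suggest an upgrade.
import scipy
from packaging.version import Version

print("scipy version:", scipy.__version__)
if Version(scipy.__version__) < Version("1.6.0"):
    print("consider upgrading: python -m pip install -U 'scipy>=1.6.0'")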

