
How to add data to experiment in Ax when it's not possible to correlate external trial evaluation job to trial run outcomes (metric values)


I am trying to do some online optimization following the modular/Developer API tutorials online. It seems, however, that most of these tutorials assume that 1) the trial you generate is the one that you evaluate, and 2) you can somehow compute the objective from the arms. Taking the Booth metric example (https://ax.dev/tutorials/building_blocks.html) for instance, this assumes that I have some programmatic way of getting the result from the arm parameters.

In actual practice, I have no way of guaranteeing that the trial I generate is actually pushed to prod without modification, so I’m always just observing results. (Much like the “attach_trial / complete_trial” paradigm in the Service API.) Moreover, I can’t compute the objective from the arms, except perhaps by storing the (arm, result) pairs somewhere else and then looking them up. That seems very inelegant, though. What would be nice is if I could just attach the result directly to the arms. I see trials allow for metadata - why not arms?
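For reference, the Service API pattern I mean looks roughly like this (a minimal sketch; the search space and the "booth" objective name are placeholders, not from my actual setup):

from ax.service.ax_client import AxClient

ax_client = AxClient()
ax_client.create_experiment(
    name="booth_experiment",
    parameters=[
        {"name": "x1", "type": "range", "bounds": [-10.0, 10.0]},
        {"name": "x2", "type": "range", "bounds": [-10.0, 10.0]},
    ],
    objective_name="booth",
    minimize=True,
)

# Attach the configuration that was actually deployed, then report the
# externally observed (mean, sem) against the returned trial index.
parameters, trial_index = ax_client.attach_trial(parameters={"x1": 1.2, "x2": 3.4})
ax_client.complete_trial(trial_index=trial_index, raw_data={"booth": (42.0, 0.0)})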

Taking the BoothMetric as an example, what I’d like to do in the for loop is just grab the objective mean + sem from the arm (e.g., mean = arm.metadata[objective_name]), having previously recorded it. I can do this at the trial level, though, not at the arm level. (I could of course attach metadata to the trial that holds information about all the arms, but this seems inelegant; a sketch of that trial-level lookup follows the snippet below.)

import pandas as pd

from ax import Data, Metric


class BoothMetric(Metric):
    def fetch_trial_data(self, trial):
        records = []
        for arm_name, arm in trial.arms_by_name.items():
            params = arm.parameters
            # Why can't I do something like this instead of recomputing?
            # mean = arm.metadata[objective_name]
            # sem = arm.metadata['sem']
            records.append({
                "arm_name": arm_name,
                "metric_name": self.name,
                # Booth function evaluated directly from the arm parameters.
                "mean": (params["x1"] + 2 * params["x2"] - 7) ** 2
                + (2 * params["x1"] + params["x2"] - 5) ** 2,
                "sem": 0.0,
                "trial_index": trial.index,
            })
        return Data(df=pd.DataFrame.from_records(records))
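For completeness, the trial-level workaround mentioned above might look like this (a sketch only; storing results in the trial's run_metadata under a "results" key, indexed by arm name, is an illustrative convention of mine, not an Ax requirement):

class LookupMetric(Metric):
    def fetch_trial_data(self, trial):
        records = []
        for arm_name in trial.arms_by_name:
            # Results recorded earlier, e.g. via trial.update_run_metadata(...).
            result = trial.run_metadata["results"][arm_name]
            records.append({
                "arm_name": arm_name,
                "metric_name": self.name,
                "mean": result["mean"],
                "sem": result["sem"],
                "trial_index": trial.index,
            })
        return Data(df=pd.DataFrame.from_records(records))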

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 10 (7 by maintainers)

Top GitHub Comments

1 reaction
Balandat commented, Jul 24, 2021

So I don’t think this will work unless you create the appropriate arms on the experiment. Otherwise there is no way for the model to associate the results in a Data object with parameters (the Data object does not contain the arm parameterizations). My suggestion would be to create a trial with a custom GeneratorRun that contains the arm with the parametrization that ended up being tested, and then also add the data with the results for that arm.

@lena-kashtelyan does this make sense?
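A minimal sketch of that suggestion, assuming an existing experiment object and an externally observed (mean, sem) for the configuration that was actually deployed (the parameter values and the "booth" metric name are placeholders):

import pandas as pd

from ax import Arm, Data, GeneratorRun

# Wrap the parametrization that actually shipped in a custom GeneratorRun.
deployed_arm = Arm(parameters={"x1": 1.0, "x2": 3.0})
trial = experiment.new_trial(generator_run=GeneratorRun(arms=[deployed_arm]))
trial.mark_running(no_runner_required=True)

# Attach the externally observed result for that arm, in Ax's standard
# Data format (arm_name / metric_name / mean / sem / trial_index).
experiment.attach_data(Data(df=pd.DataFrame.from_records([{
    "arm_name": trial.arm.name,
    "metric_name": "booth",
    "mean": 4.2,
    "sem": 0.0,
    "trial_index": trial.index,
}])))
trial.mark_completed()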

1 reaction
Balandat commented, Jul 24, 2021

On a high level, it seems to me that in your setting what you want is not a trial: instead, take an arm returned by a GeneratorRun, give it to whatever decision pipeline ends up launching some configuration, and then add the configuration that actually launched as a trial with a custom arm. Otherwise the experiment will have a bunch of proposals that were never actually evaluated (and hence shouldn’t really be trials in the first place). Does this make sense?
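A rough outline of that flow (a sketch only; Models.BOTORCH, the experiment object, and the deployment_pipeline hand-off are illustrative assumptions, not part of the original comment):

from ax import Arm, GeneratorRun
from ax.modelbridge.registry import Models

# Generate a candidate without committing it to the experiment as a trial.
model = Models.BOTORCH(experiment=experiment, data=experiment.fetch_data())
proposal = model.gen(n=1).arms[0].parameters

# Hand the proposal to the external decision pipeline, which may modify it.
deployed_params = deployment_pipeline(proposal)  # hypothetical external call

# Only the configuration that actually launched becomes a trial.
trial = experiment.new_trial(
    generator_run=GeneratorRun(arms=[Arm(parameters=deployed_params)])
)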
