
Integration with RayTune is currently somewhat broken (and so is the tutorial)

See original GitHub issue

I’m trying to set up a Sobol + EB generation strategy and running into some issues. I’m pretty sure I’m missing something very basic, so your help will be much appreciated.

Here’s a snippet of what I’m trying to do:

# imports assumed for this snippet (module paths as of Ax 0.1.x and the
# Ray release current at the time)
from typing import Any, AnyStr, Dict

import ray
from ray import tune
from ray.tune import track
from ray.tune.suggest.ax import AxSearch

from ax.service.ax_client import AxClient
from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models


# helpers for generation strategy creation
def _get_sobol_step(num_trials: int):
    # quasi-random initialization step; requires no observation data
    return GenerationStep(
        model=Models.SOBOL,
        num_trials=num_trials,
        min_trials_observed=3,
        max_parallelism=None,
        enforce_num_trials=True,
        model_kwargs={"deduplicate": True, "seed": 123},
        index=0,
    )


def _get_eb_step():
    # model-based step; can only be fit once completed trials exist
    return GenerationStep(
        model=Models.EMPIRICAL_BAYES_THOMPSON,
        num_trials=-1,
        min_trials_observed=0,
        max_parallelism=None,
        enforce_num_trials=True,
        model_kwargs={"transforms": TS_trans},  # TS_trans: transform list defined elsewhere
        index=1,
    )


def gen_eb_strategy(init_trials: int = 5):
    sobol, eb = _get_sobol_step(init_trials), _get_eb_step()
    return GenerationStrategy(steps=[sobol, eb], name="SOBOL+EB")


# create AxClient and experiment
eb_strategy = gen_eb_strategy(init_trials=5)
ax_client = AxClient(enforce_sequential_optimization=False, generation_strategy=eb_strategy)

ax_client.create_experiment(
    name="my_experiment",
    parameters=[
        {"name": "lr", "type": "choice", "values": [0.01, 0.02, 0.05], "value_type": "float"},
        {"name": "momentum", "type": "choice", "values": [0, 0.1, 0.2, 0.3, 0.5, 0.8], "value_type": "float"},
        # a lot of additional choice parameters
    ],
    objective_name="acc",
    overwrite_existing_experiment=True
)

ray.init(...)

# define train + evaluation method
def train_character_cnn(parameters: Dict[AnyStr, Any]):
    # training + evaluation logic here
    acc = ...

    # report the metric back to Tune (old ray.tune.track API)
    track.log(acc=acc)

# run experiment
tune.run(
    train_character_cnn,
    num_samples=num_samples,  # num_samples defined elsewhere
    search_alg=AxSearch(ax_client),
    verbose=1,  # set to 1 for status updates, 2 to also see trial results
    resources_per_trial={"gpu": 0.25},
    local_dir="/tmp",
)

I’m getting this exception:

ValueError: StandardizeY transform requires non-empty observation data.

I’m not sure where I should plug in the observation data, and what data exactly I should add.
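
For context, the error comes from the model-based step (EB or GPEI in this case) trying to fit before any completed trials exist, so the StandardizeY transform has nothing to standardize. Observation data isn’t something to plug in up front; it accumulates as trials are completed. Below is a minimal sketch, without Ray, of how that normally happens through the AxClient service API. It assumes the helpers from the snippet above (including the user-defined TS_trans) are in scope; fake_eval and sanity_client are made-up names standing in for the real training job and client.

# Minimal sketch (no Ray): observation data accumulates as trials are completed.
# Only after the Sobol step's min_trials_observed completed trials can the
# strategy move on to the model-based step. fake_eval and sanity_client are
# hypothetical stand-ins.
import random

def fake_eval(params: Dict[AnyStr, Any]) -> float:
    return random.random()  # placeholder for real training + evaluation

sanity_client = AxClient(generation_strategy=gen_eb_strategy(init_trials=5))
sanity_client.create_experiment(
    name="sanity_check",
    parameters=[
        {"name": "lr", "type": "choice", "values": [0.01, 0.02, 0.05], "value_type": "float"},
    ],
    objective_name="acc",
)

for _ in range(5):  # the Sobol step; no observation data needed yet
    params, trial_index = sanity_client.get_next_trial()
    # complete_trial is what attaches observation data to the experiment
    sanity_client.complete_trial(trial_index=trial_index, raw_data={"acc": (fake_eval(params), 0.0)})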

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 7 (5 by maintainers)

Top GitHub Comments

1 reaction
ilcord commented on Jul 20, 2020

@lena-kashtelyan, just out of curiosity 😃 I’m pretty sure it’ll come in handy at some point. In the end, I ended up using the default GPEI strategy.

I just want to clarify the issue, since I tried to reproduce it and found that it only happens with a slight modification to the code I posted: when using a hybrid generation strategy (e.g. Sobol + GPEI), the first call to tune.run() always uses the first generation step, no matter what value num_samples has. To switch to the second step, I have to call tune.run() again.

If I use only GPEI, or only EB, I get the ValueError: StandardizeY transform requires non-empty observation data error, since those models require preliminary ObservationData.

Sorry for the inconsistency.
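
To make that concrete, a rough sketch of the two-phase behaviour/workaround described above (pre-0.1.18) might look like the following; the num_samples values are arbitrary, and the snippet reuses train_character_cnn and ax_client from the original code.

# Pre-0.1.18: the first run only consumes the first generation step, so a
# second tune.run() against the same AxClient is needed to reach the
# model-based step. num_samples values here are arbitrary.
tune.run(
    train_character_cnn,
    num_samples=5,  # first run: Sobol step only
    search_alg=AxSearch(ax_client),
    resources_per_trial={"gpu": 0.25},
)

tune.run(
    train_character_cnn,
    num_samples=20,  # second run: ax_client now holds observation data
    search_alg=AxSearch(ax_client),  # same AxClient, so the Ax experiment continues
    resources_per_trial={"gpu": 0.25},
)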

0 reactions
lena-kashtelyan commented on Nov 4, 2020

The fix is now in the latest stable, 0.1.18.
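
For reference, Ax is published on PyPI as ax-platform, so a quick version check (sketch below) is enough to confirm the fix is installed.

# Confirm the installed Ax includes the fix (PyPI package name: ax-platform)
import pkg_resources
print(pkg_resources.get_distribution("ax-platform").version)  # expect >= 0.1.18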

Read more comments on GitHub

Top Results From Across the Web

  • Tutorial: Scalable model training with Ray Tune - YouTube
  • [Tune] Wandb video recording integration is broken - Ray
    My theory is that this has something to do with each rllib worker loading its own libraries. I'm not yet sure what the...
  • Beyond Grid Search: Hypercharge Hyperparameter Tuning for ...
    Is Ray Tune the way to go for hyperparameter tuning? Provisionally, yes. Ray provides integration between the underlying ML (e.g. XGBoost), the ...
  • Lecture 6: MLOps Infrastructure & Tooling
    Notebook “IDE” is primitive, as they have no integration, no linting, and no code-style correction. Data scientists are not software engineers, and thus, ...
  • Learning Ray
    Integration with Ray Tune ... Right now your Ray cluster doesn't do much, but that's about to change. ... And the slightly more ...
