To structure our discussion, I suggest using this issue to collect a wishlist of how we would like to use Sacred from a bird's-eye perspective, and editing it to reflect the evolving consensus that (hopefully) emerges from the discussion below. To get things started, I can think of three basic workflows that I would love for Sacred to support. Maybe this is also a good place to think about how to integrate stages and superexperiments.

Interactive (Jupyter Notebook)

Manually control the stages of the experiment/run in an interactive environment. Most suitable for exploration and low-complexity experiments. Something like:

# -----------------------------------------------------------
# initialization
ex = Experiment('my_jupyter_experiment')
ex.add_observer(FileStorageObserver('tmp'))
# -----------------------------------------------------------
# Config and Functions
cfg = Configuration()
cfg.learn_rate = 0.01
cfg.hidden_sizes = [100, 100]
cfg.batch_size = 32

@ex.capture
def get_dataset(batch_size):
    ...
# -----------------------------------------------------------
# run experiment
ex.start()            # finalizes the config and starts the observers
data = get_dataset()  # call captured functions
for i in range(1000):
    loss = ...        # do something
    ex.log_metric('loss', loss)  # log metrics, artifacts, etc.

ex.stop(result=loss)  # stops the observers and records the result
# -----------------------------------------------------------
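
For comparison, something close to this already works in today's Sacred when the experiment is created with interactive=True, at the cost of wrapping the run in a main function instead of controlling it step by step. A minimal sketch (recent Sacred versions; older ones need FileStorageObserver.create('tmp')):

from sacred import Experiment
from sacred.observers import FileStorageObserver

ex = Experiment('my_jupyter_experiment', interactive=True)
ex.observers.append(FileStorageObserver('tmp'))

@ex.config
def config():
    learn_rate = 0.01
    batch_size = 32

@ex.main
def run(_run, learn_rate):
    final_loss = 0.0
    for i in range(1000):
        final_loss = 1.0 / (i + 1)              # placeholder training step
        _run.log_scalar('loss', final_loss, i)  # metrics go through the run object
    return final_loss

r = ex.run()  # starts the observers, executes run(), stops the experiment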

Scripting

Using a main script that contains most of the experiment and is run from the command line. This is the current main workflow, most suitable for low- to medium-complexity experiments.

from sacred import Experiment

ex = Experiment('my_experiment_script')

@ex.config
def config():
    learn_rate = 0.01   # local variables defined here become config entries
    ...

@ex.capture
def get_dataset(batch_size):
    ...

@ex.automain  # defines the main command and automatically starts/stops the experiment
def main():
    final_loss = ...   # do stuff, log metrics, etc.
    return final_loss
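
Since @ex.automain also sets up Sacred's command-line interface, the script above already accepts config updates from the shell, and the built-in print_config command shows the resulting configuration:

python my_experiment_script.py print_config
python my_experiment_script.py with learn_rate=0.05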

Object Oriented

This is a long-standing feature request (#193): define an experiment as a class to improve modularity (and to support frameworks like ray.tune). It should cater to medium- to high-complexity experiments. A very tentative API sketch:

class MyExperiment(Experiment):
    def __init__(self, config=None):  # context-config to deal with updates and nesting
        super().__init__(config)
        self.learn_rate = 0.001       # using self to store config improves IDE support
        ...

    def get_dataset(self):  # no capturing needed, since self gives access to the config anyway
        return ...

    @main   # mark main functions / commands
    def main_function(self):
        ...   # do stuff
        return final_loss

ex = MyExperiment(config=get_commandline_updates())
ex.run()
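
get_commandline_updates() is not an existing Sacred function; a hypothetical stand-in that collects key=value arguments into a config-update dict might look like this (purely illustrative):

import sys

def get_commandline_updates():
    """Hypothetical helper: turn key=value command-line arguments into a dict."""
    updates = {}
    for arg in sys.argv[1:]:
        key, sep, value = arg.partition('=')
        if sep:  # keep only arguments of the form key=value
            updates[key] = value
    return updates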

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 27 (6 by maintainers)

Top GitHub Comments

2 reactions
thequilo commented, Sep 22, 2019

For “reinterpret the concept of calling a command”: it would be useful for any kind of parallel experiment, e.g., MPI.
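
One possible pattern, sketched here with mpi4py (an assumption; Sacred has no special MPI support today): every rank executes the experiment, but only rank 0 attaches observers, so the run is recorded exactly once:

from mpi4py import MPI
from sacred import Experiment
from sacred.observers import FileStorageObserver

ex = Experiment('mpi_experiment')

# Only rank 0 records the run; the other ranks compute but stay silent.
if MPI.COMM_WORLD.Get_rank() == 0:
    ex.observers.append(FileStorageObserver('tmp'))

@ex.automain
def main():
    local_result = float(MPI.COMM_WORLD.Get_rank())  # placeholder per-rank work
    # reduce() returns the sum on rank 0 and None on all other ranks
    return MPI.COMM_WORLD.reduce(local_result, op=MPI.SUM, root=0)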

1 reaction
flukeskywalker commented, Sep 27, 2019

We may have to agree to disagree that this is the proper abstraction 😃 though it may certainly have its merits!

As hyperparameter optimization gets more sophisticated, it seems more reasonable to me to consider each hyperparameter optimization run an experiment in itself that should be fully reproducible. From that perspective, each trial need not be a separate experiment with its own independent observers. Instead, observations should ideally be made at the level of the hyperparameter optimizer. The current ray.tune design may be a bad fit for this, though; I'm not sure.
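
To make that concrete, here is a minimal sketch of “the search is the experiment” using today's Sacred API, with random search standing in for a real optimizer (objective and names are illustrative):

import random
from sacred import Experiment

ex = Experiment('random_search')

@ex.config
def config():
    n_trials = 20

@ex.automain
def search(_run, n_trials):
    best = float('inf')
    for trial in range(n_trials):
        learn_rate = 10 ** random.uniform(-4, -1)  # sample a hyperparameter
        loss = (learn_rate - 0.01) ** 2            # placeholder objective
        _run.log_scalar('trial.loss', loss, trial) # observed at the optimizer level
        best = min(best, loss)
    return best

Since Sacred seeds Python's global random module at the start of each run, the whole search stays reproducible from the single recorded seed, without giving each trial its own experiment and observers.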
