Omniboard-ish frontend

Hi there. First, I want to thank you for such a good tool as Optuna.

I have been testing Optuna, and it seems that it needs a frontend. In the case of Sacred, I found Omniboard to be a nice option.

I tried merging the two with the following code:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

from argparse import Namespace
import logging
import sys

import chainer
from chainer import configuration
from chainer import functions as F
from chainer import links as L
from chainer.dataset import convert

import optuna

from docopt import docopt
from sacred import Experiment
from sacred.arg_parser import get_config_updates
from sacred.utils import ensure_wellformed_argv

class MLP(chainer.Chain):

    def __init__(self, n_units, n_out):
        super(MLP, self).__init__()
        with self.init_scope():
            # the size of the inputs to each layer will be inferred
            self.l1 = L.Linear(None, n_units)  # n_in -> n_units
            self.l2 = L.Linear(None, n_units)  # n_units -> n_units
            self.l3 = L.Linear(None, n_out)  # n_units -> n_out

    def forward(self, x):
        h1 = F.relu(self.l1(x))
        h2 = F.relu(self.l2(h1))
        return self.l3(h2)


ex = Experiment()
logging.basicConfig(
    format='%(asctime)s (%(module)s:%(lineno)d) %(levelname)s: %(message)s',
    level=logging.INFO,
)
ex.logger = logging.getLogger('my_custom_logger')


@ex.config
def config_eval():
    batchsize = 100
    epoch = 20
    device = -1
    out = 'result'
    resume = None
    unit = 1000  # default; Optuna searches the range [500, 1000]
    trial = None  # the Optuna Trial object, injected via config_updates


@ex.main
def main_routine(_run, _config, _log):
    # Copy the Sacred config into an argparse-style Namespace.
    args = Namespace()
    for _value in [x for x in _config if '__' not in x]:
        if 'List' in str(type(_config[_value])):
            setattr(args, _value, list(_config[_value]))
        else:
            setattr(args, _value, _config[_value])

    trial = args.trial

    train_set, test_set = chainer.datasets.get_mnist()
    device = chainer.get_device(args.device)
    _log.info('Device: {}'.format(device))
    _log.info('# unit: {}'.format(args.unit))
    _log.info('# Minibatch-size: {}'.format(args.batchsize))
    _log.info('# epoch: {}'.format(args.epoch))

    model = L.Classifier(MLP(args.unit, 10))
    model.to_device(device)
    device.use()

    optimizer = chainer.optimizers.Adam()
    optimizer.setup(model)
    train_count = len(train_set)
    test_count = len(test_set)

    train_iter = chainer.iterators.SerialIterator(train_set, args.batchsize)
    test_iter = chainer.iterators.SerialIterator(test_set, args.batchsize,
                                                 repeat=False, shuffle=False)

    sum_accuracy = 0
    sum_loss = 0

    while train_iter.epoch < args.epoch:
        batch = train_iter.next()
        x, t = convert.concat_examples(batch, device)
        optimizer.update(model, x, t)
        sum_loss += float(model.loss.array) * len(t)
        sum_accuracy += float(model.accuracy.array) * len(t)

        if train_iter.is_new_epoch:
            epoch = train_iter.epoch
            train_loss = sum_loss / train_count
            train_acc = sum_accuracy / train_count
            _run.log_scalar(f"trial{trial.number}.training.loss", train_loss, epoch)
            _run.log_scalar(f"trial{trial.number}.training.accuracy", train_acc, epoch)

            # evaluation
            sum_accuracy = 0
            sum_loss = 0
            # Enable evaluation mode.
            with configuration.using_config('train', False):
                # This is optional but can reduce computational overhead.
                with chainer.using_config('enable_backprop', False):
                    for batch in test_iter:
                        x, t = convert.concat_examples(batch, device)
                        loss = model(x, t)
                        sum_loss += float(loss.array) * len(t)
                        sum_accuracy += float(
                            model.accuracy.array) * len(t)

            test_iter.reset()
            valid_loss = sum_loss / test_count
            valid_acc = sum_accuracy / test_count
            _run.log_scalar(f"trial{trial.number}.validation.loss", valid_loss, epoch)
            _run.log_scalar(f"trial{trial.number}.validation.accuracy", valid_acc, epoch)
            _log.info(f'epoch {epoch}\ttraining loss:{train_loss:.04f}\ttraining acc:{train_acc:.04f}\t' + 
                f'valid loss:{valid_loss:.04f}\tvalid acc:{valid_acc:.04f}')
            sum_accuracy = 0
            sum_loss = 0
            # Report the intermediate value so the pruner has data to act on;
            # without trial.report(), should_prune() never fires.
            trial.report(valid_acc, epoch)
            if trial.should_prune():
                break
    return valid_acc


def objective(trial, cmd_name, config_updates, named_configs, args):
    units = trial.suggest_int('unit', 500, 1000)
    config_updates['unit'] = units
    config_updates['trial'] = trial
    # Run the Sacred experiment with the sampled hyperparameter; the Trial
    # object rides along in the config so the run can report back to Optuna.
    _run = ex.run(cmd_name,
                  config_updates,
                  named_configs,
                  info={},
                  meta_info={},
                  options=args)
    return _run.result


def process_argv(argv):
    # Re-implement the relevant part of Sacred's command-line parsing so
    # that ex.run() can be driven from inside the Optuna objective.
    argv = ensure_wellformed_argv(argv)
    short_usage, usage, internal_usage = ex.get_usage()
    args = docopt(internal_usage, [str(a) for a in argv[1:]], help=False)

    cmd_name = args.get("COMMAND") or ex.default_command
    config_updates, named_configs = get_config_updates(args["UPDATE"])

    err = ex._check_command(cmd_name)
    if not args["help"] and err:
        print(short_usage)
        print(err)
        sys.exit(1)

    if ex._handle_help(args, usage):
        sys.exit()
    return cmd_name, config_updates, named_configs, args


def main(argv):
    cmd_name, config_updates, named_configs, args = process_argv(argv)
    study = optuna.create_study(direction='maximize',
                                pruner=optuna.pruners.MedianPruner())
    study.optimize(lambda trial: objective(trial, cmd_name, config_updates,
                                           named_configs, args), n_trials=5)


if __name__ == "__main__":
    main(sys.argv)
    sys.exit(0)

It runs without any problem.

So, I am wondering whether there is any plan to support a NoSQL database (MongoDB), so the results can be stored in a format that Omniboard understands.
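In case it is useful as a workaround: Sacred itself can already write runs to MongoDB through its MongoObserver, which is the storage Omniboard reads from. A minimal sketch, assuming a local MongoDB instance and a placeholder database name (on older Sacred versions, MongoObserver.create(...) may be required instead of calling the constructor directly):

from sacred.observers import MongoObserver

# Attach before ex.run(); each run is then stored in MongoDB,
# where Omniboard can pick it up.
ex.observers.append(MongoObserver(
    url='mongodb://localhost:27017',  # placeholder connection string
    db_name='omniboard_demo',         # placeholder database name
))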

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Reactions: 2
  • Comments: 8 (4 by maintainers)
Top GitHub Comments

1 reaction
jakubczakon commented, Nov 29, 2019

OK, thanks @toshihikoyanase. I will monitor that issue.

1 reaction
jakubczakon commented, Nov 28, 2019

Touching on what @toshihikoyanase said.

I’ve created an integration with neptune.ml via a callback: https://neptune-contrib.readthedocs.io/user_guide/monitoring/optuna.html
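For context, the hook that integration builds on is Optuna’s own callback mechanism: in recent Optuna versions, study.optimize accepts a callbacks argument, a list of callables invoked after every finished trial with the study and the trial. A minimal sketch of a hand-rolled logging callback (the linked integration does the Neptune-specific work for you; the toy objective here is just for illustration):

import optuna

def log_trial(study, trial):
    # Called by Optuna after each trial finishes; a real integration
    # would forward these values to a dashboard instead of printing.
    print(f"trial {trial.number}: value={trial.value}, params={trial.params}")

study = optuna.create_study(direction='maximize')
study.optimize(lambda t: t.suggest_int('unit', 500, 1000) / 1000.0,
               n_trials=3,
               callbacks=[log_trial])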

Also, I see you are using Sacred, and there is a NeptuneObserver that lets you have this dashboard experience without changing anything in your Sacred-integrated codebase: https://neptune-contrib.readthedocs.io/examples/observer_sacred.html
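Roughly, hooking that observer in looks like the sketch below. The import path and parameter names follow the linked neptune-contrib docs; treat them as assumptions, since that API may have changed:

from neptunecontrib.monitoring.sacred import NeptuneObserver

# Attach alongside (or instead of) a MongoObserver; runs then show up
# in the Neptune dashboard with no other changes to the Sacred code.
ex.observers.append(NeptuneObserver(
    api_token='ANONYMOUS',                 # placeholder credentials
    project_name='shared/sacred-example',  # placeholder project
))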

I think with those two you can have both your individual runs and the meta-runs (the HPO search) tracked. I hope this helps, @Fhrozen.
