
Any idea of adjusting task contributions in multi-task training

See original GitHub issue

Hi,

I am using Graph to train a multi-task CNN. Intuitively, it makes sense to force training to focus on a main task, but I don't think this simple feature is supported yet. I guess I can introduce task_weight into the current Graph model, but please correct me if I am wrong.

class Graph(Model, containers.Graph):
    # Modified excerpt of Graph.compile (Keras 0.x). task_weights is the proposed
    # addition: a dictionary mapping each output name to a scalar weight for its
    # loss term (defaulting to 1.0 when an output is not listed).
    def compile(self, optimizer, loss, task_weights=None, theano_mode=None):
        # loss is a dictionary mapping output name to loss functions
        if task_weights is None:
            task_weights = {}
        ys = []
        ys_train = []
        ys_test = []
        weights = []
        train_loss = 0.
        test_loss = 0.
        for output_name in self.output_order:
            loss_fn = loss[output_name]
            output = self.outputs[output_name]
            y_train = output.get_output(True)
            y_test = output.get_output(False)
            y = T.zeros_like(y_test)
            ys.append(y)
            ys_train.append(y_train)
            ys_test.append(y_test)

            if hasattr(output, "get_output_mask"):
                mask = output.get_output_mask()
            else:
                mask = None

            weight = T.ones_like(y_test)
            weights.append(weight)
            weighted_loss = weighted_objective(objectives.get(loss_fn))
            # <-- begin of using task weight -->
            task_weight = task_weights.get(output_name, 1.)
            train_loss += weighted_loss(y, y_train, weight, mask) * task_weight
            test_loss += weighted_loss(y, y_test, weight, mask) * task_weight
            # <-- end of using task weight -->
        train_loss.name = 'train_loss'
        test_loss.name = 'test_loss'
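
For illustration, here is how such a modified compile might be called. The task_weights argument, the output names, and the 0.2 factor are all hypothetical; the Graph.compile shown above only accepts optimizer, loss, and theano_mode.

# Hypothetical usage: down-weight the auxiliary task so training focuses on the
# main task. Assumes `graph` is a built Graph model with these two output names.
graph.compile(optimizer='rmsprop',
              loss={'main_output': 'mse', 'aux_output': 'mse'},
              task_weights={'main_output': 1.0, 'aux_output': 0.2})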

I’ve also seen papers claiming it is important to stop different tasks at different iterations, but I am not sure how to support this feature in a systematic way (sure, I can manually stop training, remove the task I want to stop, reload the weights from the previously trained model, and launch training again on the remaining tasks).
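
One possible way to support that more systematically, sketched here purely as an assumption built on the task-weight idea above, is to keep each task's weight in a Theano shared variable so it can be driven to zero mid-training without recompiling:

import numpy as np
import theano

# Hypothetical per-task weights held in shared variables; inside compile, each
# loss term would be multiplied by task_weights[output_name] instead of a constant.
task_weights = {name: theano.shared(np.float32(1.0), name=name + '_weight')
                for name in ['main_output', 'aux_output']}

# Later (e.g. from a training callback), switch a task off at its chosen iteration:
def stop_task(name):
    task_weights[name].set_value(np.float32(0.0))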

Issue Analytics

  • State: closed
  • Created: 8 years ago
  • Comments: 5 (3 by maintainers)

Top GitHub Comments

1 reaction
billzhengC commented, Sep 3, 2015

I guess a parameter like “weighted_loss” could be added to graph.compile, which makes it easier to adjust contributions.

0 reactions
fchollet commented, Sep 3, 2015

@rex-yue-wu then just create a wrapper function f = lambda x: 0.2 * mse(x)…
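
In code, that suggestion amounts to wrapping an existing objective in a function that scales it, then passing the wrapper in the loss dictionary. The output names and the 0.2 factor below are only illustrative, and this assumes the loss dictionary accepts callables as well as strings:

from keras import objectives

# Wrapper that scales the MSE contribution of one output by 0.2.
def scaled_mse(y_true, y_pred):
    return 0.2 * objectives.mean_squared_error(y_true, y_pred)

graph.compile(optimizer='rmsprop',
              loss={'main_output': 'mse', 'aux_output': scaled_mse})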

Read more comments on GitHub >

