Any ideas on adjusting task contributions in multi-task training?
Hi,
I am using Graph to train a multi-task CNN. Intuitively, it makes sense to force training to focus on a main task, but I don't think this simple feature is supported yet. I guess I could introduce a task_weight into the current Graph model as below, but please correct me if I am wrong.
class Graph(Model, containers.Graph):
    def compile(self, optimizer, loss, theano_mode=None):
        # loss is a dictionary mapping output name to loss functions
        ys = []
        ys_train = []
        ys_test = []
        weights = []
        train_loss = 0.
        test_loss = 0.
        for output_name in self.output_order:
            loss_fn = loss[output_name]
            output = self.outputs[output_name]
            y_train = output.get_output(True)
            y_test = output.get_output(False)
            y = T.zeros_like(y_test)
            ys.append(y)
            ys_train.append(y_train)
            ys_test.append(y_test)
            if hasattr(output, "get_output_mask"):
                mask = output.get_output_mask()
            else:
                mask = None
            weight = T.ones_like(y_test)
            weights.append(weight)
            weighted_loss = weighted_objective(objectives.get(loss_fn))
            # <-- begin of using task weight -->
            # task_weight would be a per-output scalar passed in to compile()
            train_loss += weighted_loss(y, y_train, weight, mask) * task_weight
            test_loss += weighted_loss(y, y_test, weight, mask) * task_weight
            # <-- end of using task weight -->
        train_loss.name = 'train_loss'
        test_loss.name = 'test_loss'
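Thinking about it more, I guess I could get the same effect without touching compile at all: the loss dict goes through objectives.get, which (assuming it behaves like the other get helpers in Keras) passes a plain callable through unchanged, so each task's objective can be wrapped to return a scaled value. A minimal sketch, assuming a Graph named graph has already been built with two hypothetical outputs 'main_output' and 'aux_output':

from keras.objectives import categorical_crossentropy, mean_squared_error

def scaled_loss(base_loss, task_weight):
    # Wrap a Keras objective so its value is multiplied by a fixed scalar weight.
    def loss(y_true, y_pred):
        return task_weight * base_loss(y_true, y_pred)
    return loss

# `graph` is assumed to be an already-built keras.models.Graph with these output names.
graph.compile('rmsprop', {
    'main_output': scaled_loss(categorical_crossentropy, 1.0),  # main task, full weight
    'aux_output':  scaled_loss(mean_squared_error, 0.2),        # auxiliary task, down-weighted
})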
I’ve also seen papers claiming it is important to stop different tasks at different iterations, but I am not sure how to support this feature in a systematic way (of course I can manually stop training, remove the task I want to stop, reload the weights from the previously trained model, and launch training on the remaining tasks again).
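The closest lightweight alternative I can think of is to recompile with the stopped task's objective scaled to zero and keep fitting; if I read the code right, compile only rebuilds the Theano functions and does not reset layer weights. A rough sketch, reusing the scaled_loss helper from above and the same hypothetical output names (train_data stands in for the usual dict mapping input/output names to arrays):

# Phase 1: train both tasks together for a while.
graph.compile('rmsprop', {
    'main_output': scaled_loss(categorical_crossentropy, 1.0),
    'aux_output':  scaled_loss(mean_squared_error, 0.2),
})
graph.fit(train_data, nb_epoch=10)

# Phase 2: "stop" the auxiliary task by zeroing its contribution, then continue training.
graph.compile('rmsprop', {
    'main_output': scaled_loss(categorical_crossentropy, 1.0),
    'aux_output':  scaled_loss(mean_squared_error, 0.0),  # no gradient from the auxiliary task
})
graph.fit(train_data, nb_epoch=10)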
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I guess a parameter like “weighted_loss” could be added to graph.compile, which makes it easier to adjust contributions.
@rex-yue-wu then just create a wrapper function, e.g. lambda x: 0.2 * mse(x)…
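(For reference, a Keras objective takes (y_true, y_pred), so such a wrapper would look roughly like the sketch below, with 0.2 standing in for whatever task weight is wanted.)

from keras.objectives import mean_squared_error as mse

def scaled_mse(y_true, y_pred):
    # Same value as mse, scaled by a fixed task weight of 0.2.
    return 0.2 * mse(y_true, y_pred)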