how to make optuna multi-objective study work (with allennlp)?
Hello,
I followed the allennlp example in this repo here and it worked just fine. But when I tried `optuna.multi_objective.create_study`, I got this error:
```
[W 2021-04-12 21:21:58,411] Trial 0 failed because of the following error: TypeError("object of type 'float' has no len()")
Traceback (most recent call last):
  File "/xx/lib/python3.7/site-packages/optuna/_optimize.py", line 217, in _run_trial
    value_or_values = func(trial)
  File "/xx/lib/python3.7/site-packages/optuna/multi_objective/study.py", line 317, in mo_objective
    values = objective(mo_trial)
  File "objective.py", line 165, in objective
    metrics = trainer.train()
  File "/xx/allennlp2.0/allennlp/allennlp/training/trainer.py", line 930, in train
    metrics, epoch = self._try_train()
  File "/xx/allennlp2.0/allennlp/allennlp/training/trainer.py", line 1052, in _try_train
    callback.on_epoch(self, metrics=metrics, epoch=epoch, is_primary=self._primary)
  File "/xx/lib/python3.7/site-packages/optuna/integration/allennlp.py", line 496, in on_epoch
    self._trial.report(float(value), epoch)
  File "/xx/lib/python3.7/site-packages/optuna/multi_objective/trial.py", line 133, in report
    if len(values) != self._n_objectives:
TypeError: object of type 'float' has no len()
```
It seems to me that the problem comes from the `validation_metric` in the trainer:
```python
TARGET_METRIC = ["accuracy_doc", "accuracy_block", "accuracy_turn"]

trainer = GradientDescentTrainer(
    model=model,
    optimizer=optimizer,
    data_loader=train_data_loader,
    validation_data_loader=validation_data_loader,
    validation_metric=["+" + str(m) for m in TARGET_METRIC],  # metrics are summed to make the is_best decision
    patience=PATIENCE,  # `patience=None` since it could conflict with AllenNLPPruningCallback
    num_epochs=EPOCHS,
    cuda_device=CUDA_DEVICE,
    serialization_dir=serialization_dir,
    callbacks=[AllenNLPPruningCallback(trial, "validation_accuracy_doc")],
)
```
According to the AllenNLP docs, `validation_metric` can be a string or a list of strings; if it is a list, the metrics are summed up. I am wondering if that is why only a single float value gets passed to `values` here:

```
File "/xx/lib/python3.7/site-packages/optuna/multi_objective/trial.py", line 133, in report
    if len(values) != self._n_objectives:
TypeError: object of type 'float' has no len()
```
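If I read the traceback correctly, the multi-objective `report()` checks `len(values)` against the number of objectives, so the single float reported by `AllenNLPPruningCallback` can never satisfy it. A tiny self-contained illustration of the same shape mismatch (a simplified stand-in, not Optuna's actual code):

```python
N_OBJECTIVES = 2

def report(values, step):
    # Simplified stand-in for MultiObjectiveTrial.report() from the traceback above.
    if len(values) != N_OBJECTIVES:
        raise ValueError("the number of reported values must match the number of objectives")

report([0.91, 0.35], step=0)  # fine: one value per objective
report(0.91, step=0)          # TypeError: object of type 'float' has no len()
```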
I then searched AllenNLP for a trainer that doesn't sum the validation metrics, but in vain; maybe I'm not looking in the right place.
Do you have any idea how to solve it? Thank you in advance!
@chuyuanli
Here is a multi-objective example based on the revised Optuna example: no Optuna pruner, and some changes to the data size, number of epochs, number of trials, and a few other settings. Two metrics are used: maximize accuracy and minimize loss. `patience` is `None` because the two validation metrics move in opposite directions. At the end we try to determine which trial has the best accuracy (max) and the best loss (min).
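Roughly, the study can be put together like this (a minimal sketch, not the exact code from the example: `build_trainer`, the hyperparameter names, and the metric keys are placeholders for the usual AllenNLP setup, and it uses the stable `optuna.create_study(directions=...)` API rather than the experimental `optuna.multi_objective` module):

```python
from typing import Tuple

import optuna


def objective(trial: optuna.Trial) -> Tuple[float, float]:
    # Placeholder hyperparameters and a placeholder builder; in practice this is
    # the usual AllenNLP model/data-loader/trainer construction.
    lr = trial.suggest_float("lr", 1e-5, 1e-3, log=True)
    dropout = trial.suggest_float("dropout", 0.0, 0.5)
    trainer = build_trainer(lr=lr, dropout=dropout)  # hypothetical helper

    metrics = trainer.train()
    # Return one value per objective. Note there is no AllenNLPPruningCallback
    # attached, since pruning is not supported for multi-objective studies.
    return metrics["best_validation_accuracy"], metrics["best_validation_loss"]


study = optuna.create_study(directions=["maximize", "minimize"])
study.optimize(objective, n_trials=20)
```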
Result
Since the accuracy is the same across all trials, trial 1 is the best one because it has the lowest loss.
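To pick that trial programmatically (a sketch, assuming the study was created with `directions=["maximize", "minimize"]` as above), you can look at the Pareto-optimal trials and take the one with the lowest loss:

```python
# study.best_trials contains the Pareto-optimal trials of a multi-objective study;
# trial.values follows the order of the declared directions: [accuracy, loss].
best = min(study.best_trials, key=lambda t: t.values[1])
print(f"Best trial: #{best.number}, accuracy={best.values[0]:.4f}, loss={best.values[1]:.4f}")
```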
[note] You may be able to run multi-objective optimization if you remove `callbacks` from your trainer. Apart from a pruner, `AllenNLPExecutor` could be extended for multi-objective optimization. I think we can support multi-objective optimization by making `AllenNLPExecutor.run()` return a list of target metrics. If you are interested in extending `AllenNLPExecutor`, I'd love to review your PR!
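Purely as a hypothetical sketch of that idea (none of these names exist in the integration today; `_train_and_collect_metrics` and `_target_metrics` are made up for illustration), the extension could look roughly like this:

```python
from typing import List

from optuna.integration import AllenNLPExecutor


class MultiObjectiveAllenNLPExecutor(AllenNLPExecutor):  # hypothetical subclass
    def run(self) -> List[float]:
        # _train_and_collect_metrics() stands in for the existing training logic,
        # which already produces AllenNLP's metrics dict; the only change is
        # returning one value per target metric instead of a single float.
        metrics = self._train_and_collect_metrics()
        return [metrics[name] for name in self._target_metrics]
```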