TensorBoardLogger should be able to add metric names in hparams
🚀 Feature
TensorBoard allows investigating the effect of hyperparameters in the HParams tab. Unfortunately, the `log_hyperparams` function in `TensorBoardLogger` cannot add any information about which of the logged values is actually a "metric" that can be used for such a comparison.
Motivation
I would like to use the built-in hparams module of TensorBoard to evaluate my training runs.
Pitch
PyTorch-Lightning should let me define my model's metrics in a way that allows any logger to derive which metrics may be used for hyperparameter comparison, along with any other characteristics defined for them.
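As an illustration only, here is a minimal sketch of what such an abstraction could look like. All names below (`MetricAwareLogger`, `register_metric`, `metric_dict`) are hypothetical and are not part of the Lightning API; the point is just that the user declares metric names once and the logger can later forward them to TensorBoard's hparams plugin.

```python
# Hypothetical sketch -- none of these classes or methods exist in
# PyTorch-Lightning. The idea: the user registers metric names once, and
# the logger can later pass them to TensorBoard's hparams plugin
# alongside the hyperparameters.

class MetricAwareLogger:
    """Toy logger that tracks which logged keys count as hparams metrics."""

    def __init__(self):
        self._metric_names = set()
        self.logged = {}

    def register_metric(self, name):
        # Mark `name` as a metric usable for hyperparameter comparison.
        self._metric_names.add(name)

    def log(self, name, value):
        self.logged[name] = value

    def metric_dict(self):
        # Only registered names would reach the hparams plugin; other
        # logged values (e.g. the current learning rate) are excluded.
        return {k: v for k, v in self.logged.items()
                if k in self._metric_names}


logger = MetricAwareLogger()
logger.register_metric("val_loss")
logger.log("val_loss", 0.23)
logger.log("lr", 1e-3)  # logged, but not a comparison metric
```

The design point is that the model (or user) declares metrics once, and every logger implementation can derive the hparams metadata from that declaration instead of exposing backend-specific hooks.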
Additional context
The `hparams` function of the summary module takes the following parameters:

```python
def hparams(hparam_dict=None, metric_dict=None):
```

`metric_dict` is essentially a dictionary mapping metric names to values; the values themselves are discarded by the function, which only records the names.
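To make those semantics concrete, here is a simplified stand-in (not the real `torch.utils.tensorboard.summary.hparams` implementation, which builds protobuf summaries) showing that only the metric *names* from `metric_dict` matter at registration time:

```python
# Simplified stand-in for torch.utils.tensorboard.summary.hparams -- for
# illustration only. This toy version just returns what would be
# registered with the hparams plugin.

def hparams(hparam_dict=None, metric_dict=None):
    hparam_dict = hparam_dict or {}
    metric_dict = metric_dict or {}
    return {
        # Hyperparameter names *and* values are recorded ...
        "hparams": dict(hparam_dict),
        # ... but for metrics only the names are registered; the actual
        # metric values are logged separately as ordinary scalar
        # summaries during training.
        "metric_names": sorted(metric_dict),
    }


result = hparams({"lr": 1e-3, "batch_size": 32}, {"val_loss": 0.23})
```

This is exactly why a logger needs to know the metric names up front: the hparams registration happens independently of (and typically before) the scalar values being logged.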
Issue Analytics
- State:
- Created: 4 years ago
- Reactions: 18
- Comments: 9 (7 by maintainers)
I think if Lightning offers such a logger mechanism, it should offer an abstraction to enable this functionality. I'd be fine with having a `register_metric` function in `TensorBoardLogger`, but I don't want to rely on implementation details of the underlying logging mechanism.

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.