
Feature: `push_to_hub` function for metrics.

See original GitHub issue

Add a function to easily push the result of a metric to the Hub, similar to what the Trainer in transformers does.

Use case: a user evaluates a model on a dataset. If the model is a transformers model and they use the Trainer, they can easily push the results during training. However, if they are not using a transformers model or the Trainer, or want to push results after training, they have to do it manually. It would therefore be nice to be able to push the results of an evaluation to the Hub in a model- and framework-agnostic way, so they are added to the model card.

The workflow would be roughly the following:

from datasets import load_metric

metric = load_metric("lvwerra/my_metric")
metric.add_batch(predictions=some_predictions, references=some_references)
metric.compute()
# proposed new call: attach the computed result to the model's card on the Hub
metric.push_to_hub(name="my_new_metric", model="lvwerra/my_model", dataset="lvwerra/my_dataset")

This would add the metric result to the metadata of the model repository's README.md so that it is displayed on the model card.
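
For context, that metadata is the model-index block in the README's YAML front matter, which the Hub renders as evaluation results. Below is a minimal sketch of what such an update could look like manually today, assuming huggingface_hub's metadata_update utility; the task type, dataset, metric name, and value are made up for illustration:

from huggingface_hub import metadata_update

# Illustrative values only: the task type, dataset, metric name, and score are made up.
metadata = {
    "model-index": [{
        "name": "my_model",
        "results": [{
            "task": {"type": "text-classification"},
            "dataset": {"type": "lvwerra/my_dataset", "name": "lvwerra/my_dataset"},
            "metrics": [{"type": "my_new_metric", "value": 0.91}],
        }],
    }]
}

# Merges the model-index entry into the YAML front matter of the repo's README.md.
metadata_update("lvwerra/my_model", metadata, overwrite=True)

A push_to_hub for metrics would essentially automate building and pushing this block.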

cc @osanseviero @lhoestq

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 5 (5 by maintainers)

Top GitHub Comments

2 reactions
lvwerra commented, Apr 7, 2022

Same, maybe we can add the second option for more “power users” and use it as well under the hood of evaluate.push_to_hub. What do you think?

1 reaction
osanseviero commented, Apr 7, 2022

I like the evaluate.push_to_hub(metric, to="lvwerra/my_model") path better.
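
To make that shape concrete, here is a purely hypothetical sketch of such a module-level helper; the function name, its parameters, and the model-index details are assumptions for illustration, not actual datasets or evaluate API:

from huggingface_hub import metadata_update

# Hypothetical sketch only, not real library API.
def push_to_hub(metric, to, dataset, task_type="text-classification"):
    """Compute the metric's buffered predictions/references and write the result
    into the model-index metadata of the model repo `to`."""
    result = metric.compute()  # e.g. {"accuracy": 0.91}
    metadata = {
        "model-index": [{
            "name": to.split("/")[-1],
            "results": [{
                "task": {"type": task_type},
                "dataset": {"type": dataset, "name": dataset},
                "metrics": [{"type": k, "value": float(v)} for k, v in result.items()],
            }],
        }]
    }
    # overwrite=True lets a repeated evaluation update an existing entry.
    return metadata_update(to, metadata, overwrite=True)

Under the hood this just builds and pushes the same model-index metadata shown earlier, which could also be exposed separately for "power users", as suggested in the other comment.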

Read more comments on GitHub >

Top Results From Across the Web

  • Creating and sharing a new evaluation - Hugging Face: All evaluation modules, be they metrics, comparisons, or measurements, live on the Hub in a Space (see for example Accuracy).
  • FineTuned Model ( Compute Metrics & Push to Hub ) · Issue #24: Is there any way to get the compute metrics using simpleT5? Also, what about the possibility to push the model to the huggingface...
  • The Push to Hub API (PyTorch) - YouTube: Easily share your fine-tuned models on the Hugging Face Hub using the push to hub API. This video is part of the Hugging Face...
  • How to use Huggingface Datasets? A Guide to Features like Streaming, Metrics, Map, Concatenate, …: ... libraries' built-in dataset loading functionality, but Huggingface did a top-notch job.
  • Document AI: Fine-tuning LayoutLM for document ... - philschmid: We can display all our classes by inspecting the features of our dataset ... import evaluate import numpy as np # load seqeval...
