Feature: `push_to_hub` function for metrics.
Add a function to easily push the result of a metric to the Hub, similar to the `Trainer` in `transformers`.
Use-case: a user evaluates a model on a dataset. If they use a `transformers` model with the `Trainer`, they can easily push results during training. However, if they are not using a `transformers` model or the `Trainer`, or want to push results after training, they have to do it manually. Thus it would be nice to be able to push metric results to the Hub in a model- and framework-agnostic way, so they can be added to a model's card.
The workflow would be roughly the following:
```python
metric = load_metric("lvwerra/my_metric")
metric.add(some_predictions, some_references)
metric.compute()
metric.push_to_hub(name="my_new_metric", model="lvwerra/my_model", dataset="lvwerra/my_dataset")
```
This adds the metric result to the metadata of the model's README.md so it is displayed in the model card.
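As a reference point, here is a rough sketch of what such a push could do under the hood, using `huggingface_hub`'s `metadata_update` to merge an evaluation result into the YAML header of the model repo's README.md. The repo id, task, metric name, and value below are illustrative assumptions, not part of the proposal.

```python
# Minimal sketch (assumption): write an evaluation result into a model card's
# metadata via huggingface_hub. Repo id, task, metric, and value are placeholders.
from huggingface_hub import metadata_update

metadata = {
    "model-index": [
        {
            "name": "my_model",  # display name shown in the model card
            "results": [
                {
                    "task": {"type": "text-classification"},
                    "dataset": {"name": "my_dataset", "type": "lvwerra/my_dataset"},
                    "metrics": [
                        {"type": "accuracy", "name": "Accuracy", "value": 0.92}
                    ],
                }
            ],
        }
    ]
}

# Merges the metadata into the YAML header of README.md in the model repo.
metadata_update("lvwerra/my_model", metadata, overwrite=True)
```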
Top GitHub Comments
Same, maybe we can add the second option for more "power users" and use it as well under the hood of `evaluate.push_to_hub`. What do you think?

I like the `evaluate.push_to_hub(metric, to="lvwerra/my_model")` path better.
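For clarity, the two call shapes being compared in the comments above are, side by side (values taken from the examples in this issue, for illustration only):

```python
# Option 1 (original proposal): method on the metric object
metric.push_to_hub(name="my_new_metric", model="lvwerra/my_model", dataset="lvwerra/my_dataset")

# Option 2 (discussed above): module-level function that takes the metric
evaluate.push_to_hub(metric, to="lvwerra/my_model")
```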