
push_to_hub_keras should support multiple models

See original GitHub issue

It’s a minor UX improvement for push_to_hub_keras.

Some applications, like RL or non-transformers encoder-decoder networks (e.g. a CNN encoder with an RNN decoder), have multiple models instead of a single one.

We could pass a list to model, get the model names, save each model under its own folder, and push. (We’d first check with isinstance whether we received a list or a single model.)

The alternative is to git pull every time we push with push_to_hub_keras (which is cumbersome in a notebook) and call push_to_hub_keras repeatedly for every model.
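The dispatch described above could be sketched roughly as follows. This is an illustrative mock-up, not the actual huggingface_hub implementation; the function name `save_models` and the use of each model's `.name` attribute for the subfolder are assumptions for the sake of the example:

```python
import os


def save_models(model, repo_dir):
    """Save a single model at the repo root, or a list of models
    into per-model subfolders named after each model's .name."""
    if isinstance(model, (list, tuple)):
        paths = []
        for m in model:
            # Each model gets its own folder inside the repo.
            subdir = os.path.join(repo_dir, m.name)
            os.makedirs(subdir, exist_ok=True)
            m.save(subdir)
            paths.append(subdir)
        return paths
    # Single-model case: unchanged behavior, save at the repo root.
    model.save(repo_dir)
    return [repo_dir]
```

With this shape, `push_to_hub_keras(model=[encoder, decoder], ...)` would only need one push for the whole repo, instead of one push (and pull) per model.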

cc: @osanseviero

Issue Analytics

  • State: open
  • Created: 2 years ago
  • Reactions: 1
  • Comments: 7 (7 by maintainers)

Top GitHub Comments

2 reactions
osanseviero commented, Mar 3, 2022

I feel only the following sections are needed:

  1. Model description (for the full thing)
  2. Intended uses and limitations
  3. Training and eval data
  4. Training procedure for model A
  5. Model plot for model A
  6. Training procedure for model B
  7. Model plot for model B

Maybe before 4, we can have a section saying “this repository consists of X models”. Then before 4 and before 6, have a heading with “Further information for model A|B”. WDYT? I feel we could do this by refactoring the readme function a bit.
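The card layout proposed in the comment above could be mocked up like this. The function name `build_card` and the heading wording are illustrative assumptions, not huggingface_hub's actual readme generator:

```python
def build_card(model_names):
    """Assemble a multi-model card: shared sections first, then a
    'further information' block per model, as proposed above."""
    lines = [
        "# Model description",
        "# Intended uses and limitations",
        "# Training and eval data",
        f"This repository consists of {len(model_names)} models.",
    ]
    for name in model_names:
        lines += [
            f"# Further information for model {name}",
            f"## Training procedure for model {name}",
            f"## Model plot for model {name}",
        ]
    return "\n".join(lines)
```

For two models A and B this yields exactly the seven sections listed above, with the repository-wide note inserted before the per-model blocks.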

1 reaction
nateraw commented, Mar 3, 2022

Sure, I like this idea 😄 . Would be very helpful especially in the case of GANs. I’d really be curious to know if other folks want this feature.

As for the model card, I think we can either do what @osanseviero is describing, or just leave it as-is. I’m guessing this would only be used when models are trained together. People probably shouldn’t be dumping unrelated models in the same repo, and we don’t want to encourage that.
