[FR] Use normal dict instead of `**model_kwargs` in `_from_pretrained`
See original GitHub issue

The `_from_pretrained` function of the modeling module has an argument called `**model_kwargs`. See here:

IMO this should be changed. The problem I see is that it "collects" all keyword arguments of the function that are not explicitly defined. Later it only handles those assigned to `head_params`. It thereby swallows all other keyword arguments, which might have been passed because of a typo, for example. That would be a bug that is very hard to find.
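A minimal sketch of the problem (simplified, hypothetical signature; not the actual SetFit implementation): a catch-all `**model_kwargs` silently absorbs a mistyped keyword instead of raising an error.

```python
def _from_pretrained(model_id, head_params=None, **model_kwargs):
    # Only head_params is ever consumed; anything else that lands in
    # model_kwargs is silently ignored.
    head_params = head_params or {}
    return {"model_id": model_id, "head_params": head_params}

# Typo: "head_parms" instead of "head_params". No error is raised --
# the argument is swallowed into **model_kwargs and never used.
result = _from_pretrained("my-model", head_parms={"C": 2.0})
print(result["head_params"])  # {} -- the typo went unnoticed
```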
My suggestion is to just expect a dict called ~~`logistic_regression_kwargs`~~ `head_kwargs` (so just a plain dict, no `**`).
This can then just be passed like so:

```python
clf = LogisticRegression(**head_kwargs)
```

and

```python
model_head = SetFitHead(**head_kwargs)
```
What do you think?
Issue Analytics

- State:
- Created: 10 months ago
- Comments: 6 (4 by maintainers)
Top GitHub Comments
Hi @PhilipMay,

I remember @lewtun raised a related problem here in #149. So I think maybe explicitly exposed arguments will be preferred in the future.

But if we put that aside for a bit, would it be more general to use `head_params` instead of `logistic_regression_kwargs`? Since it's for two different versions (`sklearn` and `pytorch`) of heads.

Ahh I see. Very nice to have a 2nd maintainer. 😃 Many thanks.