[Flax] from_pretrained does not consider the passed dtype
When loading a Flax model with from_pretrained, the dtype argument is not used: the weights keep the dtype of the saved checkpoint. So if you do:
import jax.numpy as jnp
from transformers import FlaxGPT2ForCausalLM

model = FlaxGPT2ForCausalLM.from_pretrained("gpt2", dtype=jnp.dtype("bfloat16"))
# check the dtype of one of the params
model.params["transformer"]["wpe"]["embedding"].dtype
# => dtype("float32")
We should probably cast the weights to self.dtype.
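A minimal sketch of what that cast could look like (hypothetical; cast_floating_to is an illustrative name, not the actual transformers implementation). Only floating-point leaves are cast, so integer parameters are left untouched:

import jax
import jax.numpy as jnp

def cast_floating_to(params, dtype):
    # Cast only floating-point leaves of the pytree; leave integer/bool leaves untouched.
    def conditional_cast(x):
        if jnp.issubdtype(x.dtype, jnp.floating):
            return x.astype(dtype)
        return x
    return jax.tree_map(conditional_cast, params)

# from_pretrained could then apply: params = cast_floating_to(params, self.dtype)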
As a workaround for bf16, one could manually cast the weights with:
def to_bf16(t):
    # Cast every float32 leaf in the parameter pytree to bfloat16.
    return jax.tree_map(lambda x: x.astype(jnp.bfloat16) if x.dtype == jnp.float32 else x, t)

model.params = to_bf16(model.params)
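Re-checking the parameter from above confirms the cast took effect:

model.params["transformer"]["wpe"]["embedding"].dtype
# => dtype(bfloat16)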
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I think it’s fine to manually port weights to bfloat16 if you want to. In general, all Flax layers accept a dtype attribute for cases where it’s safe to do intermediate computation in bfloat16, and you can set dtype=bfloat16 for those layers. Keeping the parameters themselves in bfloat16 should only be necessary if the model is huge and the parameters can’t fit in device memory, from what I know. I think it’s tricky to get that right and requires careful attention to which parameters are safe to keep in bfloat16, but I don’t have too much personal context here. I can ask others if that helps.
So I’m first curious whether it’s indeed necessary to keep parameters as bfloat16 in this case, and if so, why.
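For illustration, a minimal sketch of that distinction with flax.linen (assuming a recent Flax where param_dtype defaults to float32; the layer and shapes are arbitrary): the dtype attribute controls the computation dtype, while the parameters themselves stay in float32.

import flax.linen as nn
import jax
import jax.numpy as jnp

# Dense layer that computes in bfloat16 but stores its parameters in float32.
layer = nn.Dense(features=8, dtype=jnp.bfloat16)
params = layer.init(jax.random.PRNGKey(0), jnp.ones((1, 4)))

print(params["params"]["kernel"].dtype)             # float32: storage dtype
print(layer.apply(params, jnp.ones((1, 4))).dtype)  # bfloat16: computation dtype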
This will soon be taken care of by @patil-suraj 😃