Loading FlaxHybridCLIP trained model
Environment info
- transformers version: 4.9.0.dev0
- Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (tpu)
- Jax version: 0.2.16
- JaxLib version: 0.1.68
- Using GPU in script?: N/A
- Using distributed or parallel set-up in script?: N/A
Models:
- FlaxHybridCLIP
Information
I am not sure how to load a trained FlaxHybridCLIP model from a folder. We trained using this.
I tried FlaxHybridCLIP.from_text_vision_pretrained(PATH_TRAINED_MODEL, PATH_TRAINED_MODEL), but got the following error:
File "<stdin>", line 1, in <module>
File "/home/vinid/transformers/examples/research_projects/jax-projects/hybrid_clip/modeling_hybrid_clip.py", line 333, in from_text_vision_pretrained
text_config = AutoConfig.from_pretrained(text_model_name_or_path)
File "/home/raphaelp/transformers/src/transformers/models/auto/configuration_auto.py", line 452, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
KeyError: 'hybrid-clip'
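The failure comes from the AutoConfig dispatch in the stack trace above: from_text_vision_pretrained resolves each path through AutoConfig.from_pretrained, which looks up the model_type field of config.json in a registry of architectures shipped with the library. The research-project type "hybrid-clip" is not in that registry. A minimal sketch of the mechanism (CONFIG_MAPPING entries here are a simplified, hypothetical stand-in for the real mapping in configuration_auto.py):

```python
# Simplified stand-in for transformers' CONFIG_MAPPING; the real mapping
# only contains model types shipped with the library, and research-project
# classes such as FlaxHybridCLIP are not registered in it.
CONFIG_MAPPING = {"clip": "CLIPConfig", "bert": "BertConfig"}

def config_class_for(config_dict):
    # Mirrors the dispatch in configuration_auto.py: look up the config
    # class by the "model_type" value stored in the checkpoint's config.json.
    return CONFIG_MAPPING[config_dict["model_type"]]

# A FlaxHybridCLIP checkpoint's config.json declares model_type
# "hybrid-clip", which AutoConfig has never heard of, hence the KeyError.
try:
    config_class_for({"model_type": "hybrid-clip"})
except KeyError as err:
    print("KeyError:", err)  # prints: KeyError: 'hybrid-clip'
```

So the error is not about the saved weights at all; it is the config lookup that fails before any weights are read.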
The folder (PATH_TRAINED_MODEL) contains the following two files:
config.json
flax_model.msgpack
Thank you 😃 😃
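A plausible fix (an assumption, not something confirmed in this thread): from_text_vision_pretrained is meant for assembling a new hybrid model from two separate pretrained text and vision checkpoints, while an already-trained FlaxHybridCLIP folder should load with the generic FlaxHybridCLIP.from_pretrained(PATH_TRAINED_MODEL), since the class appears to subclass FlaxPreTrainedModel and therefore already knows its own config class, never consulting AutoConfig's registry. The sketch below mocks that difference; MockFlaxHybridCLIP is illustrative only, not the real class:

```python
import json
import os
import tempfile

class MockFlaxHybridCLIP:
    """Illustrative stand-in: a concrete model class reads its own
    config.json directly, so no model_type registry lookup is needed."""

    @classmethod
    def from_pretrained(cls, path):
        with open(os.path.join(path, "config.json")) as f:
            config = json.load(f)
        # The subclass already knows it is the "hybrid-clip" model,
        # unlike AutoConfig, which must dispatch on this string.
        assert config["model_type"] == "hybrid-clip"
        return cls()

# Simulate a trained-model folder containing only config.json
# (the real folder also holds flax_model.msgpack with the weights).
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "config.json"), "w") as f:
        json.dump({"model_type": "hybrid-clip"}, f)
    model = MockFlaxHybridCLIP.from_pretrained(d)
    print(type(model).__name__)  # prints: MockFlaxHybridCLIP
```

Under that assumption, model = FlaxHybridCLIP.from_pretrained(PATH_TRAINED_MODEL) would be the one-line replacement for the failing call.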
Issue Analytics
- Created: 2 years ago
- Comments: 8 (4 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I think this works. Or at least it doesn’t throw an error:
Hi @timothybrooks
Could you open a new issue for FlaxEncoderDecoderModel and post the stack trace and code to reproduce there? Thanks!