
Loading FlaxHybridCLIP trained model


Environment info

  • transformers version: 4.9.0.dev0
  • Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29
  • Python version: 3.8.10
  • PyTorch version (GPU?): 1.9.0+cu102 (False)
  • Tensorflow version (GPU?): 2.5.0 (False)
  • Flax version (CPU?/GPU?/TPU?): 0.3.4 (tpu)
  • Jax version: 0.2.16
  • JaxLib version: 0.1.68
  • Using GPU in script?: N/A
  • Using distributed or parallel set-up in script?: N/A

Models:

  • FlaxHybridCLIP

Information

I am not sure how to load a trained FlaxHybridCLIP model from a folder. We trained using this.

I tried FlaxHybridCLIP.from_text_vision_pretrained(PATH_TRAINED_MODEL, PATH_TRAINED_MODEL), but got the following error:

  File "<stdin>", line 1, in <module>
  File "/home/vinid/transformers/examples/research_projects/jax-projects/hybrid_clip/modeling_hybrid_clip.py", line 333, in from_text_vision_pretrained
    text_config = AutoConfig.from_pretrained(text_model_name_or_path)
  File "/home/raphaelp/transformers/src/transformers/models/auto/configuration_auto.py", line 452, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
KeyError: 'hybrid-clip'
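
If I read the traceback correctly, the KeyError happens because the saved config.json has "model_type": "hybrid-clip", which is not registered in AutoConfig's CONFIG_MAPPING. from_text_vision_pretrained appears to be meant for combining two standalone checkpoints whose types AutoConfig does recognize, e.g. (model names below are only placeholders):

from modeling_hybrid_clip import FlaxHybridCLIP

# Works because 'roberta' and 'clip' are both registered model types
model = FlaxHybridCLIP.from_text_vision_pretrained(
    "roberta-base",                   # text encoder (placeholder)
    "openai/clip-vit-base-patch32",   # vision encoder (placeholder)
)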

The folder (PATH_TRAINED_MODEL) contains the following two files:

  • config.json
  • flax_model.msgpack
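
For reference, those two files are what Flax's save_pretrained writes, so the folder should be a complete checkpoint; a sketch, with a hypothetical folder name:

model.save_pretrained("./trained_model")  # writes config.json + flax_model.msgpack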

Thank you 😃 😃

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 8 (4 by maintainers)

Top GitHub Comments

1 reaction
4rtemi5 commented, Jul 7, 2021

I think this works. Or at least it doesn’t throw an error:

import json
from configuration_hybrid_clip import HybridCLIPConfig  # assumes the hybrid_clip example dir is importable
from modeling_hybrid_clip import FlaxHybridCLIP

# AutoConfig does not know 'hybrid-clip', so rebuild the config by hand
with open(path_to_config, 'r') as f:
    config_dict = json.load(f)
config_dict['vision_config']['model_type'] = 'clip'
config = HybridCLIPConfig(text_config_dict=config_dict['text_config'], vision_config_dict=config_dict['vision_config'])
model = FlaxHybridCLIP.from_pretrained(path_to_msgpack, config=config)
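
For context, the two paths in the snippet above would point into the saved training folder; something like this (folder name is hypothetical):

import os

PATH_TRAINED_MODEL = "./trained_model"  # wherever the training script saved to
path_to_config = os.path.join(PATH_TRAINED_MODEL, "config.json")
path_to_msgpack = PATH_TRAINED_MODEL    # from_pretrained also accepts the folder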

0 reactions
patil-suraj commented, Sep 23, 2021

Hi @timothybrooks

Could you open a new issue for FlaxEncoderDecoderModel and post the stack trace and code to reproduce there? Thanks!

