Add support for open_clip
Feature request
Add open_clip (https://github.com/mlfoundations/open_clip) support to Transformers.
Motivation
open_clip has released ViT-B-32, ViT-B/16, ViT-B/16+, and ViT-L/14 checkpoints trained on LAION-400M and LAION-2B. These are highly relevant models - matching and sometimes surpassing the OpenAI models on benchmarks - but they are not yet compatible with Transformers.
Also, a ViT-H model is due to be released soon, which will be the state-of-the-art open-source CLIP (OpenAI never open-sourced the ViT-H used to train DALL-E 2), making it even more relevant to support OpenCLIP models and code.
cc @rwightman
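For context, this is roughly how these checkpoints are consumed with open_clip today (a minimal sketch; the `laion400m_e32` pretrained tag and the input image are illustrative assumptions, not something specified in this issue):

```python
# Illustrative sketch: loading an open_clip checkpoint and scoring image-text pairs.
import torch
import open_clip
from PIL import Image

# "laion400m_e32" is one of the published LAION-400M pretrained tags (assumption)
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion400m_e32"
)

image = preprocess(Image.open("cat.png")).unsqueeze(0)  # example input
text = open_clip.tokenize(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize embeddings and compute image-text similarity scores
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)
```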
Top GitHub Comments
Agreed with @LysandreJik. The change of activation function does not require a new model by itself (since you can set the right one in the config), but if you anticipate other modeling tweaks, a new architecture definitely makes sense.
Hey @rwightman, excited to hear you'd like to contribute Open CLIP to `transformers`!

The implementation of `CLIP` selects its activation function from the `ACT2FN` dictionary: https://github.com/huggingface/transformers/blob/a26c752353a127ba8e4728413806f545718a8d78/src/transformers/models/clip/modeling_clip.py#L281

If this is the only change necessary, then the checkpoints should be loadable directly in the existing architecture by specifying the appropriate `hidden_act` configuration option.

Do you have in mind what other changes might be needed down the road to support additional checkpoints? I would personally be open to having an `OpenCLIP` model architecture that could host both current and upcoming checkpoints, even while unaware of the changes that might be needed in the future (and therefore with modeling code that would be a bit more dynamic than others), but I'm pinging @patrickvonplaten and @sgugger for their opinion as well.
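To illustrate the `ACT2FN` point above, here is a hedged sketch of reusing the existing architecture: open_clip's ViT models use plain GELU rather than OpenAI's `quick_gelu`, so overriding `hidden_act` in the config may be the only modeling change required (the checkpoint weight conversion itself is assumed to happen separately):

```python
# Sketch, not a confirmed recipe: express an open_clip-style model with the
# existing CLIPModel by overriding the activation that CLIPMLP looks up in ACT2FN.
from transformers import CLIPConfig, CLIPModel

config = CLIPConfig.from_pretrained("openai/clip-vit-base-patch32")
# open_clip uses standard GELU instead of OpenAI's quick_gelu (default in this config)
config.text_config.hidden_act = "gelu"
config.vision_config.hidden_act = "gelu"

model = CLIPModel(config)  # randomly initialized; converted weights would be loaded here
```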