Failure to convert the Funnel Transformer TensorFlow checkpoint to the transformers (PyTorch) version when using the official script
Environment info
- transformers version: 3.5.1
- Platform: CentOS
- Python version: 3.7
- PyTorch version (GPU?): 1.6.0
- Tensorflow version (GPU?): 2.3.2
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
Information
Model I am using (Bert, XLNet …): Funnel Transformer
To reproduce
Steps to reproduce the behavior:
1. Use the script convert_funnel_original_tf_checkpoint_to_pytorch.py @sgugger @LysandreJik
2. The following error is raised (a minimal sketch of what the script does follows the traceback, for reference):
Traceback (most recent call last):
File "run_pretraining.py", line 158, in <module>
convert_tf_checkpoint_to_pytorch(args.tf_checkpoint_path, args.config_file, args.pytorch_dump_path)
File "run_pretraining.py", line 40, in convert_tf_checkpoint_to_pytorch
load_tf_weights_in_funnel(model, config, tf_checkpoint_path)
File "run_pretraining.py", line 122, in load_tf_weights_in_funnel
pointer = getattr(pointer, _layer_map[m_name])
File "/root/miniconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 772, in __getattr__
type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'FunnelForPreTraining' object has no attribute 'embeddings'
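For reference, the conversion boils down to roughly the following (a minimal sketch; the checkpoint and output paths are placeholders, and the config file must be a transformers-style FunnelConfig JSON):

```python
# Minimal sketch of what convert_funnel_original_tf_checkpoint_to_pytorch.py does.
# "config.json", the checkpoint path and the output path are placeholders.
import torch
from transformers import FunnelConfig, FunnelForPreTraining, load_tf_weights_in_funnel

config = FunnelConfig.from_json_file("config.json")  # must be a transformers FunnelConfig
model = FunnelForPreTraining(config)

# Copy the TensorFlow checkpoint weights into the PyTorch model.
load_tf_weights_in_funnel(model, config, "path/to/tf_checkpoint")

# Save the converted weights.
torch.save(model.state_dict(), "pytorch_model.bin")
```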
Expected behavior

I got the same problem as you, and I managed to convert the checkpoint by using the config file from the Hugging Face model hub. If you use the 6-6-6 block layout, use this one https://huggingface.co/funnel-transformer/intermediate/raw/main/config.json and change the vocab size.
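A rough sketch of that workaround (assuming the intermediate hub config matches your 6-6-6 block layout; the vocabulary size below is a placeholder to replace with your own):

```python
# Sketch: pull the hub config for the 6-6-6 layout, adjust the vocab size,
# and save a config.json that the conversion script can consume.
from transformers import FunnelConfig

config = FunnelConfig.from_pretrained("funnel-transformer/intermediate")
config.vocab_size = 30522  # placeholder: set this to your tokenizer's vocabulary size
config.save_pretrained("converted_config")  # writes converted_config/config.json
```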
@RyanHuangNLP I have asked you before to give us the command you launch, the environment you use, and the content of the config file you are using. There is no point tagging me further on this issue with a vague message if you are not willing to share that information, as I cannot investigate a bug I cannot reproduce. As I also said before, and as @NLP33 indicated, the script only supports config files corresponding to a config created with FunnelConfig from transformers. It does not support the original config files from the original repo.
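In other words, the config.json passed to the script should come from a FunnelConfig object, for example (a sketch; the block sizes and other values are assumptions to adjust to your pretraining setup):

```python
# Sketch: build a transformers FunnelConfig matching the pretrained architecture.
# All values here are assumptions; adjust vocab_size, block_sizes, etc. to your model.
from transformers import FunnelConfig

config = FunnelConfig(
    vocab_size=30522,
    block_sizes=[6, 6, 6],  # e.g. the 6-6-6 layout mentioned above
    d_model=768,
    n_head=12,
)
config.save_pretrained("converted_config")  # produces a config.json the script accepts
```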