
Cannot load studio-ousia/luke-base for AutoModelForTokenClassification

See original GitHub issue

Environment info

  • transformers version: 4.6.0.dev0 (pulled from repo)
  • Platform: 3
  • Python version: 3.7.10
  • PyTorch version (GPU?): 1.7.0 (no)
  • Tensorflow version (GPU?): 2.4.1 (no)
  • Using GPU in script?: No
  • Using distributed or parallel set-up in script?: No

Who can help

Information

I tried to load LUKE's weights with AutoModelForTokenClassification, intending to fine-tune the model for NER, but it failed with a configuration error.

To reproduce

Steps to reproduce the behavior:

from transformers import AutoModelForTokenClassification, TrainingArguments, Trainer
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("studio-ousia/luke-base")  # Successful
model = AutoModel.from_pretrained("studio-ousia/luke-base")  # Successful
model = AutoModelForTokenClassification.from_pretrained("studio-ousia/luke-base", num_labels=39)
Some weights of the model checkpoint at studio-ousia/luke-base were not used when initializing LukeModel: ['embeddings.position_ids']
- This IS expected if you are initializing LukeModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing LukeModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-9-b4efbc1b7796> in <module>
      7 model = AutoModel.from_pretrained("studio-ousia/luke-base")
      8 
----> 9 model = AutoModelForTokenClassification.from_pretrained("studio-ousia/luke-base", num_labels=39)

/opt/conda/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
    381             return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
    382         raise ValueError(
--> 383             f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
    384             f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}."
    385         )

ValueError: Unrecognized configuration class <class 'transformers.models.luke.configuration_luke.LukeConfig'> for this kind of AutoModel: AutoModelForTokenClassification.
Model type should be one of BigBirdConfig, ConvBertConfig, LayoutLMConfig, DistilBertConfig, CamembertConfig, FlaubertConfig, XLMConfig, XLMRobertaConfig, LongformerConfig, RobertaConfig, SqueezeBertConfig, BertConfig, MegatronBertConfig, MobileBertConfig, XLNetConfig, AlbertConfig, ElectraConfig, FunnelConfig, MPNetConfig, DebertaConfig, DebertaV2Config, IBertConfig.
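For context on why the traceback ends this way: each Auto class keeps a mapping from config classes to concrete model classes, and in transformers 4.6.0.dev0 there is no token-classification model registered for LukeConfig, so the lookup falls through to the ValueError. The following is a minimal pure-Python sketch of that dispatch; the classes here are illustrative stand-ins, not the real transformers internals:

```python
# Stand-in config and model classes (illustrative only).
class BertConfig: ...
class LukeConfig: ...

class BertForTokenClassification:
    def __init__(self, config):
        self.config = config

class AutoModelForTokenClassification:
    # Maps a config class to the model class that can be built from it.
    # In transformers 4.6.0, LukeConfig has no entry in the real mapping.
    _model_mapping = {BertConfig: BertForTokenClassification}

    @classmethod
    def from_config(cls, config):
        model_class = cls._model_mapping.get(type(config))
        if model_class is None:
            raise ValueError(
                f"Unrecognized configuration class {type(config)} "
                f"for this kind of AutoModel: {cls.__name__}."
            )
        return model_class(config)

# BertConfig is in the mapping, so dispatch succeeds:
model = AutoModelForTokenClassification.from_config(BertConfig())

# LukeConfig is not, which reproduces the reported ValueError:
try:
    AutoModelForTokenClassification.from_config(LukeConfig())
except ValueError as err:
    print(err)
```

This is why AutoModel.from_pretrained succeeds (LukeModel is registered for the base Auto class) while AutoModelForTokenClassification fails.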

Expected behavior

Successful loading.
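Later transformers releases let you add an entry to an Auto class's mapping yourself via a `register` classmethod, which is one way such "Unrecognized configuration class" errors get resolved without waiting for upstream support. Below is a hedged, self-contained sketch of that registration pattern; all class names are illustrative stand-ins, not the actual transformers implementation:

```python
# Illustrative stand-ins (not the real transformers classes).
class LukeConfig: ...

class LukeForTokenClassification:
    def __init__(self, config, num_labels=2):
        self.config = config
        self.num_labels = num_labels

class AutoModelForTokenClassification:
    _model_mapping = {}

    @classmethod
    def register(cls, config_class, model_class):
        # Adds a config -> model entry to the dispatch mapping,
        # loosely mirroring the register() API of later transformers versions.
        cls._model_mapping[config_class] = model_class

    @classmethod
    def from_config(cls, config, **kwargs):
        model_class = cls._model_mapping.get(type(config))
        if model_class is None:
            raise ValueError(f"Unrecognized configuration class {type(config)}")
        return model_class(config, **kwargs)

# After registering, the lookup that previously failed now succeeds:
AutoModelForTokenClassification.register(LukeConfig, LukeForTokenClassification)
model = AutoModelForTokenClassification.from_config(LukeConfig(), num_labels=39)
```

In transformers 4.6.0 itself, the practical options were the LUKE-specific classes shipped with the model (e.g. LukeForEntitySpanClassification for NER-style span labeling) rather than the generic token-classification Auto class.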

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 7 (4 by maintainers)

Top GitHub Comments

1 reaction
patil-suraj commented, May 10, 2021

0 reactions
ghost commented, Aug 23, 2021

Hi @Sreyan88, were you able to train LUKE on a custom dataset? I am working on the same task and have not made any progress yet. Any help is appreciated. Thanks!
