
`"transformers_version"` is not enforced


System Info

  • transformers version: 4.21.1
  • Platform: macOS-10.16-x86_64-i386-64bit
  • Python version: 3.9.7
  • Huggingface_hub version: 0.8.1
  • PyTorch version (GPU?): 1.9.1 (False)
  • Tensorflow version (GPU?): not installed (NA)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Using GPU in script?: Yes+no
  • Using distributed or parallel set-up in script?: no

Who can help?

This is a general problem with loading pretrained models in the library, not any specific model: @sgugger @stevhliu @patrickvonplaten

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, …)
  • My own task or dataset (give details below)

Reproduction

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Loads successfully under transformers 4.21.1, even though the
# checkpoint's config.json declares "transformers_version": "4.23.1"
tokenizer = AutoTokenizer.from_pretrained("NinedayWang/PolyCoder-2.7B")
model = AutoModelForCausalLM.from_pretrained("NinedayWang/PolyCoder-2.7B")
```
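The version requirement at issue lives in the checkpoint's config.json on the Hub. A minimal sketch of reading that field, using a hypothetical inline excerpt rather than downloading the real file:

```python
import json

# Hypothetical excerpt of the checkpoint's config.json; per this issue,
# the real file for NinedayWang/PolyCoder-2.7B records "4.23.1".
config_json = '{"transformers_version": "4.23.1"}'

required = json.loads(config_json).get("transformers_version")
print(required)  # "4.23.1"
```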

Expected behavior

The model’s config.json specifies that this model requires transformers 4.23.1: https://huggingface.co/NinedayWang/PolyCoder-2.7B/blob/main/config.json#L21

But if a user loads this model with an older version (4.21.1), it still loads silently, with no error or warning.

So I would expect an error instead of a successful load, because with an old version of transformers this model produces incorrect predictions. I would rather be unable to load the checkpoint at all than load it without the required version of the library.

In other words, why do we have the "transformers_version" field in the config.json, if it is not enforced? Thanks!
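Since nothing in `from_pretrained` performs this check (which is what the issue reports), a user-side guard is easy to sketch. The helper below is hypothetical, not part of the transformers API, and for simplicity it ignores pre-release suffixes:

```python
# Sketch of a user-side version guard; compares dotted version strings
# numerically before attempting to load a checkpoint.

def version_tuple(v: str) -> tuple:
    # "4.21.1" -> (4, 21, 1); non-numeric segments are dropped
    return tuple(int(part) for part in v.split(".") if part.isdigit())

def check_required_version(installed: str, required: str) -> None:
    if version_tuple(installed) < version_tuple(required):
        raise RuntimeError(
            f"Checkpoint requires transformers >= {required}, "
            f"but {installed} is installed."
        )

# The issue's scenario: transformers 4.21.1 installed, checkpoint
# saved with 4.23.1 -- this raises instead of loading silently.
try:
    check_required_version("4.21.1", "4.23.1")
except RuntimeError as e:
    print(e)
```

In practice one would read `installed` from `transformers.__version__` and `required` from the loaded config; the point is only that the comparison itself is cheap, which is part of why the reporter expected the library to do it.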

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

sgugger commented on Oct 21, 2022 (1 reaction)

Ah, yet another argument for not adding config arguments like those. Thanks for pointing that out.

urialon commented on Oct 21, 2022

Thanks a lot!
