
run_qa.py crashes at parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))

See original GitHub issue

Environment info

  • transformers version: 4.3.3
  • Platform: Linux
  • Python version: 3.7, 3.8, 3.9 (reproduced on all three)
  • PyTorch version (GPU?): 1.7 (1.8 shows the same behavior)
  • Tensorflow version (GPU?): N/A
  • Using GPU in script?: yes
  • Using distributed or parallel set-up in script?: yes, 2 GPUs

Who can help

@sgugger, @patil-suraj

Information

Model I am using (Bert, XLNet …): bert-base-uncased

The problem arises when using:

  • [x] the official example scripts: (give details below)
  • my own modified scripts: (give details below)

The tasks I am working on is:

  • [x] an official GLUE/SQuAD task: SQuAD 1.0
  • my own task or dataset: (give details below)

To reproduce

Steps to reproduce the behavior:

  1. Install a clean transformers environment
  2. Run the run_qa.py script (SQuAD) with the instructions as specified
  3. Observe the crash: in a freshly created environment with the most recent transformers release, run_qa.py fails with a parser error before training starts.

python run_qa.py --model_name_or_path bert-base-uncased --dataset_name squad --do_train --per_device_train_batch_size 8 --learning_rate 3e-5 --max_seq_length 384 --doc_stride 128 --output_dir output --overwrite_output_dir --cache_dir cache --preprocessing_num_workers 4 --seed 42 --num_train_epochs 1

Traceback (most recent call last):
  File "run_qa.py", line 1095, in <module>
    main()
  File "run_qa.py", line 902, in main
    parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
  File "/home/spacemanidol/miniconda3/envs/sparseml/lib/python3.7/site-packages/transformers/hf_argparser.py", line 52, in __init__
    self._add_dataclass_arguments(dtype)
  File "/home/spacemanidol/miniconda3/envs/sparseml/lib/python3.7/site-packages/transformers/hf_argparser.py", line 93, in _add_dataclass_arguments
    elif hasattr(field.type, "__origin__") and issubclass(field.type.__origin__, List):
  File "/home/spacemanidol/miniconda3/envs/sparseml/lib/python3.7/typing.py", line 721, in __subclasscheck__
    return issubclass(cls, self.__origin__)
TypeError: issubclass() arg 1 must be a class
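For context (an observation about the last traceback frame, not stated in the thread): `field.type.__origin__` is not always a class — for an `Optional[...]` annotation it is `typing.Union` — and `issubclass()` rejects non-class arguments. A minimal stdlib-only sketch of the quirk:

```python
from typing import List, Optional

# An Optional[List[str]] annotation, as used in the argument dataclasses.
field_type = Optional[List[str]]

# On Python 3.7, its __origin__ is typing.Union, which is not a class...
print(field_type.__origin__)

# ...so a check like the one in hf_argparser.py blows up. Exact behavior
# can vary across Python versions; on 3.7 (as in the traceback above) the
# message is "issubclass() arg 1 must be a class".
try:
    issubclass(field_type.__origin__, List)
except TypeError as err:
    print(err)
```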

Expected behavior

Run and produce a BERT-QA model
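For readers unfamiliar with the pattern: HfArgumentParser turns dataclass fields into argparse options. A hypothetical, stripped-down sketch of the idea (stdlib only, not the real implementation; the actual class also special-cases List/Optional annotations, which is where the traceback above originates):

```python
import argparse
from dataclasses import dataclass, fields

# Hypothetical stand-in for one of the script's argument dataclasses.
@dataclass
class ModelArguments:
    model_name_or_path: str = "bert-base-uncased"
    max_seq_length: int = 384

def build_parser(dtype):
    """Turn each dataclass field into an argparse option (str/int only;
    the real HfArgumentParser additionally handles List, Optional, bool...)."""
    parser = argparse.ArgumentParser()
    for f in fields(dtype):
        parser.add_argument(f"--{f.name}", type=f.type, default=f.default)
    return parser

args = build_parser(ModelArguments).parse_args(
    ["--model_name_or_path", "bert-base-uncased", "--max_seq_length", "512"]
)
print(args.max_seq_length)  # 512
```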

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 8 (5 by maintainers)

Top GitHub Comments

1 reaction
spacemanidol commented, Mar 12, 2021

Can confirm this works.

1 reaction
stas00 commented, Mar 9, 2021

no, that was not that error. I tested run_qa.py w/ dataclasses on py38 and it didn’t fail.

the datasets error was: AttributeError: module 'typing' has no attribute '_ClassVar'

https://github.com/huggingface/transformers/issues/8638
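For background (a common cause, not confirmed in this thread): that AttributeError is the usual symptom of the PyPI `dataclasses` backport — written for Python 3.6 — shadowing the stdlib module on Python 3.7+, since the backport references `typing._ClassVar`, which later Pythons renamed. A quick way to check which module Python is actually picking up:

```python
import dataclasses

# The stdlib module (Python 3.7+) lives under .../lib/python3.x/dataclasses.py.
# If this path points into site-packages instead, the 3.6-era backport is
# shadowing it and is worth removing (pip uninstall dataclasses).
print(dataclasses.__file__)

# The genuine module exposes the standard decorators and helpers.
print(hasattr(dataclasses, "dataclass"), hasattr(dataclasses, "field"))
```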


