
TypeError: ElectraForLanguageModelingModel object argument after ** must be a mapping, not Tensor

See original GitHub issue

Describe the bug
When I train an ELECTRA model, I get the error: “TypeError: ElectraForLanguageModelingModel object argument after ** must be a mapping, not Tensor”

To Reproduce

from simpletransformers.language_modeling import LanguageModelingModel
import logging


logging.basicConfig(level=logging.INFO)
transformers_logger = logging.getLogger("transformers")
transformers_logger.setLevel(logging.WARNING)

train_args = {
    "reprocess_input_data": True,
    "overwrite_output_dir": True,
    "vocab_size": 5000,
    "train_batch_size": 16,
    "eval_batch_size": 16,
}

model = LanguageModelingModel('electra', None, args=train_args, train_files="/content/drive/My Drive/dataset_contract/train.txt")

# Mixing standard ELECTRA architectures example
# model = LanguageModelingModel(
#     "electra",
#     None,
#     generator_name="google/electra-small-generator",
#     discriminator_name="google/electra-large-discriminator",
#     args=train_args,
#     train_files="wikitext-2/wiki.train.tokens",
# )

model.train_model("/content/drive/My Drive/dataset_contract/train.txt", eval_file="/content/drive/My Drive/dataset_contract/test.txt")

model.eval_model("/content/drive/My Drive/dataset_contract/test.txt")

Error:


wandb: WARNING W&B installed but not logged in.  Run `wandb login` or set the WANDB_API_KEY env variable.
INFO:simpletransformers.language_modeling.language_modeling_model: Training of None tokenizer complete. Saved to outputs/.
/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py:1321: FutureWarning: The `max_len` attribute has been deprecated and will be removed in a future version, use `model_max_length` instead.
  FutureWarning,
INFO:simpletransformers.language_modeling.language_modeling_model: Training language model from scratch
INFO:simpletransformers.language_modeling.language_modeling_utils: Creating features from dataset file at cache_dir/
100% 11961/11961 [01:47<00:00, 111.20it/s]
100% 144978/144978 [00:02<00:00, 64320.68it/s]
INFO:simpletransformers.language_modeling.language_modeling_utils: Saving features into cached file cache_dir/electra_cached_lm_126_train.txt

INFO:simpletransformers.language_modeling.language_modeling_model: Training started
Epoch 1 of 1: 0% 0/1 [00:00<?, ?it/s]
Running Epoch 0 of 1: 0% 0/9062 [00:00<?, ?it/s]


---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-1-76a0ed631cea> in <module>()
     27 # )
     28 
---> 29 model.train_model("/content/drive/My Drive/dataset_contract/train.txt", eval_file="/content/drive/My Drive/dataset_contract/test.txt")
     30 
     31 model.eval_model("/content/drive/My Drive/dataset_contract/test.txt")

1 frames
/usr/local/lib/python3.6/dist-packages/simpletransformers/language_modeling/language_modeling_model.py in train(self, train_dataset, output_dir, show_running_loss, eval_file, verbose, **kwargs)
    564                 if args.fp16:
    565                     with amp.autocast():
--> 566                         outputs = model(**inputs)
    567                         # model outputs are always tuple in pytorch-transformers (see doc)
    568                         loss = outputs[0]

TypeError: ElectraForLanguageModelingModel object argument after ** must be a mapping, not Tensor
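The error itself comes from Python's `**` unpacking, which requires a mapping (such as a dict) whose key-value pairs become keyword arguments; somewhere in the fp16 code path a bare Tensor is being passed where a dict of inputs is expected. A minimal sketch of the failure mode, using a list as a stand-in for a Tensor:

```python
# The ** operator unpacks a mapping into keyword arguments.
# Passing a non-mapping (here a list, standing in for a Tensor)
# raises exactly this kind of TypeError.
def forward(**inputs):
    return sorted(inputs)

batch = {"input_ids": [1, 2, 3], "attention_mask": [1, 1, 1]}
print(forward(**batch))  # a dict unpacks fine

try:
    forward(**batch["input_ids"])  # not a mapping
except TypeError as err:
    print(err)  # "... argument after ** must be a mapping, not list"
```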


  • Google Colab
  • transformers version: 3.1.0
  • Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
  • Python version: 3.6.9
  • PyTorch version (GPU?): 1.6.0+cu101 (True)
  • Tensorflow version (GPU?): 2.3.0 (True)
  • Using GPU in script?: True

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 5 (1 by maintainers)

Top GitHub Comments

2 reactions
caprone commented, Sep 21, 2020

In the train_args dict, set: "fp16": False
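Applied to the reproduction script above, the workaround is a one-line addition to the training arguments (a sketch; the comment's reasoning, that this skips the failing fp16 branch seen in the traceback, is inferred, not confirmed by the maintainers):

```python
# Disabling mixed precision avoids the model(**inputs) call inside the
# amp.autocast() branch that raises the TypeError in the traceback above.
train_args = {
    "reprocess_input_data": True,
    "overwrite_output_dir": True,
    "vocab_size": 5000,
    "train_batch_size": 16,
    "eval_batch_size": 16,
    "fp16": False,  # workaround: skip the fp16 code path
}
```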

0 reactions
stale[bot] commented, Dec 12, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.


