
TypeError in tensorflow/run_summarization.py

See original GitHub issue

Environment info

  • transformers version: 4.11.0.dev0
  • Platform: Linux-5.4.0-1055-azure-x86_64-with-glibc2.10
  • Python version: 3.8.1
  • PyTorch version (GPU?):
  • Tensorflow version (GPU?): 2.5.0 (Yes)
  • Using GPU in script?: Yes
  • Using distributed or parallel set-up in script?: Distributed

Who can help:

@patrickvonplaten, @patil-suraj, @Rocketknight1

Models: facebook/bart; Datasets: xsum

  • the official example script: run_summarization.py (relative path: examples/tensorflow/summarization)

Steps to reproduce the behavior (note that --max_train_samples is optional):

python run_summarization.py \
  --model_name_or_path facebook/bart-base \
  --dataset_name xsum \
  --dataset_config "3.0.0" \
  --output_dir /tmp/tst-summarization \
  --per_device_train_batch_size 4 \
  --per_device_eval_batch_size 4 \
  --num_train_epochs 3 \
  --do_train \
  --do_eval \
  --max_train_samples 100

Error message

INFO - main - Evaluation… 0%| | 0/2833 [00:01<?, ?it/s]

Traceback (most recent call last):
  File "run_summarization.py", line 663, in <module>
    main()
  File "run_summarization.py", line 639, in main
    generated_tokens = model.generate(**batch)
  File "/mnt/batch/tasks/shared/LS_root/mounts/clusters/pbodigut1/code/Users/pbodigut/transformers/v-4.10/transformers/src/transformers/generation_tf_utils.py", line 736, in generate
    output = self._generate_beam_search(
  File "/mnt/batch/tasks/shared/LS_root/mounts/clusters/pbodigut1/code/Users/pbodigut/transformers/v-4.10/transformers/src/transformers/generation_tf_utils.py", line 1102, in _generate_beam_search
    model_inputs = self.prepare_inputs_for_generation(
TypeError: prepare_inputs_for_generation() got multiple values for argument 'decoder_input_ids'
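For readers unfamiliar with this Python error class: "got multiple values for argument" is raised whenever a parameter is bound both positionally and by keyword in the same call, which is consistent with the beam-search loop passing decoder_input_ids positionally while the expanded batch kwargs supply it again by name. A minimal, library-free sketch of the mechanism (the function below only mimics the signature from the traceback):

```python
# Mimic of the signature seen in the traceback; not the transformers code.
def prepare_inputs_for_generation(decoder_input_ids, **kwargs):
    return decoder_input_ids

try:
    # The first argument binds decoder_input_ids positionally, and the
    # expanded kwargs supply it again by name -> TypeError.
    batch = {"decoder_input_ids": [1, 2, 3]}
    prepare_inputs_for_generation([0], **batch)
except TypeError as err:
    print(err)  # ... got multiple values for argument 'decoder_input_ids'
```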

Expected behavior

Successfully run the evaluation step.
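Since the collision appears to come from decoder_input_ids being present in the evaluation batch while the generation loop also supplies it internally, one generic workaround is to drop the duplicated key before expanding the batch. This is only a sketch under that assumption, not the upstream fix; call_generate and the toy generate below are hypothetical names:

```python
def generate(input_ids=None, decoder_input_ids=None, **kwargs):
    """Toy stand-in for model.generate(); the real method derives
    decoder_input_ids itself during beam search."""
    return {"input_ids": input_ids, "decoder_input_ids": decoder_input_ids}

def call_generate(generate_fn, batch):
    """Drop keys the generation loop supplies internally so that
    generate_fn(**batch) cannot pass them twice (hypothetical sketch)."""
    filtered = {k: v for k, v in batch.items() if k != "decoder_input_ids"}
    return generate_fn(**filtered)

batch = {"input_ids": [[5, 6]], "decoder_input_ids": [[0]]}
out = call_generate(generate, batch)
print(out)  # the decoder ids were stripped before the call
```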

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Reactions: 2
  • Comments: 9 (3 by maintainers)

Top GitHub Comments

patrickvonplaten commented, Oct 22, 2021 (1 reaction)

@Rocketknight1 - could you take a look? 😃

github-actions[bot] commented, Dec 31, 2021 (0 reactions)

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.
