
RuntimeError: Overflow when unpacking long while training the model

See original GitHub issue

Hi, I am training a model on a custom dataset for a QnA task. I am using transformers version 4.0.0 and pytorch version 1.7.1. With the following code, I get the error below.

trainer = Trainer(
    model=model,                         # the instantiated 🤗 Transformers model to be trained
    args=training_args,                  # training arguments, defined above
    train_dataset=train_dataset,         # training dataset
    eval_dataset=val_dataset,            # evaluation dataset (variable name assumed)
)
trainer.train()
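For context (this explanation is not from the original thread): `torch.tensor` stores Python integers as int64 by default, and handing it a value outside the signed 64-bit range raises exactly this error, which is why the traceback below ends inside `__getitem__` at the `torch.tensor(val[idx])` call. A minimal repro:

```python
import torch

INT64_MAX = 2**63 - 1              # largest value a default int64 tensor can hold

print(torch.tensor([INT64_MAX]))   # fine: fits in int64

try:
    torch.tensor([INT64_MAX + 1])  # one past the int64 range
except RuntimeError as e:
    print(e)                       # "Overflow when unpacking long"
```

So the likely culprit is an entry in `self.encodings` (a token id, offset, or start/end position) that exceeds the int64 range.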

Error is below:

RuntimeError                              Traceback (most recent call last)
<ipython-input-16-3435b262f1ae> in <module>
----> 1 trainer.train()

~/.local/lib/python3.7/site-packages/transformers/trainer.py in train(self, model_path, trial)
    727             self.control = self.callback_handler.on_epoch_begin(self.args, self.state, self.control)
    728 
--> 729             for step, inputs in enumerate(epoch_iterator):
    730 
    731                 # Skip past any already trained steps if resuming training

~/.local/lib/python3.7/site-packages/torch/utils/data/dataloader.py in __next__(self)
    433         if self._sampler_iter is None:
    434             self._reset()
--> 435         data = self._next_data()
    436         self._num_yielded += 1
    437         if self._dataset_kind == _DatasetKind.Iterable and \

~/.local/lib/python3.7/site-packages/torch/utils/data/dataloader.py in _next_data(self)
    473     def _next_data(self):
    474         index = self._next_index()  # may raise StopIteration
--> 475         data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
    476         if self._pin_memory:
    477             data = _utils.pin_memory.pin_memory(data)

~/.local/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)
     42     def fetch(self, possibly_batched_index):
     43         if self.auto_collation:
---> 44             data = [self.dataset[idx] for idx in possibly_batched_index]
     45         else:
     46             data = self.dataset[possibly_batched_index]

~/.local/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0)
     42     def fetch(self, possibly_batched_index):
     43         if self.auto_collation:
---> 44             data = [self.dataset[idx] for idx in possibly_batched_index]
     45         else:
     46             data = self.dataset[possibly_batched_index]

<ipython-input-7-80744e22dabe> in __getitem__(self, idx)
      6 
      7     def __getitem__(self, idx):
----> 8         return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
      9 
     10     def __len__(self):

<ipython-input-7-80744e22dabe> in <dictcomp>(.0)
      6 
      7     def __getitem__(self, idx):
----> 8         return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
      9 
     10     def __len__(self):

RuntimeError: Overflow when unpacking long
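One way to track down the offending entry is to scan the encodings for integers outside the int64 range before they ever reach `torch.tensor`. A minimal sketch (the helper name and the encodings layout are assumptions, not from the issue):

```python
INT64_MAX = 2**63 - 1
INT64_MIN = -2**63

def find_overflows(encodings):
    """Return (key, row_index, value) for every int outside the int64 range."""
    bad = []
    for key, rows in encodings.items():
        for i, row in enumerate(rows):
            cells = row if isinstance(row, (list, tuple)) else [row]
            for v in cells:
                if isinstance(v, int) and not (INT64_MIN <= v <= INT64_MAX):
                    bad.append((key, i, v))
    return bad

# Example: one out-of-range value hiding in "start_positions"
encodings = {
    "input_ids": [[101, 2054], [101, 2129]],
    "start_positions": [3, 2**63],   # second entry overflows int64
}
print(find_overflows(encodings))     # → [('start_positions', 1, 9223372036854775808)]
```

Running this over the dataset's `self.encodings` would point at the exact key and row to inspect, rather than failing mid-epoch inside the DataLoader.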

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 5 (1 by maintainers)

Top GitHub Comments

2 reactions
isha-mohan commented, Jul 6, 2021

Hi,

I am using transformers version 4.0.0 and pytorch version 1.6.0. I am getting the same error.

0 reactions
LysandreJik commented, Dec 22, 2021

In order to get help faster, please also include all that is asked in the issue template, with the model, dataset used, all software versions as prompted by the template. Thanks!


Top Results From Across the Web

RuntimeError: Overflow when unpacking long · Issue #364
I am training a GPT2 model using Pytorch run_clm_no_trainer.py. Error. Below error happen when model is saving checkpoints. But seem that it ...
Pytorch training QA error overflow when unpacking long
I am running this code on keras from torch.utils.data import DataLoader from transformers import AdamW from tqdm import tqdm # setup GPU/CPU ...
Overflow when unpacking long, during FX mode calibration ...
Hello, I am following FX mode post training static quantization ... RuntimeError: Overflow when unpacking long, during FX mode calibration.
Loading a Pytorch model? by Joe Bastulli - QuantConnect.com
agent.load_state_dict(torch.as_tensor(path)). I keep getting a runtime error when I try to load the path. RuntimeError : Overflow when unpacking long.
DeepStability - A Database of Numerical Methods for Deep ...
Index  Library  Commit hash                               Language  Type of commit
1      PyTorch  ac72881f3ff8c46c2a5cf8b09d02babf46bc4c85  CUDA      Fix
2      PyTorch  dfc7fa03e5d33f909b9d7853dd001086f5d782a0  Python    Fix
3      PyTorch  8e507ad00ebdfd0ae84bc03718e9c2cb74b8573b  yaml      Fix
