
ReformerForQuestionAnswering : int() argument must be a string, a bytes-like object or a number, not 'NoneType'


Environment info

  • transformers version:
  • Platform:
  • Python version: 3.7.10
  • PyTorch version (GPU?): 1.7
  • Using GPU in script?: Yes
  • Using distributed or parallel set-up in script?: No

Who can help

@patrickvonplaten

Information

Model I am using (Bert, XLNet …): Reformer

The problem arises when using:

  • my own modified scripts: performing a backward() after passing the query and text to the ReformerForQuestionAnswering model.

The task I am working on is:

  • an official GLUE/SQuAD task: a subset of SQuAD

To reproduce

Steps to reproduce the behavior:

Performing backward() on the loss throws an error.

Minimal code to reproduce the error.

from transformers import ReformerTokenizer, ReformerForQuestionAnswering
import torch

tokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment')
model = ReformerForQuestionAnswering.from_pretrained('google/reformer-crime-and-punishment')

# Encode a question/context pair and pick arbitrary answer span positions
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])

# Forward pass with labels so the model returns a loss, then backpropagate
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
loss.backward()

Error Traceback

create_graph)
    219                 retain_graph=retain_graph,
    220                 create_graph=create_graph)
--> 221         torch.autograd.backward(self, gradient, retain_graph, create_graph)
    222 
    223     def register_hook(self, hook):

/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
    130     Variable._execution_engine.run_backward(
    131         tensors, grad_tensors_, retain_graph, create_graph,
--> 132         allow_unreachable=True)  # allow_unreachable flag
    133 
    134 

/usr/local/lib/python3.7/dist-packages/torch/autograd/function.py in apply(self, *args)
     87     def apply(self, *args):
     88         # _forward_cls is defined by derived class
---> 89         return self._forward_cls.backward(self, *args)  # type: ignore
     90 
     91 

/usr/local/lib/python3.7/dist-packages/transformers/models/reformer/modeling_reformer.py in backward(***failed resolving arguments***)
   1673                 head_mask=head_mask[len(layers) - idx - 1],
   1674                 attention_mask=attention_mask,
-> 1675                 buckets=buckets,
   1676             )
   1677 

/usr/local/lib/python3.7/dist-packages/transformers/models/reformer/modeling_reformer.py in backward_pass(self, next_attn_output, hidden_states, grad_attn_output, grad_hidden_states, attention_mask, head_mask, buckets)
   1527 
   1528             # set seed to have correct dropout
-> 1529             torch.manual_seed(self.feed_forward_seed)
   1530             # g(Y_1)
   1531             res_hidden_states = self.feed_forward(next_attn_output)

/usr/local/lib/python3.7/dist-packages/torch/random.py in manual_seed(seed)
     30             `0xffff_ffff_ffff_ffff + seed`.
     31     """
---> 32     seed = int(seed)
     33     import torch.cuda
     34 

TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'

From debugging, I believe the error is caused by self.feed_forward_seed in the ReformerLayer class being None.
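A quick way to see this (a minimal sketch; the attribute path through model.reformer.encoder.layers comes from the internals of modeling_reformer, so treat it as an assumption rather than public API):

print(model.training)  # False: from_pretrained() leaves the model in eval mode
first_layer = model.reformer.encoder.layers[0]
# the seed is only recorded during a training-mode forward pass, so it is still None here
print(first_layer.feed_forward_seed)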

I have tried the same code with Longformer, and it works perfectly.

Expected behavior

loss.backward() running properly.

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments: 7 (7 by maintainers)

Top GitHub Comments

2 reactions
forest1988 commented, Mar 17, 2021

Excuse me for my frequent posting.

Instead of overwriting position_embeddings, inserting model.train() seems to work (though it surfaces another issue).

from transformers import ReformerTokenizer, ReformerForQuestionAnswering
from transformers.models.reformer.modeling_reformer import PositionEmbeddings
import torch

tokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment')
model = ReformerForQuestionAnswering.from_pretrained('google/reformer-crime-and-punishment')

# # change to position embeddings to prevent error
# model.reformer.embeddings.position_embeddings = PositionEmbeddings(model.config)

# switch to training mode so the backward pass has the dropout seeds it needs
model.train()

question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])

outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss

loss.backward()

A different error message is shown, but it seems it can be handled by padding the input.

~/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/transformers/models/reformer/modeling_reformer.py in forward(self, position_ids)
    154 
    155         if self.training is True:
--> 156             assert (
    157                 reduce(mul, self.axial_pos_shape) == sequence_length
    158             ), "If training, make sure that config.axial_pos_shape factors: {} multiply to sequence length. Got prod({}) != sequence_length: {}. You might want to consider padding your sequence length to {} or changing config.axial_pos_shape.".format(

AssertionError: If training, make sure that config.axial_pos_shape factors: (512, 1024) multiply to sequence length. Got prod((512, 1024)) != sequence_length: 28. You might want to consider padding your sequence length to 524288 or changing config.axial_pos_shape.

I’m now trying to pad the input, and it seems to work.

tokenizer.pad_token = tokenizer.eos_token

inputs = tokenizer(question, text, padding='max_length', truncation=True, max_length=524288, return_tensors='pt')
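As a sanity check on where the 524288 comes from: the assertion requires the padded sequence length to equal the product of the config's axial_pos_shape factors. A minimal sketch, mirroring the reduce(mul, ...) check from the traceback above:

from functools import reduce
from operator import mul

required_len = reduce(mul, model.config.axial_pos_shape)  # (512, 1024) for this checkpoint
print(required_len)  # 524288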

I apologize if this is not an appropriate solution.

1 reaction
patrickvonplaten commented, Mar 29, 2021

We could maybe add a better error message that fires when Reformer is not in training mode but .backward() is run. @forest1988 feel free to open a PR if you want 😃
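A sketch of the kind of guard meant here (not an actual patch; it would sit inside ReformerLayer.backward_pass, right before the torch.manual_seed call shown in the traceback, so it is written as a fragment of that method):

# raise a readable error instead of letting int(None) fail deep inside torch.random
if self.feed_forward_seed is None:
    raise RuntimeError(
        "Reformer's reversible backward pass needs the dropout seeds recorded during a "
        "forward pass in training mode. Call model.train() before the forward pass whose "
        "loss you want to backpropagate."
    )
torch.manual_seed(self.feed_forward_seed)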
