
Should both attention_mask and global_attention_mask be used for classification?

See original GitHub issue

Hi,

Again a conceptual question on text classification.

Since global attention is used on <s> only, I am slightly confused about whether I should pass only global_attention_mask to the model, or both attention_mask and global_attention_mask. I understand that attention_mask is mainly used to mask the <pad> tokens, but does that mean O(n²) complexity for local attention?

Thanks!
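For context, a minimal sketch of passing both masks to a Longformer classifier with the Hugging Face transformers API. The checkpoint name and input texts are placeholders, and this assumes a reasonably recent transformers version that returns a model output object with .logits:

```python
import torch
from transformers import LongformerTokenizer, LongformerForSequenceClassification

# Placeholder checkpoint; any Longformer sequence-classification checkpoint works the same way.
tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerForSequenceClassification.from_pretrained("allenai/longformer-base-4096")

# Tokenize a padded batch; attention_mask marks real tokens with 1 and <pad> tokens with 0.
inputs = tokenizer(["a long document ...", "a shorter one"], return_tensors="pt", padding=True)

# global_attention_mask marks the tokens that attend globally; here only <s> at position 0.
global_attention_mask = torch.zeros_like(inputs["attention_mask"])
global_attention_mask[:, 0] = 1

outputs = model(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],      # local attention vs. no attention (<pad>)
    global_attention_mask=global_attention_mask,  # global attention on <s>
)
logits = outputs.logits
```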

Issue Analytics

  • State: open
  • Created 3 years ago
  • Comments: 11

Top GitHub Comments

2 reactions
ibeltagy commented, Jun 25, 2020

As the docstring here says:

  • attention_mask: some attention or no attention
  • global_attention_mask: local attention or global attention

Check here for how we merge both masks into the {0, 1, 2} mask.
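In other words, a rough sketch of that merge (not the exact library code): padding stays 0, local-only tokens become 1, and globally attending tokens become 2.

```python
import torch

def merge_masks(attention_mask: torch.Tensor, global_attention_mask: torch.Tensor) -> torch.Tensor:
    # 0 = no attention (<pad>), 1 = local attention, 2 = global attention.
    return attention_mask * (global_attention_mask + 1)

# Example: 5 tokens, the last one is <pad>; <s> at position 0 gets global attention.
attention_mask = torch.tensor([[1, 1, 1, 1, 0]])
global_attention_mask = torch.tensor([[1, 0, 0, 0, 0]])
print(merge_masks(attention_mask, global_attention_mask))  # tensor([[2, 1, 1, 1, 0]])
```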

1 reaction
ibeltagy commented, Sep 24, 2020

@mihaidobri, you are right, sorry, reopened it.

Read more comments on GitHub >

Top Results From Across the Web

  • Longformer — transformers 3.0.2 documentation - Hugging Face: Longformer self attention employs self attention on both a “local” ... For example, for classification, the <s> token should be given global attention ...
  • Isn't attention mask for BERT model useless?: In the tutorial, it clearly states that an attention mask is needed to tell the model (BERT) which input ids need to be ...
  • Attention Mechanism, Transformers, BERT, and GPT - OSF: Abstract. This is a tutorial and survey paper on the attention mechanism, transformers, BERT, and GPT. We first explain attention mechanism, sequence ...
  • AttentionRNN: A Structured Spatial Attention Mechanism: Attention mechanisms differ on how much information they use to compute the attention mask. They can be global, that is use all ...
  • An Overview of the Attention Mechanisms in Computer Vision: learning and visual attention mechanisms concentrates on the use of mask. ... In neural networks, the weight of attention can be learned through ...
