Should both attention_mask and global_attention_mask be used for classification?
See original GitHub issue

Hi,
Again, a conceptual question on text classification. Since global attention is used on `<s>` only, I am slightly confused about whether I should pass just `global_attention_mask` to the model, or both `attention_mask` and `global_attention_mask`. I follow that `attention_mask` is mainly used to mask the `<pad>` tokens, but does that imply n² complexity for the local attention?
Thanks!
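For reference, a minimal sketch of what passing both masks looks like with the Hugging Face Longformer classifier. The checkpoint name, sequence length, and input text are placeholders, not taken from this issue:

```python
import torch
from transformers import LongformerTokenizer, LongformerForSequenceClassification

# Placeholder checkpoint; any Longformer sequence-classification checkpoint works the same way.
tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerForSequenceClassification.from_pretrained("allenai/longformer-base-4096")

enc = tokenizer(
    "A long document to classify ...",
    return_tensors="pt",
    padding="max_length",
    max_length=4096,
    truncation=True,
)

# attention_mask: 1 for real tokens, 0 for <pad> (some attention vs. no attention).
attention_mask = enc["attention_mask"]

# global_attention_mask: 1 only on the <s> / CLS token at position 0,
# 0 everywhere else (global vs. local attention).
global_attention_mask = torch.zeros_like(attention_mask)
global_attention_mask[:, 0] = 1

outputs = model(
    input_ids=enc["input_ids"],
    attention_mask=attention_mask,
    global_attention_mask=global_attention_mask,
)
logits = outputs.logits
```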
Issue Analytics
- Created 3 years ago
- Comments: 11
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
As the docstring here says:
- `attention_mask`: some attention or no attention
- `global_attention_mask`: local attention or global attention

Check here for how we merge both masks into the {0, 1, 2} mask.
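A rough sketch of that merge, with illustrative tensors (the exact helper lives in the linked modeling code):

```python
import torch

# 0 marks <pad> tokens, 1 marks real tokens.
attention_mask = torch.tensor([[1, 1, 1, 1, 0, 0]])
# 1 marks the <s> token that gets global attention.
global_attention_mask = torch.tensor([[1, 0, 0, 0, 0, 0]])

# Roughly what the library does internally:
# 0 = no attention (<pad>), 1 = local attention, 2 = global attention.
merged = attention_mask * (global_attention_mask + 1)
print(merged)  # tensor([[2, 1, 1, 1, 0, 0]])
```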
@mihaidobri, you are right, sorry, reopened it.