
einsum operation in Linear Attention Part

See original GitHub issue

Hi, thanks a lot for FLASH-pytorch, which helps a lot. I found some differences from the paper in the linear attention part: https://github.com/lucidrains/FLASH-pytorch/blob/main/flash_pytorch/flash_pytorch.py#L342-L343

lin_kv = einsum('b g n d, b g n e -> b d e', lin_k, v) / n
lin_out = einsum('b g n d, b d e -> b g n e', lin_q, lin_kv)

Here lin_kv is three-dimensional (b d e), while the code in the paper is:

lin_kv = tf.einsum('bhke,bgh->bgke', lin_kv, mask)
linear = tf.einsum('bgnk,bgke->bgne', lin_q, lin_kv)

Here lin_kv is four-dimensional (b g k e). It seems that the two ways are not equivalent.

Looking forward to your reply. Best,
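
For reference, here is a minimal sketch (toy shapes, PyTorch; not code from the repo or the paper) contrasting the two formulations. It assumes the paper's mask mixes per-group kv summaries, with an all-ones mask standing in for the non-causal case; under that assumption the four-dimensional lin_kv is just the three-dimensional one broadcast over groups, so the two outputs coincide, while a lower-triangular (causal) mask would make them differ.

import torch
from torch import einsum

b, g, n, d, e = 2, 4, 8, 16, 32          # batch, groups, tokens per group, key dim, value dim
lin_q = torch.randn(b, g, n, d)
lin_k = torch.randn(b, g, n, d)
v     = torch.randn(b, g, n, e)

# Repository form: the group axis is summed away, leaving one (d, e) summary per batch.
lin_kv_repo = einsum('b g n d, b g n e -> b d e', lin_k, v) / n
out_repo    = einsum('b g n d, b d e -> b g n e', lin_q, lin_kv_repo)

# Paper-style form: keep one (k, e) summary per group, then mix the groups with a mask.
lin_kv_group = einsum('b h n k, b h n e -> b h k e', lin_k, v) / n
mask = torch.ones(b, g, g)               # all-ones here; a torch.tril mask would be the causal case
lin_kv_paper = einsum('b h k e, b g h -> b g k e', lin_kv_group, mask)
out_paper    = einsum('b g n k, b g k e -> b g n e', lin_q, lin_kv_paper)

print(out_repo.shape, out_paper.shape)                 # both (b, g, n, e)
print(torch.allclose(out_repo, out_paper, atol=1e-4))  # True for the all-ones mask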

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 5 (2 by maintainers)

Top GitHub Comments

1 reaction
ShomyLiu commented, Jun 19, 2022

Hi, it is true that there is a reduction over all groups. However, in Code 8 (Pseudocode for FLASH) on the final page there is no reduction over groups, so maybe both are OK. (In my opinion, if there were a sum reduction over all groups, wouldn't the attention results be much larger than the quad_part?)
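
A rough numeric illustration of the magnitude point above (toy shapes, i.i.d. random inputs; not the repo's code): summing the per-group kv summaries over all g groups inflates the scale of the linear output, here by roughly sqrt(g) in standard deviation.

import torch
from torch import einsum

b, g, n, d, e = 2, 4, 8, 16, 32
lin_q = torch.randn(b, g, n, d)
lin_k = torch.randn(b, g, n, d)
v     = torch.randn(b, g, n, e)

per_group = einsum('b g n d, b g n e -> b g d e', lin_k, v) / n   # one kv summary per group
summed    = per_group.sum(dim=1)                                  # reduced over all groups

out_per_group = einsum('b g n d, b g d e -> b g n e', lin_q, per_group)
out_summed    = einsum('b g n d, b d e -> b g n e', lin_q, summed)

# For i.i.d. random inputs the group-summed output has about sqrt(g) times the std
# of the per-group version, so its magnitude grows relative to the quadratic part.
print(out_per_group.std().item(), out_summed.std().item())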

1 reaction
wangleiofficial commented, Jun 19, 2022

When I read this part of the expressions and formulas, it seems that the reduction should be over the group dimension.

[two image attachments showing the relevant formulas]

Read more comments on GitHub.

Top Results From Across the Web

Understanding einsum for Deep learning: implement a ...
Learn about the einsum notation and einops by coding a custom multi-head self-attention unit and a transformer block.
Einsum is All you Need - Einstein Summation in Deep Learning
Einsum notation is an elegant way to express all of these, as well as complex operations on tensors, using essentially a domain-specific ...
python - Understanding NumPy's einsum
So here, the indexing operation on A lines up the first axes of the two arrays so that the multiplication can be broadcast....
Fast Transformer Decoding: One Write-Head is All You Need
A neural attention function takes a single query-vector q and a set ... Our code samples use einsum notation, as defined in TensorFlow...
`einsum` is ~20X slower than manually multiplying and ...
Bug: A manual multiplication and summation (a * b).sum(dim = (-3, -2, -1)) is about 20X faster than the equivalent einsum.
