
About Channel-Wise Attention (CWA) in the code

See original GitHub issue

Great work! But I have a small question about CWA. In the original paper, I see M_i = diag(Mask_i), where diag places a vector on the principal diagonal of a diagonal matrix. But in the code below:

import random

foo = [1] * 2 + [0] * 1                 # per-group mask: keep 2 of every 3 channels
bar = []
for i in range(200):                    # one 3-channel group per class (200 classes)
    random.shuffle(foo)                 # randomise which channel in the group is dropped
    bar += foo
bar = [bar for i in range(nb_batch)]    # replicate the mask for every sample in the batch (nb_batch is defined elsewhere)

I think bar is not a diagonal matrix. Please point out my mistake if I have misunderstood the operation here. Thanks a lot.
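
For reference, a flat 0/1 mask and diag(Mask_i) would act identically once the mask is broadcast over the channel dimension; below is a toy check of that (my own sketch, not code from the repository).

import numpy as np

num_channels, spatial = 3, 4                       # toy sizes, chosen arbitrarily
F = np.arange(num_channels * spatial, dtype=float).reshape(num_channels, spatial)
mask = np.array([1.0, 0.0, 1.0])                   # one 0/1 entry per channel, like foo above

masked_via_diag = np.diag(mask) @ F                # the paper's M_i = diag(Mask_i) form
masked_via_vector = mask[:, None] * F              # the flat-vector form built in the code

assert np.allclose(masked_via_diag, masked_via_vector)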

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments: 5 (3 by maintainers)

Top GitHub Comments

1 reaction
dongliangchang commented, Aug 6, 2020

Yes. Please see Section III.A The Discriminality Component for details.

0 reactions
AND2797 commented, Aug 6, 2020

foo = [1] * 2 + [0] * 1
bar = []
for i in range(200):
    random.shuffle(foo)
    bar += foo
bar = [bar for i in range(nb_batch)]

I had a small question about the code: here foo = [1] * 2 + [0] * 1 is only good for 3 channels, correct? If we want to increase the number of channels (say to 5), then foo = [1] * 3 + [0] * 2 would be necessary, correct?
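
If it helps, here is a small sketch of how the mask construction generalises to an arbitrary number of channels per class; the function name and arguments are my own and are not identifiers from the repository.

import random

def build_channel_mask(channels_per_class, num_kept, num_classes, nb_batch):
    # Build one 0/1 mask with num_kept ones inside every group of
    # channels_per_class channels, then repeat it for each sample in the batch.
    mask = []
    for _ in range(num_classes):
        group = [1] * num_kept + [0] * (channels_per_class - num_kept)
        random.shuffle(group)          # randomly choose which channels in the group survive
        mask += group
    return [mask for _ in range(nb_batch)]

# 5 channels per class with 3 kept and 2 dropped, mirroring foo = [1] * 3 + [0] * 2
masks = build_channel_mask(channels_per_class=5, num_kept=3, num_classes=200, nb_batch=8)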

Read more comments on GitHub >

Top Results From Across the Web

Channel-wise Soft Attention Explained
Channel-wise Soft Attention is an attention mechanism in computer vision that assigns "soft" attention weights for each channel $c$. In soft channel-wise ...

(PDF) Bi-Modal Learning With Channel-Wise Attention for ...
In this paper, we propose a novel CNN-RNN-based model, the bi-modal multi-label learning (BMML) framework. Firstly, an improved channel-wise attention mechanism is ...

Mutual-Channel Loss for Fine-Grained Image Classification
A novel channel attention mechanism is introduced, whereby during training a fixed percentage of channels is randomly masked out, ...

Attacks on state-of-the-art face recognition using attentional ...
So A³GN pays more attention to the exploration of feature representation for faces. ... CWA means channel-wise attention.

Triple attention learning for classification of 14 thoracic ...
Specifically, the channel-wise attention prompts the deep model to emphasize the discriminative channels of feature maps; the element-wise attention enables ...
