Question on masking
Hi @Kyubyong,
Can you explain a bit about the following masking code (the Key Masking and the Query Masking) in modules.py? Why do we need them? We only need the causality mask, right?
```python
# Key Masking: keys whose embeddings sum to zero are padding; push their
# logits to a large negative value so softmax gives them ~zero weight.
key_masks = tf.sign(tf.abs(tf.reduce_sum(keys, axis=-1)))  # (N, T_k)
key_masks = tf.tile(key_masks, [num_heads, 1])  # (h*N, T_k)
key_masks = tf.tile(tf.expand_dims(key_masks, 1), [1, tf.shape(queries)[1], 1])  # (h*N, T_q, T_k)
paddings = tf.ones_like(outputs) * (-2 ** 32 + 1)
outputs = tf.where(tf.equal(key_masks, 0), paddings, outputs)  # (h*N, T_q, T_k)

# Query Masking: zero out the output rows that belong to padded queries.
query_masks = tf.sign(tf.abs(tf.reduce_sum(queries, axis=-1)))  # (N, T_q)
query_masks = tf.tile(query_masks, [num_heads, 1])  # (h*N, T_q)
query_masks = tf.tile(tf.expand_dims(query_masks, -1), [1, 1, tf.shape(keys)[1]])  # (h*N, T_q, T_k)
outputs *= query_masks  # broadcasting. (h*N, T_q, T_k)
```
Thanks!
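For intuition, here is a minimal NumPy sketch of what the two masks above accomplish. This is toy code, not the repo's: a single head, no scaling, and it assumes padded tokens embed to all-zero vectors (which is what makes `reduce_sum(..., axis=-1)` detect them):

```python
import numpy as np

def toy_masking(queries, keys, logits):
    """queries: (N, T_q, C), keys: (N, T_k, C), logits: (N, T_q, T_k)."""
    # Key masking: padded keys get a huge negative logit so that softmax
    # assigns them (numerically) zero attention weight.
    key_pad = np.sign(np.abs(keys.sum(-1)))          # (N, T_k): 1 = real, 0 = pad
    logits = np.where(key_pad[:, None, :] == 0, -2.0 ** 32 + 1, logits)

    # Softmax over the key axis.
    weights = np.exp(logits - logits.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)

    # Query masking: rows belonging to padded queries are zeroed out so
    # their (meaningless) outputs cannot leak into later layers.
    query_pad = np.sign(np.abs(queries.sum(-1)))     # (N, T_q)
    return weights * query_pad[:, :, None]

# One sentence, 3 positions; the last position is padding (all-zero embedding).
q = np.array([[[1., 2.], [3., 4.], [0., 0.]]])
k = q.copy()
att = toy_masking(q, k, q @ k.transpose(0, 2, 1))
print(att)  # last column ~0 (key mask), last row exactly 0 (query mask)
```

So the key mask keeps padded keys from receiving attention weight, and the query mask keeps padded queries from emitting outputs. The causality mask handles neither case; it only hides future positions.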
@haoransh When you pass inputs as an argument to the function positional_encoding, yes, the inputs contain the padding info. However, inside positional_encoding this code only extracts the shape of inputs, without the padding info. That means the zero embedding vector in the lookup table of positional_encoding is NOT the same thing as the one in the lookup table of the word embedding.
This results in the position-encoded padding vector being non-zero. Let's take an example to make it clear. If T = maxlen = 6 and the input sentence is 'This mask simply fail', we get:

x = [[index_this, index_mask, index_simply, index_fail, 3, 0]], shape (1, T), where 3 represents '</S>', 0 represents '<PAD>', and the '<PAD>' sits at the 6th position.

Word embedding: x_embedding = [[ [not-all-zeros], [not-all-zeros], …, [0, 0, …, 0] ]], shape (1, T, len(word embedding vector)).

Positional embedding: x_position = [[0, 1, 2, 3, 4, 5]]; if zero_pad = True, then x_position_embedding = [[ [0, 0, …, 0], [not-all-zeros], …, [not-all-zeros] ]], shape (1, T, len(positional embedding vector)). Note that only position index 0 is zeroed, not the 6th position where the '<PAD>' token sits.

Now let's add the embeddings (both embedding vectors have the same length): x_embedding + x_position_embedding = [[ [not-all-zeros], [not-all-zeros], …, [not-all-zeros] ]].

So the mask simply no longer fulfills its original purpose of finding the paddings.
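To make this concrete, here is a minimal NumPy sketch of the failure mode (the table sizes and token ids are made up; this is not the repo's positional_encoding, just the same lookup-and-zero-pad idea):

```python
import numpy as np

T, C = 6, 4                                      # maxlen and embedding size
rng = np.random.default_rng(0)

# Word-embedding lookup table: row 0 (the <PAD> id) is zeroed, so the
# padded 6th position embeds to an all-zero vector.
x = np.array([5, 9, 2, 7, 3, 0])                 # token ids; 0 = <PAD>
word_table = rng.normal(size=(10, C))
word_table[0] = 0.0
x_emb = word_table[x]                            # (T, C); last row is all zeros

# Positional lookup table: zero_pad=True zeroes *position index 0* only,
# not the position where <PAD> happens to sit.
pos_table = rng.normal(size=(T, C))
pos_table[0] = 0.0
pos_emb = pos_table[np.arange(T)]                # (T, C); last row is NOT zero

summed = x_emb + pos_emb
print(np.abs(summed).sum(-1))                    # every entry non-zero, so
# key_masks = sign(abs(reduce_sum(...))) can no longer find the padding
```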
@gitfourteen @jiangxinyang227 Yes, this issue has been reported before: https://github.com/Kyubyong/transformer/issues/33
So this repo can only serve as a toy example; it is not the same as the original implementation in tensor2tensor. If you are interested, you can also refer to another TensorFlow implementation here, which matches the original implementation but is much easier to follow than tensor2tensor.
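For reference, one common workaround (a sketch assuming padding uses token id 0; this is not tensor2tensor's code) is to derive the padding mask from the token ids themselves, before any positional encoding is added, instead of from the summed embeddings:

```python
import tensorflow as tf  # TF 1.x, to match the snippets above

def padding_mask_from_ids(x, pad_id=0):
    """x: (N, T) int token ids. Returns (N, T) float mask: 1 = real, 0 = <PAD>.

    Computed from the ids themselves, so adding positional encodings to the
    embeddings later cannot corrupt the padding information.
    """
    return tf.to_float(tf.not_equal(x, pad_id))
```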