Potential bug in the tutorial

#1
by wkokng - opened

The code below seems to have a bug.

# concatenate all attention heads
att_output = att_output.contiguous().view(batch_size, seq_len, self.num_head*self.d_head) 

We should apply att_output.transpose(1, 2) to align the dimensions before the view.

Hi wkokng,
Thanks for your attention to detail!
The transpose operation isn't needed at this point because the dimensions are already correctly aligned from the earlier transpose(1,2) operations when setting up the Q, K, and V states:

        # [batch_size, self.num_head, seq_len, self.d_head]
        Q_state = Q_state.view(batch_size, seq_len, self.num_head, self.d_head).transpose(1,2)

        # in cross-attention, key/value sequence length might be different from query sequence length
        K_state = K_state.view(batch_size, kv_seq_len, self.num_head, self.d_head).transpose(1,2)
        V_state = V_state.view(batch_size, kv_seq_len, self.num_head, self.d_head).transpose(1,2)

        # Scale Q by 1/sqrt(d_k)
        Q_state = Q_state * self.scaler

This ensures the attention computation and subsequent reshaping work correctly.

Please correct me if I'm wrong:

att_output is of shape [batch_size, num_head, seq_len, d_head] after

att_output = torch.matmul(att_score, V_state)

Then, when applying the following, view would scramble the tensor because its actual dimensions do not align with the shape passed to view (note that seq_len is the third dim of att_output but the second dim in the view call):

att_output = att_output.contiguous().view(batch_size, seq_len, self.num_head*self.d_head)

We need to transpose att_output to [batch_size, seq_len, num_head, d_head] before the view to resolve the misalignment:

att_output = att_output.transpose(1, 2)
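
A quick check with made-up sizes illustrates this (a minimal sketch; only the shapes and element ordering matter here):

import torch

batch_size, num_head, seq_len, d_head = 2, 4, 5, 8

# shape after torch.matmul(att_score, V_state): [batch_size, num_head, seq_len, d_head]
att_output = torch.randn(batch_size, num_head, seq_len, d_head)

# both calls succeed because the total number of elements matches, but view() follows
# memory order: without the transpose each output row is filled with the next
# num_head*d_head values in memory (mixing positions and heads) rather than
# all heads for one position
flat_no_transpose = att_output.contiguous().view(batch_size, seq_len, num_head * d_head)
flat_transposed = att_output.transpose(1, 2).contiguous().view(batch_size, seq_len, num_head * d_head)

print(flat_no_transpose.shape, flat_transposed.shape)   # both torch.Size([2, 5, 32])
print(torch.equal(flat_no_transpose, flat_transposed))  # False: same shape, different element order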

Hi @wkokng ,

Thank you so much for your persistence in pointing this out! You were absolutely right about the dimension handling. I've created a test script to verify this and indeed, while both approaches yield tensors of the correct shape, the transpose is necessary to maintain proper head dimension grouping in the final output. Without the transpose, the head information gets incorrectly interleaved.
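
For reference, here is a minimal sketch of the kind of check such a test script can make (the sizes and the concat_heads helper below are illustrative, not the exact script):

import torch

batch_size, num_head, seq_len, d_head = 2, 4, 5, 8
att_output = torch.randn(batch_size, num_head, seq_len, d_head)  # stands in for matmul(att_score, V_state)

def concat_heads(x, transpose_first):
    # flatten the heads into the last dimension, optionally transposing first
    if transpose_first:
        x = x.transpose(1, 2)  # [batch_size, seq_len, num_head, d_head]
    return x.contiguous().view(batch_size, seq_len, num_head * d_head)

with_transpose = concat_heads(att_output, transpose_first=True)
without_transpose = concat_heads(att_output, transpose_first=False)

# with the transpose, chunk h of the last dimension is exactly head h's output
for h in range(num_head):
    assert torch.equal(with_transpose[..., h * d_head:(h + 1) * d_head], att_output[:, h])

# without the transpose, the same slicing does not recover the heads
print(torch.equal(without_transpose[..., :d_head], att_output[:, 0]))  # False: heads get interleaved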

I'm updating the code to:

# concatenate all attention heads
att_output = att_output.transpose(1, 2)
att_output = att_output.contiguous().view(batch_size, seq_len, self.num_head*self.d_head)

This ensures the head dimensions stay properly grouped when they're combined. Thank you for helping make this implementation more correct!
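
As a side note, reshape can fold the contiguous().view(...) pair into a single call, since reshape copies automatically when the transposed tensor is not contiguous; the line below is an equivalent form (same attribute names as the snippet above):

# equivalent one-liner: reshape() handles the copy when the input isn't contiguous
att_output = att_output.transpose(1, 2).reshape(batch_size, seq_len, self.num_head * self.d_head)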

bird-of-paradise changed discussion status to closed
