kenneth-doh committed · commit 8bdecf3 (verified) · 1 parent: 8d158c9

[bugfix] Initialize attention bias on the same device as Query/Key/Value


The attention bias in xformers is currently initialized on the default device rather than on the device of the Query/Key/Value tensors.
As a result, in a multi-GPU environment the following error occurs:
```log
Error: Attention bias and Query/Key/Value should be on the same device
query.device: cuda:6
attn_bias : cuda:0
```
This PR resolves the above error.
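For illustration, here is a minimal sketch of the mismatch and the fix. It assumes an xformers build whose `BlockDiagonalMask.from_seqlens` accepts a `device` argument (the keyword this patch relies on); the tensor shapes, sequence lengths, and the `cuda:1` device are placeholders, not values from the model.

```python
import torch
import xformers.ops as xops

# Placeholder setup: the model's Q/K/V live on a non-default GPU, as can
# happen in a multi-GPU run where this model is placed on, say, cuda:1.
device = torch.device("cuda:1")
query = torch.randn(1, 32, 8, 64, device=device, dtype=torch.float16)

lengths = [10, 12, 10]  # per-sequence token counts of the unpadded batch (sums to 32)

# Before the fix: no device is given, so the mask's tensors are created on
# the framework's default device, which may not match query.device here.
bias_old = xops.fmha.attn_bias.BlockDiagonalMask.from_seqlens(lengths)

# After the fix: the mask is created directly on the Q/K/V device, so the
# device check inside xformers passes.
bias_new = xops.fmha.attn_bias.BlockDiagonalMask.from_seqlens(lengths, device=query.device)
```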

Note: the same error occurred in vLLM and was resolved by the following PR:
https://github.com/vllm-project/vllm/pull/13468

Files changed (1): modeling.py (+1 -1)
modeling.py CHANGED
@@ -910,7 +910,7 @@ class NewModel(NewPreTrainedModel):
 
         batch_size, seq_length = input_shape
         if unpad_inputs and self.config.use_memory_efficient_attention:
-            attention_bias = xops.fmha.attn_bias.BlockDiagonalMask.from_seqlens(length)
+            attention_bias = xops.fmha.attn_bias.BlockDiagonalMask.from_seqlens(length, device=embedding_output.device)
         else:
             # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
             # ourselves in which case we just need to make it broadcastable to all heads.
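For context, a hedged end-to-end sketch of the unpadded path this hunk sits in: the block-diagonal mask, now built on the same device as Q/K/V, is consumed by xformers' memory-efficient attention. Tensor names, shapes, and the `cuda:1` device are placeholders, not the model's actual variables (the real code derives everything from `embedding_output`).

```python
import torch
import xformers.ops as xops

device = torch.device("cuda:1")   # placeholder: any non-default GPU
lengths = [10, 12, 10]            # per-sequence lengths of the unpadded batch
total = sum(lengths)              # 32 tokens packed into one row

# Placeholder Q/K/V in the [batch=1, tokens, heads, head_dim] layout that
# xformers expects for block-diagonal (unpadded) attention.
q = torch.randn(1, total, 8, 64, device=device, dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Build the mask on the same device as Q/K/V (the change in this PR) and run
# memory-efficient attention; a mismatched device would raise
# "Attention bias and Query/Key/Value should be on the same device".
attn_bias = xops.fmha.attn_bias.BlockDiagonalMask.from_seqlens(lengths, device=q.device)
out = xops.memory_efficient_attention(q, k, v, attn_bias=attn_bias)
```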