MEGA proposes a new approach to self-attention: each encoder layer combines a multi-headed exponential moving average (EMA) with a single head of standard dot-product attention, giving the attention mechanism stronger positional biases.
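
To make the idea concrete, below is a minimal sketch, not the official MEGA implementation, of how a multi-dimensional damped EMA over the sequence can feed a single head of dot-product attention. Module and parameter names (`EmaAttentionSketch`, `ema_dim`, `alpha`, `delta`, `beta`, `gamma`) are illustrative assumptions, and the recurrent loop is written for clarity rather than speed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EmaAttentionSketch(nn.Module):
    """Illustrative sketch: damped multi-dimensional EMA + single-head attention."""

    def __init__(self, d_model: int, ema_dim: int = 4):
        super().__init__()
        # One damped EMA per model dimension, expanded over `ema_dim` heads.
        self.alpha = nn.Parameter(torch.rand(d_model, ema_dim))   # smoothing factors
        self.delta = nn.Parameter(torch.rand(d_model, ema_dim))   # damping factors
        self.beta = nn.Parameter(torch.randn(d_model, ema_dim))   # input expansion
        self.gamma = nn.Parameter(torch.randn(d_model, ema_dim))  # projection back
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)

    def ema(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) -> damped EMA along the sequence axis.
        alpha = torch.sigmoid(self.alpha)        # keep factors in (0, 1)
        delta = torch.sigmoid(self.delta)
        u = x.unsqueeze(-1) * self.beta          # (batch, seq_len, d_model, ema_dim)
        h = torch.zeros_like(u[:, 0])            # running EMA state
        out = []
        for t in range(x.size(1)):
            # h_t = alpha * u_t + (1 - alpha * delta) * h_{t-1}
            h = alpha * u[:, t] + (1.0 - alpha * delta) * h
            out.append((h * self.gamma).sum(-1))  # project back to d_model
        return torch.stack(out, dim=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        smoothed = self.ema(x)                    # position-aware representation
        q, k = self.q_proj(smoothed), self.k_proj(smoothed)
        v = self.v_proj(x)
        # Single head of standard scaled dot-product attention.
        attn = F.softmax(q @ k.transpose(-2, -1) / q.size(-1) ** 0.5, dim=-1)
        return attn @ v
```

Because the EMA output already carries strong local, position-dependent information, the single attention head can be kept simple; the paper additionally computes the EMA efficiently as a convolution rather than with an explicit recurrence.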