These input embeddings are learnt positional encodings that the authors refer to as object queries, and, as in the encoder, they are added to the input of each attention layer.
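This can be sketched in PyTorch: the object queries are a learned `nn.Embedding`, and the decoder adds them to the queries and keys of every attention layer rather than only once at the input. The class and variable names below are illustrative, not the library's actual implementation.

```python
import torch
import torch.nn as nn

class DecoderLayerSketch(nn.Module):
    """Simplified sketch of one decoder self-attention sublayer."""
    def __init__(self, d_model=256, n_heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, tgt, query_pos):
        # The learned object queries (query_pos) are added to the queries
        # and keys at this layer -- and again at every subsequent layer.
        q = k = tgt + query_pos
        attn_out, _ = self.self_attn(q, k, value=tgt)
        return self.norm(tgt + attn_out)

num_queries, d_model = 100, 256
object_queries = nn.Embedding(num_queries, d_model)  # learned positional encodings
tgt = torch.zeros(1, num_queries, d_model)           # decoder input starts at zero
layer = DecoderLayerSketch(d_model)
out = layer(tgt, object_queries.weight.unsqueeze(0))
print(out.shape)  # one hidden state per object query
```

Because the decoder input itself starts as zeros, the object queries are what make the (otherwise identical) slots attend to different parts of the image.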