For tasks such as classification, this is not a problem, but for tasks like masked language modeling or token classification, we need a hidden state with the same sequence length as the original input.
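As a quick illustration (a minimal sketch; the checkpoint name and example sentence are placeholders, any standard encoder works), the snippet below contrasts the per-token hidden states that token-level tasks need with the single pooled vector that suffices for sequence classification:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder checkpoint: any standard encoder model could be used here.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("The quick brown fox", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One hidden state per input token: shape (batch, seq_len, hidden_size).
# Masked language modeling and token classification need all of these,
# so the sequence length must match the original input.
print(outputs.last_hidden_state.shape)  # e.g. torch.Size([1, 6, 768])

# Sequence classification can work from a single pooled vector instead:
# shape (batch, hidden_size), so a shorter internal sequence is not a problem.
print(outputs.pooler_output.shape)      # e.g. torch.Size([1, 768])
```

The shape check makes the distinction concrete: a classification head only consumes the pooled `(batch, hidden_size)` vector, while a token-level head needs the full `(batch, seq_len, hidden_size)` tensor, one hidden state per input token.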