Finally, by scaling our model up to 20B parameters, we achieve SOTA performance on 50 well-established supervised NLP tasks, ranging from language generation (with both automated and human evaluation), language understanding, text classification, question answering, commonsense reasoning, and long-text reasoning to structured knowledge grounding and information retrieval.