By reallocating the input embedding parameters in the Transformer layers, we achieve dramatically better performance on standard natural language understanding tasks with the same number of parameters during fine-tuning.
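To make the idea concrete, here is a minimal PyTorch sketch of a decoupled input embedding: the embedding table is kept smaller than the Transformer hidden size and projected up, so the parameters saved on the (typically very large) vocabulary table can be reallocated to the Transformer layers themselves. The class name and sizes below are illustrative assumptions, not the model's actual implementation.

```python
import torch
import torch.nn as nn


class DecoupledInputEmbedding(nn.Module):
    """Sketch of a small input embedding projected up to the hidden size.

    Sizes are hypothetical; the point is that the vocab_size x embedding_size
    table is much smaller than a vocab_size x hidden_size one would be,
    freeing parameters for the Transformer layers.
    """

    def __init__(self, vocab_size=250_000, embedding_size=256, hidden_size=1024):
        super().__init__()
        # Small embedding table: vocab_size x embedding_size parameters
        self.word_embeddings = nn.Embedding(vocab_size, embedding_size)
        # Projection from the embedding size up to the Transformer hidden size
        self.embedding_projection = nn.Linear(embedding_size, hidden_size)

    def forward(self, input_ids: torch.LongTensor) -> torch.Tensor:
        # Look up the small embeddings, then project to hidden_size
        return self.embedding_projection(self.word_embeddings(input_ids))


# Usage example with dummy token ids
embeddings = DecoupledInputEmbedding()
hidden_states = embeddings(torch.tensor([[101, 2023, 102]]))
print(hidden_states.shape)  # torch.Size([1, 3, 1024])
```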