However, previous work has not systematically evaluated how the efficacy of different pretraining language distributions varies across model scales.