RoBERTa has the same architecture as BERT, but uses a byte-level BPE tokenizer (the same as GPT-2) and a different pretraining scheme.
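
A minimal sketch of what this means in practice, assuming the standard `roberta-base` checkpoint and the `AutoTokenizer`/`AutoModel` classes: the tokenizer that gets loaded is a byte-level BPE tokenizer like GPT-2's, while the model itself is a BERT-style Transformer encoder.

```python
from transformers import AutoTokenizer, AutoModel

# Byte-level BPE tokenizer (same scheme as GPT-2), loaded from the roberta-base checkpoint.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# The model shares BERT's Transformer encoder architecture; only the pretraining differs.
model = AutoModel.from_pretrained("roberta-base")

inputs = tokenizer("Hello world!", return_tensors="pt")
outputs = model(**inputs)

# Contextual embeddings: (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```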