Our BERTweet, having the same architecture as BERT-base (Devlin et al., 2019), is trained using the RoBERTa pre-training procedure (Liu et al., 2019).
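
As an illustration, a minimal sketch of loading the model with the Hugging Face transformers library, assuming the publicly released vinai/bertweet-base checkpoint on the Hugging Face Hub (the checkpoint id and example Tweet are illustrative, not taken from this passage):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed checkpoint id for the released BERTweet base model.
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")
model = AutoModel.from_pretrained("vinai/bertweet-base")

# Encode a sample Tweet and extract contextual embeddings.
inputs = tokenizer("SC has first two presumptive cases of coronavirus",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Hidden size is 768, matching the BERT-base architecture.
print(outputs.last_hidden_state.shape)  # (batch, seq_len, 768)
```

Because BERTweet shares the BERT-base architecture, its outputs plug into the same downstream heads (classification, tagging) as any BERT-style encoder.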