Trained by distillation of the pretrained BERT model, meaning it has been trained to predict the same output probabilities as the larger teacher model.
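
For intuition, here is a minimal sketch of what "predicting the same probabilities" looks like in practice, assuming a standard temperature-scaled KL distillation loss; the function name, temperature value, and shapes are illustrative and are not taken from the actual training code for this model.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    The student is trained to match the (frozen) teacher's probabilities;
    the temperature smooths both distributions before comparison.
    """
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

# Usage with random logits standing in for model outputs
# (batch of 8, vocabulary size 30522 as in BERT's tokenizer).
student_logits = torch.randn(8, 30522)
teacher_logits = torch.randn(8, 30522)
loss = distillation_loss(student_logits, teacher_logits)
```

In a full training loop this term is typically combined with the student's own task loss (e.g. masked-language-modeling cross-entropy), but the KL term above is what makes the student mimic the larger model's probabilities.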