Load DistilBERT with [AutoModelForTokenClassification] along with the number of expected labels and the label mappings:
```py
from transformers import AutoModelForTokenClassification, TrainingArguments, Trainer

model = AutoModelForTokenClassification.from_pretrained(
    "distilbert/distilbert-base-uncased", num_labels=13, id2label=id2label, label2id=label2id
)
```
At this point, only three steps remain:
1. Define your training hyperparameters in [TrainingArguments] (a minimal sketch follows below).
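
As a rough illustration of this first step, the sketch below constructs a [TrainingArguments] object. The output directory name and the hyperparameter values are placeholder assumptions for illustration, not values prescribed by this guide:

```py
from transformers import TrainingArguments

# Hypothetical hyperparameters for illustration only; tune these for your dataset.
training_args = TrainingArguments(
    output_dir="token_classification_model",  # assumed directory for checkpoints
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=2,
    weight_decay=0.01,
)
```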