To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
```py
from transformers import create_optimizer

batch_size = 16
num_train_epochs = 2
total_train_steps = (len(tokenized_swag["train"]) // batch_size) * num_train_epochs
optimizer, schedule = create_optimizer(init_lr=5e-5, num_warmup_steps=0, num_train_steps=total_train_steps)
```
Then you can load BERT with [TFAutoModelForMultipleChoice]:
```py
from transformers import TFAutoModelForMultipleChoice

model = TFAutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-uncased")
```
Convert your datasets to the tf.data.Dataset format with [~transformers.TFPreTrainedModel.prepare_tf_dataset]:
```py
# DataCollatorForMultipleChoice is the collator for this task; it dynamically
# pads and regroups the multiple choice candidates into batches
data_collator = DataCollatorForMultipleChoice(tokenizer=tokenizer)
tf_train_set = model.prepare_tf_dataset(
    tokenized_swag["train"],
    shuffle=True,
    batch_size=batch_size,
    collate_fn=data_collator,
)
tf_validation_set = model.prepare_tf_dataset(
    tokenized_swag["validation"],
    shuffle=False,
    batch_size=batch_size,
    collate_fn=data_collator,
)
```
Configure the model for training with Keras' compile method. All Transformers models have a default task-relevant loss function, so you don't need to specify one unless you want to.
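As a minimal sketch of those last steps (the fit call assumes the tf_train_set, tf_validation_set, and num_train_epochs defined above):

```py
# no loss argument needed; the model falls back to its built-in loss
model.compile(optimizer=optimizer)

# launch training on the tf.data.Dataset objects prepared above
model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=num_train_epochs)
```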