To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
```py
from transformers import create_optimizer

batch_size = 16
num_epochs = 2
total_train_steps = (len(tokenized_squad["train"]) // batch_size) * num_epochs
optimizer, schedule = create_optimizer(
    init_lr=2e-5,
    num_warmup_steps=0,
    num_train_steps=total_train_steps,
)
```
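Note that [create_optimizer] returns both the optimizer and the learning rate schedule, but the schedule is already attached to the optimizer, so only the optimizer needs to be passed to `compile` later.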
Then you can load DistilBERT with [TFAutoModelForQuestionAnswering]:
```py
from transformers import TFAutoModelForQuestionAnswering

model = TFAutoModelForQuestionAnswering.from_pretrained("distilbert/distilbert-base-uncased")
```
Convert your datasets to the tf.data.Dataset format with [~transformers.TFPreTrainedModel.prepare_tf_dataset]:
```py
tf_train_set = model.prepare_tf_dataset(
    tokenized_squad["train"],
    shuffle=True,
    batch_size=16,
    collate_fn=data_collator,
)

tf_validation_set = model.prepare_tf_dataset(
    tokenized_squad["test"],
    shuffle=False,
    batch_size=16,
    collate_fn=data_collator,
)
```
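These calls assume a `data_collator` was created earlier in the tutorial. If you are starting from this point, a minimal sketch using the library's [DefaultDataCollator] would look like this:

```py
from transformers import DefaultDataCollator

# Return batches as TensorFlow tensors; the tokenized QA examples are already
# padded to a fixed length, so no dynamic padding is needed here.
data_collator = DefaultDataCollator(return_tensors="tf")
```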
Configure the model for training with compile:
```py
import tensorflow as tf

model.compile(optimizer=optimizer)
```
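Transformers models all have a default task-relevant loss that is computed internally, so you don't need to pass a `loss` argument to `compile` unless you want to override it.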
The last thing to set up before you start training is to provide a way to push your model to the Hub.
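One way to do this is with the library's [~transformers.PushToHubCallback]. As a minimal sketch, the `output_dir` name below is a placeholder and `tokenizer` is assumed to be the tokenizer used earlier in the tutorial:

```py
from transformers.keras_callbacks import PushToHubCallback

# "my_awesome_qa_model" is a placeholder repository name; the tokenizer is
# uploaded alongside the model so the repo is usable out of the box.
callback = PushToHubCallback(
    output_dir="my_awesome_qa_model",
    tokenizer=tokenizer,
)

# Passing the callback to fit() uploads the model to the Hub as training runs.
model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=num_epochs, callbacks=[callback])
```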