You can speed up `map` by setting `batched=True` to process multiple elements of the dataset at once:
```py
tokenized_imdb = imdb.map(preprocess_function, batched=True)
```
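This step assumes the `imdb` dataset and `preprocess_function` were defined earlier in the guide. A minimal sketch of that setup (the checkpoint name here is only an illustrative choice) could look like:

```py
from datasets import load_dataset
from transformers import AutoTokenizer

# Load the IMDB reviews dataset and a tokenizer; the checkpoint is an example choice
imdb = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def preprocess_function(examples):
    # Tokenize the review text, truncating sequences to the model's maximum length
    return tokenizer(examples["text"], truncation=True)
```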
Now create a batch of examples using [`DataCollatorWithPadding`].
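A minimal sketch of that collator setup, reusing the `tokenizer` from the preprocessing step, could look like:

```py
from transformers import DataCollatorWithPadding

# Dynamically pad each batch to its longest sequence, rather than
# padding the entire dataset to the maximum length up front
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
```

Dynamic per-batch padding is usually faster and more memory-efficient than padding every example to a fixed maximum length during preprocessing.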