from transformers import TrainingArguments

# REPLACE THIS WITH YOUR REPO ID
repo_id = "MariaK/layoutlmv2-base-uncased_finetuned_docvqa"

training_args = TrainingArguments(
    output_dir=repo_id,  # checkpoints are saved here and pushed to this Hub repo
    per_device_train_batch_size=4,
    num_train_epochs=20,
    save_steps=200,
    logging_steps=50,
    evaluation_strategy="steps",
    learning_rate=5e-5,
    save_total_limit=2,
    remove_unused_columns=False,  # keep the extra columns produced during preprocessing
    push_to_hub=True,  # upload checkpoints to the Hugging Face Hub
)
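Because push_to_hub=True is set, the Trainer will try to upload checkpoints to the Hub, which requires an authenticated session. A minimal sketch, assuming you have a Hugging Face access token and are running in a notebook (otherwise huggingface-cli login on the command line works the same way):

from huggingface_hub import notebook_login

# Prompts for an access token so the Trainer can push checkpoints to the Hub.
# Only needed if you are not already logged in on this machine.
notebook_login()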

Define a simple data collator to batch examples together.
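A minimal sketch, assuming the examples were already padded to a fixed length during preprocessing so the default collation behavior from transformers is enough:

from transformers import DefaultDataCollator

# Stacks the preprocessed, fixed-length examples into batched tensors.
data_collator = DefaultDataCollator()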