```
CUDA out of memory. Tried to allocate 256.00 MiB (GPU 0; 11.17 GiB total capacity; 9.70 GiB already allocated; 179.81 MiB free; 9.85 GiB reserved in total by PyTorch)
```
Here are some potential solutions you can try to lessen memory use:
- Reduce the `per_device_train_batch_size` value in [`TrainingArguments`] (see the sketch below).
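
For instance, here is a minimal sketch of lowering the per-device batch size when building the training arguments; the `output_dir` path and the specific batch size value are illustrative assumptions, not recommendations:

```py
from transformers import TrainingArguments

# A smaller per-device batch size lowers peak GPU memory use during training.
# Both values below are hypothetical examples; tune them for your hardware.
training_args = TrainingArguments(
    output_dir="my_model",          # hypothetical output directory
    per_device_train_batch_size=4,  # reduced from a larger default such as 8 or 16
)
```

Halving the batch size roughly halves the memory consumed by activations, which is often enough to avoid the out-of-memory error above, at the cost of noisier gradients per step.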