---
license: apache-2.0
datasets:
- HuggingFaceFW/fineweb-2
language:
- fo
base_model:
- HuggingFaceTB/SmolLM2-135M-Instruct
library_name: peft
pipeline_tag: text-generation
---
This is a SmolLM2-135M-Instruct model fine-tuned with LoRA on the Faroese portion of FineWeb-2. It was trained for my own research and has not yet been evaluated more broadly.
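A minimal usage sketch for loading the adapter on top of the base model (the adapter repo id below is a placeholder, not the actual path):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "HuggingFaceTB/SmolLM2-135M-Instruct"
adapter_id = "<user>/<this-adapter-repo>"  # placeholder: replace with this repo's id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter

inputs = tokenizer("Hvussu gongst?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```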
LoRA setup (a configuration sketch follows the list):
- Rank: 256
- Alpha: 512
- Target modules: ["up_proj", "down_proj", "gate_proj", "o_proj"]
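The setup above corresponds roughly to the following PEFT configuration; dropout and task type are assumptions, as they are not stated in this card:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=256,
    lora_alpha=512,
    target_modules=["up_proj", "down_proj", "gate_proj", "o_proj"],
    lora_dropout=0.0,       # assumed; not reported here
    task_type="CAUSAL_LM",  # assumed for text generation
)
```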
Training (a hyperparameter sketch follows the list):
- Epochs: 1
- Learning rate: 8e-4
- LR scheduler: Cosine
- Warmup ratio: 0.05
- Batch size: 1
- 4 A100 (40GB) GPUs
- Gradient accumulation steps: 64
- Effective batch size: 256
- Max. context length: 8192 tokens
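For reference, these hyperparameters map roughly onto the following Hugging Face `TrainingArguments`; the output path and precision are assumptions, and the 8192-token context length is applied at tokenization time rather than here:

```python
from transformers import TrainingArguments

# Per-device batch size 1 × 4 GPUs × 64 accumulation steps = effective batch size 256.
training_args = TrainingArguments(
    output_dir="smollm2-135m-fo-lora",  # hypothetical path
    num_train_epochs=1,
    learning_rate=8e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=64,
    bf16=True,                          # assumed precision on A100s
)
```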