---
license: apache-2.0
base_model: mosaicml/mpt-7b
tags:
- generated_from_trainer
datasets:
- mc4
model-index:
- name: laft_bur_mpt
  results: []
---

# Paper and Citation

**Paper**: [Prompt, Translate, Fine-Tune, Re-Initialize, or Instruction-Tune? Adapting LLMs for In-Context Learning in Low-Resource Languages](https://arxiv.org/abs/2506.19187)

```bibtex
@misc{toukmaji2025prompttranslatefinetunereinitialize,
      title={Prompt, Translate, Fine-Tune, Re-Initialize, or Instruction-Tune? Adapting LLMs for In-Context Learning in Low-Resource Languages},
      author={Christopher Toukmaji and Jeffrey Flanigan},
      year={2025},
      eprint={2506.19187},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2506.19187},
}
```

# laft_bur_mpt

This model is a fine-tuned version of [mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b) on the Burmese (`my`) split of the mC4 dataset. It achieves the following results on the evaluation set:

- Loss: 0.7895
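
The checkpoint can be loaded with the Hugging Face Transformers API like any other causal LM. The snippet below is a minimal sketch: `model_id` is a placeholder for this repository's hub id or a local path, and `trust_remote_code=True` may be needed depending on whether the saved config still points to MPT's custom modeling code.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: replace with this repository's hub id or a local checkpoint path.
model_id = "laft_bur_mpt"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# trust_remote_code=True is required if the config references MPT's custom code.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Short Burmese prompt ("hello") to sanity-check generation.
inputs = tokenizer("မင်္ဂလာပါ", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```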

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):

- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2000
- num_epochs: 6.0
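
These settings map naturally onto a Hugging Face `TrainingArguments` object. The sketch below is an assumed reconstruction, not the actual training script; in particular, `output_dir`, mixed precision, and distributed-launch settings are guesses.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the reported hyperparameters.
training_args = TrainingArguments(
    output_dir="laft_bur_mpt",   # assumption
    learning_rate=3e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=2000,
    num_train_epochs=6.0,
    adam_beta1=0.9,
    adam_beta2=0.95,
    adam_epsilon=1e-5,
    bf16=True,                   # assumption: precision not stated in the card
)
```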

### Training results

| Training Loss | Epoch | Step   | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.7578        | 1.0   | 24415  | 0.9516          |
| 0.5781        | 2.0   | 48830  | 0.8928          |
| 0.6641        | 3.0   | 73245  | 0.8362          |
| 0.4883        | 4.0   | 97660  | 0.7911          |
| 0.6133        | 5.0   | 122075 | 0.7633          |
| 0.459         | 6.0   | 146490 | 0.7895          |
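
For intuition, the final validation loss can be converted to perplexity, assuming it is the mean per-token cross-entropy in nats:

```python
import math

# Perplexity corresponding to the final validation loss of 0.7895.
print(math.exp(0.7895))  # ≈ 2.20
```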

### Framework versions

- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1