---
library_name: peft
tags:
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-chat-hf
model-index:
- name: '20240328_1004'
  results: []
---

# araft_trained_sft

This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the [Araft dataset](https://huggingface.co/datasets/FDeRubeis/araft).

## Model description

This model was created as part of the [Araft](https://github.com/FDeRubeis/Araft) project, which fine-tunes a Llama2-7B model to use the [ReAct](https://arxiv.org/abs/2210.03629) pattern for Wikipedia-augmented question answering.

This model is the product of the first training step: SFT. In this step, the trajectories from the [Araft dataset](https://huggingface.co/datasets/FDeRubeis/araft) were used to fine-tune the model, with each step serving as the target output for the preceding part of the trajectory.

The model achieves an F1 score of 16% on the [HotpotQA dataset](https://hotpotqa.github.io/).

For further information, please see the [Araft](https://github.com/FDeRubeis/Araft) GitHub repo.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP

### Framework versions

- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
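
As an illustration of how the hyperparameters above fit together, the snippet below sketches an equivalent SFT setup with TRL's `SFTTrainer`. It is a minimal sketch, not the original training script (which lives in the Araft GitHub repo): the LoRA rank/alpha values and the dataset text column name are assumptions.

```python
# Minimal sketch of an SFT setup matching the hyperparameters listed above.
# LoRA rank/alpha and the dataset text column are assumptions; the actual
# training script is in the Araft GitHub repository.
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

train_dataset = load_dataset("FDeRubeis/araft", split="train")

peft_config = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32)  # assumed values

args = TrainingArguments(
    output_dir="araft_trained_sft",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # total train batch size = 4
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=1,
    fp16=True,                       # Native AMP mixed precision
    seed=42,
    # default optimizer is AdamW with betas=(0.9, 0.999) and eps=1e-8
)

trainer = SFTTrainer(
    model="meta-llama/Llama-2-7b-chat-hf",
    args=args,
    train_dataset=train_dataset,
    dataset_text_field="text",       # assumed column name
    peft_config=peft_config,
)
trainer.train()
```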
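
To run inference, load the base Llama-2-7b-chat checkpoint and apply this model as a PEFT adapter. This is a minimal sketch: the adapter repo id and the ReAct-style prompt below are assumptions given for illustration; see the Araft GitHub repo for the exact prompt format used in the project.

```python
# Minimal sketch of loading the adapter for inference. The adapter repo id and
# the prompt format are assumptions; see the Araft GitHub repo for details.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-chat-hf"
adapter_id = "FDeRubeis/araft_trained_sft"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Example HotpotQA-style question; the "Thought:" cue mimics a ReAct trajectory step.
prompt = "Question: Which magazine was started first, Arthur's Magazine or First for Women?\nThought:"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```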