# Model Information
This repository contains a Llama-3.2-3B model finetuned on the GSM8K dataset for solving math word problems.
## Model Details
- Base Model: The model was finetuned from `unsloth/Llama-3.2-3B-bnb-4bit`.
- Finetuning Method: QLoRA (Quantized Low-Rank Adaptation) was used for efficient finetuning on top of a 4-bit quantized base model.
- Dataset: The model was finetuned on the `train` split of the openai/gsm8k dataset, which consists of math word problems with step-by-step solutions. Approximately 2,000 examples were used for finetuning.
- Output: The finetuned model is designed to generate detailed solutions to arithmetic and mathematical reasoning problems.
- Precision: The model is saved and available as a merged 16-bit precision model.
This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
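Since the model was finetuned on GSM8K-style step-by-step solutions, prompts should present a single word problem and ask for worked reasoning. The exact template used during finetuning is not documented in this card, so the helper below is an illustrative sketch, not the training format; `build_prompt` is a hypothetical name, and the `####` marker follows the GSM8K convention for final answers.

```python
def build_prompt(question: str) -> str:
    """Assemble a GSM8K-style prompt for a math word problem.

    Assumption: an instruction-plus-question layout ending in 'Answer:'
    so the model continues with its step-by-step solution. Adjust to
    match the template actually used during finetuning.
    """
    return (
        "Below is a math word problem. Solve it step by step, "
        "then give the final numeric answer after '####'.\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

# Example: format a problem before passing the string to the model's
# tokenizer / generation pipeline.
prompt = build_prompt("A baker sells 12 loaves a day for 5 days. How many loaves in total?")
print(prompt)
```

The resulting string can then be tokenized and passed to the merged 16-bit model with any standard text-generation pipeline.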