Model Card for Fitness-Assistance
This model card describes Fitness-Assistance, an instruction-tuned fitness chatbot fine-tuned from meta-llama/Llama-3.2-3B to provide fitness, nutrition, and general health guidance.
Model Details
Model Description
- Developed by: Adham
- Model type: Instruction-tuned chatbot (causal LM)
- Language(s) (NLP): English
- License: Llama 3.2 Community License
- Finetuned from model: meta-llama/Llama-3.2-3B
Uses
Direct Use
This model is intended to serve as an intelligent assistant within a fitness application. It helps users by providing personalized fitness advice, answering health-related questions, and recommending routines or meals.
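The snippet below is a minimal usage sketch. It assumes the fine-tuned weights and tokenizer are published as domamostafa/Fitness-Assistance on the Hugging Face Hub (the repository named in this card); the prompt wording and generation settings are illustrative only, not the card author's recommended configuration.

```python
# Minimal inference sketch, assuming the model and tokenizer are available
# under "domamostafa/Fitness-Assistance" (the repo named in this card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "domamostafa/Fitness-Assistance"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: any float dtype supported by your GPU works
    device_map="auto",
)

prompt = "Suggest a 20-minute beginner workout I can do at home without equipment."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```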
Out-of-Scope Use
This model is not suitable for medical diagnosis, mental health counseling, or advice on serious medical conditions, and should not be used in high-risk or safety-critical applications.
Training Details
Training Data
Approximately 19,000 instruction-response pairs about fitness, nutrition, and health. The data was cleaned, normalized, and tokenized to fit the Llama prompt format.
Preprocessing
Normalization, de-duplication, and token formatting.
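The exact prompt template used during preprocessing is not documented here. The sketch below shows one plausible way to normalize, de-duplicate, and format instruction-response pairs for a Llama-style causal LM; the field names (`instruction`, `response`) and the `### Instruction` / `### Response` layout are assumptions, not the confirmed format.

```python
# Hypothetical preprocessing sketch: whitespace normalization, de-duplication,
# and formatting into a single training text field. Field names and the
# prompt template are assumptions.
def format_example(example: dict) -> dict:
    instruction = " ".join(example["instruction"].split())  # collapse whitespace
    response = " ".join(example["response"].split())
    example["text"] = (
        "### Instruction:\n" + instruction + "\n\n### Response:\n" + response
    )
    return example

def deduplicate(examples: list[dict]) -> list[dict]:
    seen, unique = set(), []
    for ex in examples:
        key = (ex["instruction"].strip().lower(), ex["response"].strip().lower())
        if key not in seen:
            seen.add(key)
            unique.append(ex)
    return unique
```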
Training Hyperparameters
- Training regime: Parameter-efficient fine-tuning with LoRA (r=8, alpha=16, dropout=0.05), applied to the q_proj, k_proj, v_proj, and o_proj attention projections; see the configuration sketch below.
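For reference, the reported LoRA hyperparameters map directly onto a peft LoraConfig. The bias and task_type settings below are common defaults and are assumptions rather than documented choices.

```python
# LoRA configuration matching the hyperparameters reported above.
# Target-module names follow the Llama attention projections listed in this card.
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",          # assumption: typical default
    task_type="CAUSAL_LM",
)
```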
Speeds, Sizes, Times
- Quantization: 4-bit via bitsandbytes (see the loading sketch below)
- Optimizer: AdamW (β1=0.9, β2=0.999, ε=1e-8)
- Training time: ~441 seconds
- Total FLOPs: ~1.53 × 10^15 (1.53 quadrillion)
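The sketch below shows how the reported 4-bit bitsandbytes quantization and AdamW settings could be wired up. Only "4-bit" and the AdamW betas/epsilon are stated above; the nf4 quant type, compute dtype, and learning rate are assumptions.

```python
# Sketch: load the base model in 4-bit with bitsandbytes and set up AdamW.
# nf4 quant type, bfloat16 compute dtype, and the learning rate are assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",               # assumption
    bnb_4bit_compute_dtype=torch.bfloat16,   # assumption
)

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B",
    quantization_config=bnb_config,
    device_map="auto",
)

optimizer = torch.optim.AdamW(
    base_model.parameters(),
    lr=2e-4,                 # assumption: learning rate is not reported in this card
    betas=(0.9, 0.999),
    eps=1e-8,
)
```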
Evaluation
Testing Data, Factors & Metrics
Metrics
- train_loss: 1.5597
- mean_token_accuracy: 65.45% (over the first 100 steps)
Testing Data
The model has not yet been evaluated on held-out test data; only training metrics are reported above.
Summary
Training loss reached ~1.56 with ~65% mean token accuracy over the first 100 steps; a held-out evaluation is still pending.
Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: Not reported
- Hours used: <1 hour (~441 seconds of training)
- Compute Region: Not reported
- Carbon Emitted: Not yet estimated (use the MLCO2 calculator)

Tokenizer files: https://drive.google.com/drive/folders/1aZevH_EC7FsfCm4vQnYScTCiG_SCPcIC?usp=sharing