Model Card for domamostafa/Fitness-Assistance

This model card describes Fitness-Assistance, an instruction-tuned chatbot fine-tuned from meta-llama/Llama-3.2-3B to act as a fitness and nutrition assistant.

Model Details

Model Description

  • Developed by: Adham
  • Model type: Instruction-tuned chatbot (causal LM)
  • Language(s) (NLP): English
  • License: LLaMA 3.2 Community License
  • Finetuned from model: meta-llama/Llama-3.2-3B

Uses

Direct Use

This model is intended to serve as an intelligent assistant within a fitness application. It helps users by providing personalized fitness advice, answering health-related questions, and recommending routines or meals.
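
A minimal inference sketch is shown below. It assumes this repository hosts a LoRA (PEFT) adapter on top of meta-llama/Llama-3.2-3B, as described under Model Details; the sampling settings and example prompt are illustrative only.

```python
# Hedged usage sketch: assumes domamostafa/Fitness-Assistance is a PEFT adapter
# for meta-llama/Llama-3.2-3B; generation settings are illustrative, not prescribed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.2-3B"
adapter_id = "domamostafa/Fitness-Assistance"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Suggest a 20-minute beginner workout I can do at home."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```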

Out-of-Scope Use

This model is not suitable for medical diagnosis, mental health counseling, or advice on serious medical conditions, and it should not be used in high-risk applications.

Training Details

Training Data

The training data consists of 19,000 instruction-response pairs about fitness, nutrition, and health. The data was cleaned, normalized, and tokenized to fit the LLaMA prompt format.

Preprocessing

Normalization, de-duplication, and token formatting.
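
A minimal sketch of these steps follows. The exact prompt template used during training is not documented in this card, so the instruction/response layout and the maximum sequence length below are assumptions.

```python
# Preprocessing sketch: normalization, exact de-duplication, and prompt
# formatting. The "### Instruction / ### Response" layout and max_length
# are assumed, not taken from the training code.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B")

def normalize(text: str) -> str:
    # Collapse repeated whitespace and trim the ends.
    return " ".join(text.split())

def format_pair(instruction: str, response: str) -> str:
    return f"### Instruction:\n{instruction}\n\n### Response:\n{response}"

def preprocess(pairs):
    seen, examples = set(), []
    for instruction, response in pairs:
        text = format_pair(normalize(instruction), normalize(response))
        if text in seen:  # drop exact duplicates
            continue
        seen.add(text)
        examples.append(tokenizer(text, truncation=True, max_length=1024))
    return examples
```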

Training Hyperparameters

  • Training regime: Parameter-efficient fine-tuning with LoRA (r=8, alpha=16, dropout=0.05), applied to the q_proj, k_proj, v_proj, and o_proj projections; a configuration sketch is shown below.
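
The sketch below instantiates the LoRA settings listed above with the peft library; arguments not stated in this card (bias handling, task type) are assumed defaults.

```python
# LoRA configuration sketch using the hyperparameters stated above
# (r=8, alpha=16, dropout=0.05, attention projections). bias and
# task_type are assumptions.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B")
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```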

Speeds, Sizes, Times

Quantization: 4-bit using bitsandbytes

Optimizer: AdamW (β1=0.9, β2=0.999, ε=1e-8)

Training time: ~441 seconds (about 7.4 minutes)

Total FLOPs: ~1.53 × 10^15 (1.53 quadrillion)
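
A sketch of the corresponding load-time and optimizer configuration follows, assuming bitsandbytes 4-bit loading through transformers; the 4-bit quantization sub-options, batch size, and epoch count are not stated in this card and are assumptions.

```python
# Training-setup sketch: 4-bit bitsandbytes loading plus AdamW with
# beta1=0.9, beta2=0.999, eps=1e-8 as listed above. Quantization sub-options
# and the batch size / epoch count are assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # assumed; card only says "4-bit"
    bnb_4bit_compute_dtype=torch.float16,  # assumed
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B",
    quantization_config=bnb_config,
    device_map="auto",
)

training_args = TrainingArguments(
    output_dir="fitness-assistant-lora",
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    per_device_train_batch_size=4,  # assumed; not stated in the card
    num_train_epochs=1,             # assumed
)
```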

Evaluation

Testing Data, Factors & Metrics

Metrics

train_loss: 1.5597

mean_token_accuracy: 65.45% (averaged over the first 100 training steps)

Testing Data

No held-out evaluation has been run yet; the metrics above are training metrics only.

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
