Diet Advisor QLoRA

This is a QLoRA (4-bit quantized LoRA) adapter fine-tuned for personalized dietary advice and meal planning conversations.

Model Details

  • Base Model: unsloth/Qwen3-8B-unsloth-bnb-4bit
  • Training Method: QLoRA with Unsloth optimization
  • Dataset: Custom diet advice dataset (1,200 examples)
  • Training Split: 90% training (1,080 examples), 10% validation (120 examples)
  • Training Steps: 100
  • LoRA Rank: 32
  • Target Modules: All linear layers (q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj)
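
The rank and target-module settings above map directly onto Unsloth's PEFT helper. The snippet below is a minimal sketch of an equivalent adapter configuration; the alpha and dropout values are taken from the Model Architecture section, while use_gradient_checkpointing follows the usual Unsloth recipe and is an assumption rather than the exact training script.

from unsloth import FastLanguageModel

# Load the 4-bit base model (same call as in the Usage section)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen3-8B-unsloth-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach a rank-32 LoRA adapter to every attention and MLP projection layer
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    use_gradient_checkpointing="unsloth",  # assumption: standard Unsloth setting
)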

Performance

  • Final Training Loss: 0.3635
  • Final Evaluation Loss: 0.076
  • Training Time: ~4 minutes on A100
  • GPU Memory Usage: ~5.7 GB
  • Samples per Second: 3.57
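
These figures come from the training run itself. With the trainer sketched in the Training Configuration section, comparable numbers can be read back as follows; treat it as a sketch, since trainer is only defined in that later section.

import torch

torch.cuda.reset_peak_memory_stats()
result = trainer.train()

print(f"final training loss: {result.metrics['train_loss']:.4f}")
print(f"samples per second:  {result.metrics['train_samples_per_second']:.2f}")
print(f"peak GPU memory:     {torch.cuda.max_memory_reserved() / 1024**3:.1f} GB")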

Usage

from unsloth import FastLanguageModel
from peft import PeftModel

# Load base model
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen3-8B-unsloth-bnb-4bit",
    max_seq_length=2048,
    dtype=None,
    load_in_4bit=True,
)

# Load adapter
model = PeftModel.from_pretrained(model, "kaushik2202/diet-advisor-qwen-qlora")

# Enable inference mode
FastLanguageModel.for_inference(model)

# Use for diet advice
prompt = """Human: I'm a 30-year-old female seeking dietary advice. I'm interested in Mediterranean cuisine.

**My Health Profile:**
• Weight: 65kg, Height: 165cm
• Activity Level: Moderate exercise
• Health Goals: Weight maintenance
• Dietary Restrictions: None

Can you suggest a Mediterranean meal plan?"""

# Format with the Qwen3 chat markup
formatted_prompt = f"<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"

inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=300, temperature=0.7, do_sample=True)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
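
If you prefer not to build the chat markup by hand, the tokenizer's built-in chat template produces the same <|im_start|> formatting. A short sketch, assuming the base tokenizer ships the standard Qwen3 chat template:

messages = [{"role": "user", "content": prompt}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,   # appends the assistant turn header
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=300, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
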

Expected Output Format

The model provides structured dietary analysis with:

  • Age and gender-specific recommendations
  • Professional nutrition formatting
  • Personalized meal planning
  • Health goal considerations
  • Clear dietary guidelines

Example response format:

Assistant: I'll create a personalized Mediterranean meal plan based on your health profile.

## ๐Ÿฝ๏ธ Mediterranean Recommendations for Your Health

**Breakfast:** Greek yogurt with berries and nuts
**Lunch:** Mediterranean salad with grilled chicken
**Dinner:** Baked fish with roasted vegetables

## 📋 Age-Specific Tips (30 years old)

• Focus on nutrient-dense foods for sustained energy
• Include calcium-rich foods for bone health
• Balance convenience with nutrition quality

**Remember:** These recommendations are tailored to your profile. Consult with a registered dietitian for detailed meal planning.

Training Details

  • Dataset Size: 1,200 diet consultation examples
  • Training Examples: 1,080 (90%)
  • Validation Examples: 120 (10%)
  • Loss Convergence: 3.15 → 0.36 (excellent convergence)
  • Evaluation Performance: 0.076 eval loss (strong generalization)
  • Memory Efficiency: 1.05% trainable parameters
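
The 90/10 split is straightforward to reproduce with the datasets library. The consultation dataset itself is not published, so the file name and format below are placeholders:

from datasets import load_dataset

# Hypothetical local file; the actual 1,200-example dataset is not released
dataset = load_dataset("json", data_files="diet_consultations.jsonl", split="train")

split = dataset.train_test_split(test_size=0.1, seed=42)  # 1,080 train / 120 validation
train_dataset, eval_dataset = split["train"], split["test"]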

Model Architecture

  • Trainable Parameters: 80,740,352
  • Total Parameters: 7,696,356,864
  • Training Efficiency: 1.05% of model parameters trained
  • Quantization: 4-bit with BitsAndBytes
  • LoRA Configuration: Rank 32, Alpha 32, Dropout 0.05
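
The trainable fraction follows directly from the parameter counts above, and PEFT can report the same summary on the training-time model (the one produced by get_peft_model in the Model Details sketch):

# Arithmetic from the parameter counts listed above
trainable, total = 80_740_352, 7_696_356_864
print(f"trainable fraction: {100 * trainable / total:.2f}%")  # ~1.05%

# PEFT's built-in summary on the LoRA-wrapped training model
model.print_trainable_parameters()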

License

This model inherits the Apache 2.0 license from the Qwen3 base model. Use responsibly for educational and research purposes.

โš ๏ธ Disclaimer: This model is for educational purposes only. Always consult qualified healthcare professionals and registered dietitians for medical advice and personalized nutrition planning.

Citation

If you use this model, please cite:

@misc{diet-advisor-qwen-qlora,
  author = {kaushik2202},
  title = {Diet Advisor QLoRA - Personalized Nutrition Assistant},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/kaushik2202/diet-advisor-qwen-qlora}
}

Training Configuration

  • Base Model: Qwen3-8B (4-bit quantized, unsloth/Qwen3-8B-unsloth-bnb-4bit)
  • Framework: Unsloth + Transformers + PEFT
  • Optimizer: AdamW 8-bit
  • Learning Rate: 2e-4 with linear scheduler
  • Batch Size: 2 (effective batch size: 8 with gradient accumulation)
  • Sequence Length: 2048 tokens
  • Hardware: NVIDIA A100-SXM4-40GB
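
Put together, a run matching the hyperparameters above could look like the sketch below. It follows the common Unsloth + TRL recipe rather than the exact training script; argument names differ slightly across TRL/Transformers versions, and the warmup, logging, and evaluation cadence are assumptions. train_dataset and eval_dataset are the splits from the Training Details section.

from trl import SFTTrainer
from transformers import TrainingArguments

trainer = SFTTrainer(
    model=model,                   # LoRA-wrapped model from the Model Details sketch
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    dataset_text_field="text",     # assumption: examples pre-rendered as chat-formatted text
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,   # effective batch size 8
        max_steps=100,
        learning_rate=2e-4,
        lr_scheduler_type="linear",
        optim="adamw_8bit",
        warmup_steps=5,                  # assumption
        logging_steps=10,                # assumption
        eval_strategy="steps",           # "evaluation_strategy" on older transformers
        eval_steps=25,                   # assumption
        bf16=True,                       # A100 supports bfloat16
        seed=42,
        output_dir="outputs",
    ),
)
trainer.train()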

Use Cases

  • Personalized meal planning
  • Dietary advice consultation
  • Nutrition education
  • Health-conscious recipe suggestions
  • Lifestyle-based food recommendations