
🫐🥫 trained_adapter

Model Details

This is a LoRA adapter for the Moecule family of MoE (Mixture-of-Experts) models.

It is part of Moecule Ingredients, where all related expert models, LoRA adapters, and datasets can be found.

Additional Information

  • QLoRA 4-bit fine-tuning with Unsloth
  • Base Model: unsloth/llama-3-8b-Instruct
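
Below is a minimal usage sketch with 🤗 Transformers and PEFT. It assumes the adapter repo ID `davzoku/trained_expert_adapter` shown on this page, and the 4-bit quantization settings are illustrative choices meant to mirror a typical QLoRA setup, not the exact training configuration; Unsloth's `FastLanguageModel` can be used instead for faster loading.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "unsloth/llama-3-8b-Instruct"
adapter_id = "davzoku/trained_expert_adapter"  # assumed from this page's repo ID

# 4-bit quantization (illustrative NF4 settings in the spirit of QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the LoRA adapter on top of the quantized base model
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Explain what a Mixture-of-Experts model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```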

The Team

  • CHOCK Wan Kee
  • Farlin Deva Binusha DEVASUGIN MERLISUGITHA
  • GOH Bao Sheng
  • Jessica LEK Si Jia
  • Sinha KHUSHI
  • TENG Kok Wai (Walter)
