Trendyol LLM 7B Chat v4.1.0 is available in multiple GGUF quantization formats and is stored on Xet for fast, efficient access.

Original Model: https://huggingface.co/Trendyol/Trendyol-LLM-7B-chat-v4.1.0
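
To check which quantized files the repository currently provides, the GGUF entries can be listed with the huggingface_hub client. This is a minimal sketch assuming huggingface_hub is installed; it only enumerates filenames.

```python
# List the GGUF files available in this repository to see which
# quantization levels are provided (assumes `pip install huggingface_hub`).
from huggingface_hub import list_repo_files

repo_id = "merterbak/Trendyol-LLM-7B-chat-v4.1.0-GGUF"

gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]
for name in sorted(gguf_files):
    print(name)
```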

Trendyol LLM v4.1.0

Trendyol LLM v4.1.0 is a generative model based on the Trendyol LLM base v4.0 model (a continued-pretraining version of Qwen2.5 7B trained on 13 billion tokens). This is the repository for the chat model.

Keynotes:

  • Improved e-commerce knowledge
    • Description generation
    • Attribute extraction
    • Summarization
    • Fashion dialogues
    • Product tagging extraction
    • Category detection
    • Persona interpretation based on actions
    • RAG
    • etc.
  • Improved Turkish language knowledge
  • Function call support (partially complete; to be finished in upcoming iterations)
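
As a usage sketch for the chat model, one of the GGUF files can be downloaded and run locally with llama-cpp-python. The exact .gguf filename below is an assumption (pick one of the quantizations in this repository), and the prompt simply exercises the description-generation use case.

```python
# Download one quantized file and run a chat completion locally.
# Assumes `pip install huggingface_hub llama-cpp-python`; the exact
# .gguf filename is illustrative -- use one listed in this repository.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="merterbak/Trendyol-LLM-7B-chat-v4.1.0-GGUF",
    filename="Trendyol-LLM-7B-chat-v4.1.0-Q4_K_M.gguf",  # hypothetical name
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,        # context window; adjust to available memory
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are an e-commerce assistant."},
        {"role": "user", "content": "Write a short product description for a red cotton t-shirt."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```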
GGUF details:

  • Model size: 7.62B params
  • Architecture: qwen2

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit GGUF files.
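
To fetch only a single quantization level rather than the whole repository, snapshot_download can filter by filename pattern. The "*Q4*.gguf" pattern below is an assumption about how the 4-bit files are named; check it against the actual file list.

```python
# Download only the 4-bit GGUF file(s) instead of the whole repository.
# Assumes the 4-bit files carry a "Q4" tag in their names, which is the
# common GGUF convention but should be verified against the file list.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="merterbak/Trendyol-LLM-7B-chat-v4.1.0-GGUF",
    allow_patterns=["*Q4*.gguf"],
)
print("Downloaded to:", local_dir)
```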

