DeepSeek-R1-Distill-Qwen-14B-Japanese GGUF

Model Description

This repository provides GGUF quantized versions of the cyberagent/DeepSeek-R1-Distill-Qwen-14B-Japanese model from CyberAgent (14.8B parameters, qwen2 architecture).
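
The quantized files can be fetched with the huggingface_hub library. The sketch below is illustrative rather than part of this card; the filename is the Q4_K_M variant from the list in the next section, so swap in whichever file suits your hardware.

```python
# Minimal download sketch using huggingface_hub (pip install huggingface_hub).
# The filename is one of the quantized variants listed in the next section.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="aplulu/cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-GGUF",
    filename="DeepSeek-R1-Distill-Qwen-14B-Japanese-Q4_K_M.gguf",
)
print(model_path)  # local path to the cached GGUF file
```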

Choosing the Right Quantized Version

Which quantized version to choose depends on your available VRAM (an inference sketch follows the list):

  • ≤ 8GB VRAM: DeepSeek-R1-Distill-Qwen-14B-Japanese-IQ3_XS.gguf

  • ≤ 12GB VRAM: DeepSeek-R1-Distill-Qwen-14B-Japanese-IQ4_XS.gguf

  • ≤ 12GB VRAM: DeepSeek-R1-Distill-Qwen-14B-Japanese-Q4_K_M.gguf

  • ≤ 16GB VRAM: DeepSeek-R1-Distill-Qwen-14B-Japanese-Q6_K.gguf
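
Once downloaded, any of these files can be loaded with a llama.cpp-based runtime. The following is a minimal sketch using llama-cpp-python, not an official usage example: the n_gpu_layers and n_ctx values are illustrative and should be tuned to your VRAM, and the prompt is only an example.

```python
# Minimal inference sketch with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-R1-Distill-Qwen-14B-Japanese-Q4_K_M.gguf",
    n_gpu_layers=-1,  # offload all layers to the GPU; lower this on smaller cards
    n_ctx=4096,       # context window; adjust to your available memory
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "日本の首都はどこですか？"}],  # example prompt
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```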

License

MIT License
