---
license: mit
pipeline_tag: text-generation
language:
- ja
tags:
- japanese
- qwen2
- gguf
base_model:
- cyberagent/DeepSeek-R1-Distill-Qwen-14B-Japanese
datasets:
- TFMC/imatrix-dataset-for-japanese-llm
---
# DeepSeek-R1-Distill-Qwen-14B-Japanese GGUF

## Model Description
This repository provides a GGUF quantized version of the cyberagent/DeepSeek-R1-Distill-Qwen-14B-Japanese model from CyberAgent.
## Choosing the Right Quantized Version

The choice of quantized version depends on your available VRAM:
- **8GB VRAM:** `DeepSeek-R1-Distill-Qwen-14B-Japanese-IQ3_XS.gguf`
- **12GB VRAM:** `DeepSeek-R1-Distill-Qwen-14B-Japanese-IQ4_XS.gguf`
- **12GB VRAM:** `DeepSeek-R1-Distill-Qwen-14B-Japanese-Q4_K_M.gguf`
- **16GB VRAM:** `DeepSeek-R1-Distill-Qwen-14B-Japanese-Q6_K.gguf`
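As an illustration, one of the quantized files above can be downloaded and run locally with llama.cpp's `llama-cli`. This is a minimal sketch: the repository ID placeholder must be replaced with this repo's actual path, and the `-ngl` value assumes the full model fits on your GPU.

```shell
# Download one quantized file (repository ID is a placeholder -- substitute this repo's ID)
huggingface-cli download <namespace>/DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf \
  DeepSeek-R1-Distill-Qwen-14B-Japanese-Q4_K_M.gguf --local-dir .

# Run an interactive prompt with llama.cpp; -ngl 99 offloads all layers to the GPU,
# and -n 256 limits generation to 256 tokens
llama-cli -m DeepSeek-R1-Distill-Qwen-14B-Japanese-Q4_K_M.gguf \
  -p "こんにちは、自己紹介をしてください。" -n 256 -ngl 99
```

If the model does not fit entirely in VRAM, lowering `-ngl` offloads fewer layers to the GPU and keeps the remainder in system RAM at the cost of speed.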