---
license: mit
pipeline_tag: text-generation
language:
- ja
tags:
- japanese
- qwen2
- gguf
base_model:
- cyberagent/DeepSeek-R1-Distill-Qwen-14B-Japanese
datasets:
- TFMC/imatrix-dataset-for-japanese-llm
---
# DeepSeek-R1-Distill-Qwen-14B-Japanese GGUF
## Model Description
This repository provides a GGUF quantized version of the [cyberagent/DeepSeek-R1-Distill-Qwen-14B-Japanese](https://huggingface.co/cyberagent/DeepSeek-R1-Distill-Qwen-14B-Japanese) model from [CyberAgent](https://cyberagent.ai).
## Choosing the Right Quantized Version
Which quantized version to use depends on your available VRAM:
- >= 8GB VRAM: `DeepSeek-R1-Distill-Qwen-14B-Japanese-IQ3_XS.gguf`
- >= 12GB VRAM: `DeepSeek-R1-Distill-Qwen-14B-Japanese-IQ4_XS.gguf`
- >= 12GB VRAM: `DeepSeek-R1-Distill-Qwen-14B-Japanese-Q4_K_M.gguf`
- >= 16GB VRAM: `DeepSeek-R1-Distill-Qwen-14B-Japanese-Q6_K.gguf`
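The thresholds above can be sketched as a small helper that picks the largest file fitting a given VRAM budget (a hypothetical utility for illustration, not part of this repository):

```python
def pick_quant(vram_gb: float) -> str:
    """Return the filename of the largest quant that fits the given VRAM,
    following the thresholds listed above."""
    base = "DeepSeek-R1-Distill-Qwen-14B-Japanese"
    if vram_gb >= 16:
        return f"{base}-Q6_K.gguf"
    if vram_gb >= 12:
        # IQ4_XS is a slightly smaller alternative at this tier.
        return f"{base}-Q4_K_M.gguf"
    if vram_gb >= 8:
        return f"{base}-IQ3_XS.gguf"
    raise ValueError("At least 8 GB of VRAM is recommended for these files.")

print(pick_quant(12))  # DeepSeek-R1-Distill-Qwen-14B-Japanese-Q4_K_M.gguf
```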
## License
[MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE)