---
license: mit
tags:
- LiteRT
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
---
# litert-community/DeepSeek-R1-Distill-Qwen-1.5B
This model was converted to the LiteRT (formerly TFLite) format from [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) using [Google AI Edge Torch](https://github.com/google-ai-edge/ai-edge-torch).
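For reference, the core ai-edge-torch convert/export flow looks like the sketch below. This is a minimal generic example, not the exact script used to produce this checkpoint: LLM exports like this one go through the library's `ai_edge_torch.generative` pipeline (which also handles quantization), and the module and input shapes here are illustrative assumptions.

```python
# Minimal sketch of the generic ai-edge-torch conversion flow.
# Assumption: this is NOT the exact export script for this checkpoint;
# LLMs are exported via the ai_edge_torch.generative pipeline instead.
import torch
import torchvision
import ai_edge_torch

# Any torch.nn.Module in eval mode works; a small vision model keeps the sketch short.
model = torchvision.models.resnet18(
    weights=torchvision.models.ResNet18_Weights.IMAGENET1K_V1
).eval()
sample_inputs = (torch.randn(1, 3, 224, 224),)

# Trace the module against the sample inputs and serialize it as a .tflite flatbuffer.
edge_model = ai_edge_torch.convert(model, sample_inputs)
edge_model.export("model.tflite")
```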
## Run the model in Colab
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/#fileId=https://huggingface.co/litert-community/DeepSeek-R1-Distill-Qwen-1.5B/blob/main/deepseek%20tflite.ipynb)
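If you want to poke at the converted model outside the notebook, a minimal inspection sketch with the LiteRT Python interpreter is below. The file name is a placeholder, and the exported signature names are an assumption (ai-edge-torch LLM exports typically expose separate prefill and decode signatures); the linked notebook contains the full tokenizer and generation loop.

```python
# Sketch: inspect a converted LiteRT model from Python.
# Assumptions: "model.tflite" is a placeholder path, and the signature
# names printed below depend on how the model was exported.
from ai_edge_litert.interpreter import Interpreter  # pip install ai-edge-litert

interpreter = Interpreter(model_path="model.tflite", num_threads=4)
interpreter.allocate_tensors()

# List the exported signatures (e.g. separate prefill/decode entry points)
# along with their input and output tensor names.
print(interpreter.get_signature_list())
```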
## Run the model on Android
Please follow the [instructions](https://github.com/google-ai-edge/mediapipe-samples/blob/main/examples/llm_inference/android/README.md) for the MediaPipe LLM Inference Android sample.
## Benchmarking results
Note that all benchmark stats below were measured on a Samsung S24 Ultra.
<table border="1">
<tr>
<th>Model</th>
<td colspan="2">DeepSeek-R1-Distill-Qwen-1.5B (Int8 quantized)</td>
</tr>
<tr>
<th>Params</th>
<td colspan="2">1.78 B</td>
</tr>
<tr>
<th></th>
<td><b>Prefill 512 tokens</b></td><td><b>Decode 128 tokens</b></td>
</tr>
<tr>
<th>LiteRT tokens/s (XNNPACK, 4 threads)</th>
<td>260.95</td><td>23.126</td>
</tr>
<tr>
<th>GGML tokens/s (CPU, 4 threads)</th>
<td>64.66</td><td>23.85</td>
</tr>
</table>