---
license: mit
tags:
- LiteRT
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
---

# litert-community/DeepSeek-R1-Distill-Qwen-1.5B

This model was converted to LiteRT (aka TFLite) format from [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) using [Google AI Edge Torch](https://github.com/google-ai-edge/ai-edge-torch).

## Run the model in Colab

[Open the notebook in Colab](https://colab.research.google.com/#fileId=https://huggingface.co/litert-community/DeepSeek-R1-Distill-Qwen-1.5B/blob/main/deepseek%20tflite.ipynb).

## Run the model on Android

Please follow the [instructions](https://github.com/google-ai-edge/mediapipe-samples/blob/main/examples/llm_inference/android/README.md) for the MediaPipe LLM Inference Android sample. A minimal code sketch is shown below.
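For orientation, here is a minimal Kotlin sketch of the MediaPipe LLM Inference API (`com.google.mediapipe:tasks-genai`) that the linked sample is built on. The model path is a placeholder for wherever you push or download the converted model file from this repo on the device, and setter names can vary slightly between library versions, so treat this as an illustration of the flow rather than the sample's exact code.

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Minimal sketch: load the converted DeepSeek model and run one synchronous generation.
// The path below is a placeholder; copy the model file from this repo onto the device
// (e.g. via `adb push` or an in-app download) and point setModelPath at it.
fun runDeepSeek(context: Context, prompt: String): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/deepseek.task") // placeholder path
        .setMaxTokens(1024)                                // prompt + response token budget
        .build()

    val llm = LlmInference.createFromOptions(context, options)
    return llm.generateResponse(prompt)
}
```

Loading a 1.5 B-parameter model takes noticeable time on-device, so in practice you would create the `LlmInference` instance once and reuse it across prompts; the API also provides an async variant (`generateResponseAsync`) for streaming partial results.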
## Benchmarking results

All benchmark stats below are from a Samsung S24 Ultra running the Int8-quantized model (1.78 B parameters), measured with a 512-token prefill and a 128-token decode.

| Backend | Prefill (tokens/s) | Decode (tokens/s) |
|---|---|---|
| LiteRT (XNNPACK, 4 threads) | 260.95 | 23.126 |
| GGML (CPU, 4 threads) | 64.66 | 23.85 |