# Ring-lite-distill

<p align="center">
<img src="https://huggingface.co/inclusionAI/Ling-lite/resolve/main/ant-bailing.png" width="100"/>
</p>

<p align="center">
🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>
</p>

## Introduction

Ring-lite-distill is an MoE LLM provided and open-sourced by InclusionAI, with 16.8B total parameters and 2.75B activated parameters. It was fine-tuned from [Ling-lite](https://modelscope.cn/models/inclusionAI/Ling-lite) on extensive reasoning-focused instruction data. The model delivers performance comparable to DeepSeek-R1-Distill-Qwen-7B on reasoning benchmarks while achieving better results on general benchmarks, with notably superior performance on function-calling benchmarks (e.g., TEval, BFCL_v2) and instruction-following benchmarks (e.g., IFEval). This makes Ring-lite-distill a more balanced and versatile model. Additionally, it maintains competitive latency and throughput compared to other reasoning LLMs of similar size.

## Model Downloads

<div align="center">

| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :----------------: | :---------------: | :-------------------: | :----------------: | :----------: |
| Ring-lite-distill | 16.8B | 2.75B | 64K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ring-lite-distill)|

</div>

## Evaluation
In order to fully evaluate the model's performance, we examined Ring-lite-distill in terms of both reasoning ability and general ability.
### Reasoning ability

<div align="center">

| **Model** | **AIME24** | **MATH-500** | **GPQA-diamond** | **LiveCodeBench** |
| :----------------: | :---------------: | :-------------------: | :----------------: | :----------: |
| DeepSeek-R1-Distill-Qwen-7B (reported) | 55.5 | 92.8 | 49.1 | 37.6 |
| DeepSeek-R1-Distill-Qwen-7B (reproduce) | 53.2 | 93.7 | 50.4 | 36.5 |
| Ring-lite-distill | 54.2 | 93.0 | 47.5 | 32.3 |

</div>

### General ability

<div align="center">

| **Model** | **IFEval** | **Teval_zh** | **Teval_en** | **BFCL_v2** | **MMLU** |
| :----------------: | :---------------: | :-------------------: | :----------------: | :----------: | :----------: |
| DeepSeek-R1-Distill-Qwen-7B (reproduce) | 32.3 | 36.8 | 26.9 | 38.9 | 44.1 |
| Ring-lite-distill | 74.5 | 78.3 | 81.4 | 63.2 | 63.1 |

</div>
More details will be reported in our technical report [TBD].

## Quickstart

### 🤗 Hugging Face Transformers

Here is a code snippet to show you how to use the chat model with `transformers`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ring-lite-distill"

# Load the model and tokenizer; device_map="auto" places weights on available devices
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=8192
)
# Strip the prompt tokens so only the newly generated text is decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
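Distilled reasoning models commonly wrap their chain of thought in `<think>...</think>` tags before the final answer. Assuming Ring-lite-distill follows this convention (an assumption, not confirmed by this card), a small helper can separate the reasoning trace from the final reply:

```python
def split_reasoning(response: str) -> tuple[str, str]:
    """Split a decoded response into (reasoning, answer).

    Assumes the (hypothetical for this model) convention that the
    reasoning trace is wrapped in <think>...</think>. If no such
    block is found, the whole response is treated as the answer.
    """
    open_tag, close_tag = "<think>", "</think>"
    start = response.find(open_tag)
    end = response.find(close_tag)
    if start != -1 and end != -1 and end > start:
        reasoning = response[start + len(open_tag):end].strip()
        answer = response[end + len(close_tag):].strip()
        return reasoning, answer
    return "", response.strip()
```

This is convenient when only the final answer should be shown to users while the reasoning trace is logged separately.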

## Dataset
The training data of Ring-lite-distill will be released soon.

## Deployment
For deployment instructions, please refer to our [GitHub repository](https://github.com/inclusionAI/Ring/blob/main/README.md).

## License
This code repository is licensed under [the MIT License](https://huggingface.co/inclusionAI/Ring-lite-distill/blob/main/LICENSE).

## Citation
[TBD]
