revert readme file
README.md (CHANGED)
@@ -8,8 +8,7 @@ base_model:
 pipeline_tag: text-generation
 ---
 
-
-# Ring-lite-linear-preview
+# Ring-lite-distill-preview
 
 <p align="center">
 <img src="https://huggingface.co/inclusionAI/Ring-lite-distill-preview/resolve/main/ant-bailing.png" width="100"/>

@@ -21,20 +20,21 @@ pipeline_tag: text-generation
 
 ## Introduction
 
-Ring-lite-…
+Ring-lite-distill-preview is an MoE LLM provided and open-sourced by InclusionAI, with 16.8B total parameters and 2.75B activated parameters. It was fine-tuned from [Ling-lite](https://modelscope.cn/models/inclusionAI/Ling-lite) using extensive reasoning-focused instruction data. This model delivers performance comparable to DeepSeek-R1-Distill-Qwen-7B on reasoning benchmarks while achieving better results on general benchmarks, with especially strong performance on function-calling benchmarks (e.g., T-Eval, BFCL_v2) and instruction-following benchmarks (e.g., IFEval). This demonstrates that Ring-lite-distill is a more balanced and versatile model. Additionally, it maintains competitive latency and throughput compared with other reasoning LLMs of similar size.
+
 ## Model Downloads
 
 <div align="center">
 
 | **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
 | :----------------: | :---------------: | :-------------------: | :----------------: | :----------: |
-| Ring-lite-…
+| Ring-lite-distill-preview | 16.8B | 2.75B | 64K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ring-lite-distill) |
 
 </div>
 
 ## Evaluation
-In terms of the evaluation of reasoning ability, Ring-lite-linear-preview achi…
-…
+In order to fully evaluate the model's performance, we examined Ring-lite-distill-preview in terms of both reasoning ability and general ability.
+
+### Reasoning ability
 
 <div align="center">
 

@@ -42,38 +42,31 @@ In terms of the evaluation of reasoning ability, Ring-lite-linear-preview achi
 | :----------------: | :---------------: | :-------------------: | :----------------: | :----------: |
 | DeepSeek-R1-Distill-Qwen-7B (reported) | 55.5 | 92.8 | 49.1 | 37.6 |
 | DeepSeek-R1-Distill-Qwen-7B (reproduced) | 53.2 | 93.7 | 50.4 | 36.5 |
-| Ring-lite-distill-preview …
-| Ring-lite-linear-preview | 55.0 | 93.8 | 46.5 | 29.8 |
+| Ring-lite-distill-preview | 56.3 | 93.7 | 46.2 | 31.9 |
 
 </div>
 
-…
-
-…
-…
-<p align="center">
-<img src="https://modelscope.cn/api/v1/models/inclusionAI/Ring-lite-linear-preview/repo?Revision=master&FilePath=throughput.png&View=true" width="600"/>
-<p>
-
-Additionally, to illustrate the advantage in inference speed, we present a comparison between Ring-lite-linear-preview and the softmax-attention-based Ring-lite under a batch size of 64 and an output length of 16k (60x speedup). The KV cache usage of Ring-lite-linear-preview is nearly 1/6 that of Ring-lite, and the E2E time is reduced by 27.24% compared with Ring-lite.
-<p align="center">
-<img src="https://modelscope.cn/api/v1/models/inclusionAI/Ring-lite-linear-preview/repo?Revision=master&FilePath=inference_speed.gif&View=true" width="600"/>
-<p>
-
-…
-
-…
-…
-- [flash-linear-attention](https://github.com/fla-org/flash-linear-attention) >= 0.2.1
+### General ability
+
+<div align="center">
+
+| **Model** | **IFEval** | **T-Eval** | **BFCL_v2** | **MMLU** |
+| :----------------: | :---------------: | :-------------------: | :----------------: | :----------: |
+| DeepSeek-R1-Distill-Qwen-7B (reproduced) | 39.3 | 26.9 | 38.9 | 44.1 |
+| Ring-lite-distill-preview | 75.3 | 81.3 | 63.0 | 63.3 |
+
+</div>
+More details will be reported in our technical report. [TBD]
 
 ## Quickstart
 
-…
+### 🤗 Hugging Face Transformers
+Here is a code snippet to show you how to use the chat model with `transformers`:
 
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model_name = "inclusionAI/Ring-lite-linear-preview"
+model_name = "inclusionAI/Ring-lite-distill-preview"
 
 model = AutoModelForCausalLM.from_pretrained(
     model_name,

@@ -105,17 +98,14 @@ generated_ids = [
 response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 ```
 
-## Deployment
-
-Please refer to [Github](TBD)
-
 ## Dataset
-…
-…
+The training data of Ring-lite-distill-preview will be released soon.
 
+## Deployment
+Please refer to [GitHub](https://github.com/inclusionAI/Ring/blob/main/README.md)
 
 ## License
 This code repository is licensed under [the MIT License](https://huggingface.co/inclusionAI/Ring-lite-distill/blob/main/LICENSE).
 
 ## Citation
-[TBD]
+[TBD]
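Note that the diff elides the unchanged middle of the Quickstart snippet, so only its head and tail are visible above. For reference, here is a minimal, self-contained sketch of the usual `transformers` chat workflow for this model; everything between the `from_pretrained(` call and the final `batch_decode` call (the dtype/device arguments, the chat-template step, and the `max_new_tokens` setting) is an assumption based on the common pattern for chat models, not text recovered from this diff:

```python
# Hypothetical end-to-end version of the Quickstart snippet. The diff shows
# only the changed lines of the example; the chat-template and generation
# steps below follow the standard `transformers` chat pattern and are
# assumptions, not lines recovered from the diff.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ring-lite-distill-preview"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",      # assumption: let transformers pick the dtype
    device_map="auto",       # assumption: spread weights over available devices
    trust_remote_code=True,  # assumption: MoE model cards often require this
)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

# Build a chat prompt using the tokenizer's chat template.
messages = [
    {"role": "user", "content": "Give me a short introduction to large language models."}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=512)
# Drop the prompt tokens so only the newly generated text is decoded; this
# mirrors the `generated_ids = [` context line visible in the diff above.
generated_ids = [
    out_ids[len(in_ids):]
    for in_ids, out_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

The repository's actual Quickstart (the unchanged lines this diff hides) remains the authoritative version of this example.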