XinxingYang committed
Commit 50a1e69 · verified · 1 Parent(s): dcf1105

revert readme file

Files changed (1)
1. README.md +22 -32
README.md CHANGED
@@ -8,8 +8,7 @@ base_model:
 pipeline_tag: text-generation
 ---
 
-
- # Ring-lite-linear-preview
+ # Ring-lite-distill-preview
 
 <p align="center">
 <img src="https://huggingface.co/inclusionAI/Ring-lite-distill-preview/resolve/main/ant-bailing.png" width="100"/>
@@ -21,20 +20,21 @@ pipeline_tag: text-generation
 
 ## Introduction
 
- Ring-lite-linear-preview is a hybrid-linear MoE LLM provided and open-sourced by InclusionAI, which has 17.1B parameters with 3.0B activated parameters. It is a long reasoning model based on hybrid-linear attention, achieving near-linear computational complexity and near-constant space complexity during inference. This model was converted from [Ling-lite-0220](https://huggingface.co/models/inclusionAI/Ling-lite), which adopts the softmax attention-based architecture. It matches the performance of DeepSeek-R1-Distill-Qwen-7B on standardized reasoning benchmarks while substantially reducing computational overhead in both training and inference phases. In certain generation speed tests based on vLLM, we observed that the throughput was more than doubled compared to softmax attention models of the same scale (e.g., Ling-lite). To the best of our knowledge, it is the first open-source hybrid-linear reasoning language model.
+ Ring-lite-distill-preview is an MoE LLM provided and open-sourced by InclusionAI, which has 16.8B parameters with 2.75B activated parameters. It was fine-tuned from [Ling-lite](https://modelscope.cn/models/inclusionAI/Ling-lite) using extensive reasoning-focused instruction data. This model delivers performance comparable to DeepSeek-R1-Distill-Qwen-7B on reasoning benchmarks while achieving better results on general benchmarks, with especially strong performance on function-calling benchmarks (e.g., T-Eval, BFCL_v2) and instruction-following benchmarks (e.g., IFEval). This demonstrates that Ring-lite-distill is a more balanced and versatile model. Additionally, it maintains competitive latency and throughput compared to other reasoning LLMs of similar size.
+
 ## Model Downloads
 
 <div align="center">
 
 | **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
 | :----------------: | :---------------: | :-------------------: | :----------------: | :----------: |
- | Ring-lite-linear-preview | 17.1B | 3.0B | 64K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ring-lite-distill)|
+ | Ring-lite-distill-preview | 16.8B | 2.75B | 64K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ring-lite-distill) |
 
 </div>
 
 ## Evaluation
-
- In terms of the evaluation of reasoning ability, Ring-lite-linear-preview achieves 55.0 on AIME24 and 93.8 on MATH-500.
+ In order to fully evaluate the model's performance, we examined Ring-lite-distill-preview in terms of both reasoning ability and general ability.
+ ### Reasoning ability
 
 <div align="center">
 
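The removed introduction above credits hybrid-linear attention with near-linear compute and near-constant memory at inference time. The intuition: softmax attention keeps a KV cache that grows with every generated token, while linear attention folds the history into a fixed-size state. Below is a minimal PyTorch sketch of that recurrence — purely illustrative, not Ring-lite's actual kernel (the model's real implementation relies on [flash-linear-attention](https://github.com/fla-org/flash-linear-attention)); the feature map and dimensions are arbitrary choices for the sketch.

```python
import torch
import torch.nn.functional as F

d_k, d_v = 64, 64

def phi(x):
    # A simple positive feature map; real linear-attention variants differ.
    return F.elu(x) + 1

# Softmax attention caches every past (k, v): memory grows with sequence
# length T, and each decode step attends over all T cached entries.
# Linear attention folds the history into a fixed-size state instead:
#   S_t = S_{t-1} + phi(k_t) v_t^T,   o_t = phi(q_t)^T S_t / (phi(q_t)^T z_t)
S = torch.zeros(d_k, d_v)  # recurrent state: size is constant in T
z = torch.zeros(d_k)       # running normalizer

for _ in range(5):  # five decode steps; per-step cost is O(d_k * d_v)
    q, k, v = torch.randn(d_k), torch.randn(d_k), torch.randn(d_v)
    S = S + torch.outer(phi(k), v)
    z = z + phi(k)
    o = (phi(q) @ S) / (phi(q) @ z + 1e-6)

print(o.shape)  # torch.Size([64]) -- no KV cache anywhere
```

A hybrid-linear model interleaves layers of this form with ordinary softmax-attention layers, which is presumably why the complexity figures above are "near"-linear and "near"-constant rather than exact.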
@@ -42,38 +42,31 @@ In terms of the evaluation of reasoning ability, Ring-lite-linear-preview achi
 | :----------------: | :---------------: | :-------------------: | :----------------: | :----------: |
 | DeepSeek-R1-Distill-Qwen-7B (reported) | 55.5 | 92.8 | 49.1 | 37.6 |
 | DeepSeek-R1-Distill-Qwen-7B (reproduce) | 53.2 | 93.7 | 50.4 | 36.5 |
- | Ring-lite-distill-preview-Stage-1 | 54.2 | 93.5 | 47.5 | 32.9 |
- | Ring-lite-linear-preview | 55.0 | 93.8 | 46.5 | 29.8 |
+ | Ring-lite-distill-preview | 56.3 | 93.7 | 46.2 | 31.9 |
 
 </div>
 
- ## Inference Speed
+ ### General ability
 
- To evaluate the generation throughput, we deploy Ring-lite-linear and the softmax-attention-based Ring-lite based on vLLM on a single NVIDIA A100 GPU. Specifically, the input sequence length is fixed to 1. The end-to-end (E2E) generation time required for generating output sequences of varying lengths is illustrated below. It is shown in the figure that at 32k output length, Ring-lite-linear-preview achieves 2.2× throughput of Ring-lite.
-
- <p align="center">
- <img src="https://modelscope.cn/api/v1/models/inclusionAI/Ring-lite-linear-preview/repo?Revision=master&FilePath=throughput.png&View=true" width="600"/>
- </p>
-
- Additionally, to illustrate the advantage in inference speed, we present a comparison between Ring-lite-linear-preview and softmax-attention-based Ring-lite under a batch size of 64 and an output length of 16k (60x speedup). It can be observed that the KV cache usage of Ring-lite-linear-preview is nearly 1/6 that of Ring-lite, and the E2E time is reduced by 27.24% compared with Ring-lite.
- <p align="center">
- <img src="https://modelscope.cn/api/v1/models/inclusionAI/Ring-lite-linear-preview/repo?Revision=master&FilePath=inference_speed.gif&View=true" width="600"/>
- </p>
+ <div align="center">
 
- More details will be reported in our technical report [TBD]
+ | **Model** | **IFEval** | **T-Eval** | **BFCL_v2** | **MMLU** |
+ | :----------------: | :---------------: | :-------------------: | :----------------: | :----------: |
+ | DeepSeek-R1-Distill-Qwen-7B (reproduce) | 39.3 | 26.9 | 38.9 | 44.1 |
+ | Ring-lite-distill-preview | 75.3 | 81.3 | 63.0 | 63.3 |
 
- ## Requirements
- - [transformers](https://github.com/huggingface/transformers) >= 4.48.3
- - [flash-linear-attention](https://github.com/fla-org/flash-linear-attention) >= 0.2.1
+ </div>
+ More details will be reported in our technical report. [TBD]
 
 ## Quickstart
 
- Here is a code snippet to show you how to use the chat model with `modelscope`:
+ ### 🤗 Hugging Face Transformers
+ Here is a code snippet to show you how to use the chat model with `transformers`:
 
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
- model_name = "inclusionAI/Ring-lite-linear-preview"
+ model_name = "inclusionAI/Ring-lite-distill-preview"
 
 model = AutoModelForCausalLM.from_pretrained(
 model_name,
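The removed "Inference Speed" section above describes its measurement setup only in prose (vLLM, single A100, input length fixed to 1, varying output lengths). A minimal sketch of how such an end-to-end throughput run could be scripted against vLLM's public API — hypothetical harness code, assuming a vLLM build that supports this model's hybrid-linear architecture; only the model name, the one-token input, and the output-length sweep are taken from the text:

```python
# Hypothetical E2E throughput harness in the style of the removed section.
import time

from vllm import LLM, SamplingParams

llm = LLM(model="inclusionAI/Ring-lite-linear-preview", trust_remote_code=True)

for out_len in (1024, 4096, 16384, 32768):
    # ignore_eos forces generation to run all the way to max_tokens, so every
    # request emits exactly out_len tokens regardless of content.
    params = SamplingParams(max_tokens=out_len, ignore_eos=True)
    prompts = ["a"]  # input sequence length fixed to (roughly) one token

    start = time.perf_counter()
    outputs = llm.generate(prompts, params)
    elapsed = time.perf_counter() - start

    generated = sum(len(o.outputs[0].token_ids) for o in outputs)
    print(f"out_len={out_len}: {generated / elapsed:.1f} tok/s, E2E {elapsed:.1f}s")
```

Pinning the output length with `ignore_eos=True` keeps runs at different lengths directly comparable, which matches the E2E-time-versus-output-length framing of the removed figure.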
@@ -105,17 +98,14 @@ generated_ids = [
 response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 ```
 
- ## Deployment
-
- Please refer to [Github](TBD)
-
 ## Dataset
+ The training data of Ring-lite-distill-preview will be released soon.
 
- The long reasoning sft data: [Ring-lite-distill-preview-sft-data](https://huggingface.co/datasets/inclusionAI/Ring-lite-distill-preview-sft-data)
-
+ ## Deployment
+ Please refer to [GitHub](https://github.com/inclusionAI/Ring/blob/main/README.md)
 
 ## License
 This code repository is licensed under [the MIT License](https://huggingface.co/inclusionAI/Ring-lite-distill/blob/main/LICENSE).
 
 ## Citation
- [TBD]
+ [TBD]
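The hunks above show only the head and tail of the Quickstart snippet; the middle (chat-template application and generation) falls between hunks and is elided by the diff. For reference, here is a self-contained version following the standard `transformers` chat pattern that the visible fragments fit — a reconstruction under that assumption, not the README's verbatim code; the prompt text and generation settings are illustrative:

```python
# Reconstruction of the standard `transformers` chat pattern; the exact
# from_pretrained kwargs and generation settings in the README may differ.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ring-lite-distill-preview"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

messages = [
    {"role": "user", "content": "Give me a short introduction to large language models."}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=512)
# Strip the prompt tokens so only the newly generated text is decoded,
# matching the `generated_ids = [...]` / batch_decode tail shown in the diff.
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```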
 