yajunvicky committed
Commit 989c827 · verified · 1 Parent(s): 750247f

Initial model upload

Files changed (4)
  1. .DS_Store +0 -0
  2. README.md +166 -0
  3. configuration.json +4 -0
  4. image/group.png +0 -0
.DS_Store ADDED
Binary file (6.15 kB).
 
README.md ADDED
@@ -0,0 +1,166 @@
# Introduction

DeepSeek-R1-INT4-FlagOS-Iluvatar provides an all-in-one deployment solution, enabling execution of DeepSeek-R1-INT4 on Iluvatar GPUs. As the first-generation release for the ILUVATAR-BI150, this package delivers the following key features:

1. Comprehensive Integration:
   - Integrated with FlagScale (https://github.com/FlagOpen/FlagScale).
   - Open-source inference execution code, preconfigured with all necessary software and hardware settings.
   - Pre-built Docker image for rapid deployment on the ILUVATAR-BI150.
2. Consistency Validation:
   - Evaluation tests verifying that our results are consistent with those of the official release.

# Technical Summary

## Serving Engine

We use FlagScale as the serving engine to improve the portability of distributed inference.

FlagScale is an end-to-end framework for large models across multiple chips, maximizing computational resource efficiency while ensuring model effectiveness. It offers both ease of use and high performance when deploying models across different chip architectures:

- One-Click Service Deployment: FlagScale provides a unified and simple command execution mechanism, allowing users to quickly deploy services across various hardware platforms with the same command (see the example right after this list). This significantly lowers the entry barrier and improves the user experience.
- Automated Deployment Optimization: FlagScale automatically optimizes distributed parallel strategies based on the computational capabilities of different AI chips, ensuring optimal resource allocation and efficient utilization, thereby improving overall deployment performance.
- Automatic Operator Library Switching: Leveraging FlagScale's unified Runner mechanism and its deep integration with FlagGems, users can switch to the FlagGems operator library for inference simply by adding environment variables in the configuration file.
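
As noted in the first bullet above, the unified entry point is a single command; the invocation below is the same one used in the Serve section later in this guide:

```bash
# the same FlagScale command works across supported hardware platforms
flagscale serve deepseek_r1_int4
```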

## Triton Support

We validate the execution of the DeepSeek-R1-INT4 model with a Triton-based operator library as an alternative to the default PyTorch operators.

We run the DeepSeek-R1-INT4 model with a variety of Triton-implemented operation kernels (approximately 70% of those used), which come from two main sources:

- Most Triton kernels are provided by FlagGems (https://github.com/FlagOpen/FlagGems). You can enable them by setting the environment variable USE_FLAGGEMS (see the sketch after this list). For more details, please refer to the "How to Run Locally" section.

- Also included are Triton kernels from vLLM, such as the fused MoE kernel.
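
A minimal sketch of enabling the FlagGems path. Exactly where FlagScale reads USE_FLAGGEMS (e.g. an env entry in the serve YAML) and which value it expects are assumptions here; exporting it in the serving environment before launch is the simplest equivalent:

```bash
# assumption: a truthy USE_FLAGGEMS in the serving environment switches inference to FlagGems kernels
export USE_FLAGGEMS=1
flagscale serve deepseek_r1_int4
```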

# Bundle Download

As requested by Iluvatar, the Docker image and the model files must be obtained by email.

|             | Usage                                                  | How to obtain                                                                                                                                                |
| ----------- | ------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Basic Image | Basic software environment that supports model running | Contact services@iluvatar.com by email; please indicate your organization, contact person, contact information, equipment source, and specific requirements |

# Evaluation Results

## Benchmark Result

| Metrics               | DeepSeek-R1-INT4-H100-CUDA | DeepSeek-R1-INT4-FlagOS-Iluvatar |
|:----------------------|----------------------------|----------------------------------|
| GSM8K (EM)            | 95.75                      | 95.07                            |
| MMLU (Acc.)           | 85.34                      | 85.02                            |
| CEVAL                 | 89.00                      | 88.78                            |
| AIME 2024 (Pass@1)    | 76.67                      | 76.67 (±0.67)                    |
| GPQA-Diamond (Pass@1) | 70.20                      | 69.70                            |
| MATH-500 (Pass@1)     | 93.20                      | 94.20                            |

# How to Run Locally

## 📌 Getting Started

### Download open-source weights

```bash
pip install modelscope
modelscope download --model <Model Name> --local_dir <Cache Path>
```
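
`<Model Name>` and `<Cache Path>` are placeholders; a filled-in invocation would look like the following, where the repo id and the local path are purely illustrative (check the model page for the actual repo id):

```bash
# illustrative repo id and path only; substitute the real ModelScope repo id and your own directory
modelscope download --model FlagRelease/DeepSeek-R1-INT4-FlagOS-Iluvatar --local_dir /home/models/DeepSeek-R1-INT4
```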

### Download the FlagOS image

```bash
docker pull baai_v4
```
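
After pulling (or loading the image obtained by email), you can confirm it is available locally before starting the container in the next step:

```bash
# list local images and check that the FlagOS image is present
docker images | grep baai
```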

### Start the inference service

```bash
docker run --shm-size="32g" -itd -v /dev:/dev -v /usr/src/:/usr/src -v /lib/modules/:/lib/modules -v /home:/home -v /mnt/share/:/data1 --privileged --cap-add=ALL --pid=host --net=host --name baai_v4 baai:v4

docker exec -it baai_v4 bash
```

### Download FlagScale and unpatch the vendor's code to build vLLM

```bash
git clone https://ghfast.top/https://github.com/FlagOpen/FlagScale.git
cd FlagScale
git checkout 86ac9eef0bd8d0d0cb29586117ea182f38269c5b
# unpatch (NOTE: git config must be set first; see the example below)
python3 tools/patch/unpatch.py --device-type bi_V150 --commit-id 758e33e0 --key-path ~/flagscale_0402_key --dir build
# compile vLLM
cd build/bi_V150/FlagScale/vllm
bash clean_vllm.sh; bash build_vllm.sh; bash install_vllm.sh
cd ..
```
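
The note above refers to the git identity; if it is not already configured inside the container, a minimal setup (example values, substitute your own) is:

```bash
# example identity only; replace with your own name and email
git config --global user.name "Your Name"
git config --global user.email "you@example.com"
```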

### Serve

```bash
# configure the deepseek-r1-int4 YAML files
FlagScale/
├── examples/
│   └── deepseek_r1_int4/
│       └── conf/
│           ├── hostfile.txt                      # modify the local IP
│           ├── config_deepseek_r1_int4.yaml      # modify the container name
│           └── serve/
│               └── deepseek_r1_int4.yaml         # add the batch limit: max-num-seqs: 4

# install FlagScale
pip install .
# start the server
flagscale serve deepseek_r1_int4
```
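
Once `flagscale serve` is running, a quick way to check the deployment is to query the OpenAI-compatible endpoint that vLLM-based serving typically exposes. The host, port, and model name below are assumptions; read the actual values from examples/deepseek_r1_int4/conf/serve/deepseek_r1_int4.yaml:

```bash
# hypothetical endpoint and model id; check the serve YAML for the real port and model name
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "deepseek_r1_int4", "messages": [{"role": "user", "content": "Hello"}], "max_tokens": 32}'
```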

# Contributing

We warmly welcome global developers to join us:
1. Submit Issues to report problems
2. Create Pull Requests to contribute code
3. Improve technical documentation
4. Expand hardware adaptation support

# 📞 Contact Us

Scan the QR code below to join our WeChat group and send "FlagRelease".

![WeChat](image/group.png)

# License

This project and related model weights are licensed under the MIT License.

configuration.json ADDED
@@ -0,0 +1,4 @@
{
    "framework": "Pytorch",
    "task": "any-to-any"
}
image/group.png ADDED