xcjthu committed
Commit 026e467 · verified · 1 Parent(s): fc9b50f

update the link for technical report

Files changed (1)
  1. README.md +119 -7
README.md CHANGED
@@ -1,21 +1,18 @@
  ---
  language:
  - zh
  - en
- library_name: transformers
- license: apache-2.0
  pipeline_tag: text-generation
  ---
-
- MiniCPM4-8B is a highly efficient large language model (LLM) designed explicitly for end-side devices. It achieves this efficiency through systematic innovation in model architecture, training data, training algorithms, and inference systems. The details can be found in [MiniCPM4: Ultra-Efficient LLMs on End Devices](https://huggingface.co/papers/2506.07900).
-
  <div align="center">
  <img src="https://github.com/OpenBMB/MiniCPM/blob/main/assets/minicpm_logo.png?raw=true" width="500em" ></img>
  </div>

  <p align="center">
  <a href="https://github.com/OpenBMB/MiniCPM/" target="_blank">GitHub Repo</a> |
- <a href="https://github.com/OpenBMB/MiniCPM/tree/main/report/MiniCPM_4_Technical_Report.pdf" target="_blank">Technical Report</a>
  </p>
  <p align="center">
  👋 Join us on <a href="https://discord.gg/3cGQn9b3YM" target="_blank">Discord</a> and <a href="https://github.com/OpenBMB/MiniCPM/blob/main/assets/wechat.jpg" target="_blank">WeChat</a>
@@ -83,6 +80,13 @@ MiniCPM4 natively supports context lengths of up to 32,768 tokens. To reproduce
  }
  ```

  ### Inference with Transformers
  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer
@@ -195,4 +199,112 @@ Then you can use the chat interface by running the following command:
  ```python
  import openai

- client =

  ---
+ license: apache-2.0
  language:
  - zh
  - en
  pipeline_tag: text-generation
+ library_name: transformers
  ---
  <div align="center">
  <img src="https://github.com/OpenBMB/MiniCPM/blob/main/assets/minicpm_logo.png?raw=true" width="500em" ></img>
  </div>

  <p align="center">
  <a href="https://github.com/OpenBMB/MiniCPM/" target="_blank">GitHub Repo</a> |
+ <a href="https://arxiv.org/abs/2506.07900" target="_blank">Technical Report</a>
  </p>
  <p align="center">
  👋 Join us on <a href="https://discord.gg/3cGQn9b3YM" target="_blank">Discord</a> and <a href="https://github.com/OpenBMB/MiniCPM/blob/main/assets/wechat.jpg" target="_blank">WeChat</a>
 
  }
  ```

+ After modification, you can run the following command to reproduce the long-context acceleration effect (the script will automatically download the model weights from Hugging Face):
+ ```bash
+ python3 tests/test_generate.py
+ ```
+
+ For more details about CPM.cu, please refer to [the CPM.cu repository](https://github.com/OpenBMB/cpm.cu).
+
  ### Inference with Transformers
  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer
 
  ```python
  import openai

+ client = openai.Client(base_url="http://localhost:30000/v1", api_key="None")
+
+ response = client.chat.completions.create(
+     model="openbmb/MiniCPM4-8B",
+     messages=[
+         {"role": "user", "content": "Write an article about Artificial Intelligence."},
+     ],
+     temperature=0.7,
+     max_tokens=1024,
+ )
+
+ print(response.choices[0].message.content)
+ ```
+
+ ### Inference with [vLLM](https://github.com/vllm-project/vllm)
+ For now, you need to install the latest (nightly) version of vLLM:
+ ```bash
+ pip install -U vllm \
+     --pre \
+     --extra-index-url https://wheels.vllm.ai/nightly
+ ```
+
+ Then you can run inference with MiniCPM4-8B using vLLM:
+ ```python
+ from transformers import AutoTokenizer
+ from vllm import LLM, SamplingParams
+
+ model_name = "openbmb/MiniCPM4-8B"
+ prompt = [{"role": "user", "content": "Please recommend 5 tourist attractions in Beijing. "}]
+
+ tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
+ input_text = tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True)
+
+ llm = LLM(
+     model=model_name,
+     trust_remote_code=True,
+     max_num_batched_tokens=32768,
+     dtype="bfloat16",
+     gpu_memory_utilization=0.8,
+ )
+ sampling_params = SamplingParams(top_p=0.7, temperature=0.7, max_tokens=1024, repetition_penalty=1.02)
+
+ outputs = llm.generate(prompts=input_text, sampling_params=sampling_params)
+
+ print(outputs[0].outputs[0].text)
+ ```
+
+ You can also start an inference server by running the following command:
+ > **Note**: In vLLM's chat API, `add_special_tokens` is `False` by default. This means important special tokens, such as the beginning-of-sequence (BOS) token, will not be added automatically. To ensure the input prompt is correctly formatted for the model, you should explicitly set `extra_body={"add_special_tokens": True}`.
+
+ ```bash
+ vllm serve openbmb/MiniCPM4-8B
+ ```
+
+ Then you can use the chat interface by running the following code:
+
+ ```python
+ import openai
+
+ client = openai.Client(base_url="http://localhost:8000/v1", api_key="EMPTY")
+
+ response = client.chat.completions.create(
+     model="openbmb/MiniCPM4-8B",
+     messages=[
+         {"role": "user", "content": "Write an article about Artificial Intelligence."},
+     ],
+     temperature=0.7,
+     max_tokens=1024,
+     extra_body=dict(add_special_tokens=True),  # ensure special tokens are added for the chat template
+ )
+
+ print(response.choices[0].message.content)
+ ```
+
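To see concretely what the note above guards against, you can compare tokenization with and without special tokens. This is an illustrative check rather than part of the model card, and it assumes the MiniCPM4 tokenizer exposes a standard BOS token:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("openbmb/MiniCPM4-8B", trust_remote_code=True)

text = "Write an article about Artificial Intelligence."
with_special = tok(text, add_special_tokens=True)["input_ids"]
without_special = tok(text, add_special_tokens=False)["input_ids"]

# With add_special_tokens=True the BOS token id should be prepended to the sequence.
print("BOS token:", tok.bos_token, tok.bos_token_id)
print("with special tokens:   ", with_special[:5])
print("without special tokens:", without_special[:5])
```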
+ ## Evaluation Results
+ On two typical end-side chips, Jetson AGX Orin and RTX 4090, MiniCPM4 processes long texts significantly faster than models of a similar size, and its efficiency advantage grows as the text length increases. On the Jetson AGX Orin platform, MiniCPM4 achieves an approximately 7x improvement in decoding speed over Qwen3-8B.
+
+ ![benchmark](https://github.com/OpenBMB/MiniCPM/blob/main/assets/minicpm4/efficiency.png?raw=true)
+
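For reference, the decoding speed reported above is simply the number of generated tokens per second. The sketch below is a minimal, illustrative way to time it with Transformers; the prompt, generation length, and hardware are placeholders, and this naive measurement (which includes prefill) will not reproduce the benchmark numbers in the report:

```python
import time

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical timing check: report generated tokens per second for one prompt.
model_name = "openbmb/MiniCPM4-8B"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

inputs = tokenizer("Write an article about Artificial Intelligence.", return_tensors="pt").to(model.device)

start = time.time()
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
elapsed = time.time() - start

new_tokens = output.shape[1] - inputs["input_ids"].shape[1]
print(f"{new_tokens} new tokens in {elapsed:.2f}s -> {new_tokens / elapsed:.1f} tokens/s")
```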
+ #### Comprehensive Evaluation
+ MiniCPM4 is released in end-side versions at the 8B and 0.5B parameter scales, both achieving best-in-class performance in their respective categories.
+
+ ![benchmark](https://github.com/OpenBMB/MiniCPM/blob/main/assets/minicpm4/benchmark.png?raw=true)
+
+ #### Long Text Evaluation
+ MiniCPM4 is pre-trained with a 32K context length and extends to longer contexts through YaRN. On the 128K needle-in-a-haystack task, MiniCPM4 demonstrates outstanding performance.
+
+ ![long-niah](https://github.com/OpenBMB/MiniCPM/blob/main/assets/minicpm4/128k-niah.png?raw=true)
+
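In the Hugging Face ecosystem, YaRN-style length extension of the kind mentioned above is normally expressed as a `rope_scaling` entry in the model configuration. The sketch below is purely illustrative: the exact fields and factors MiniCPM4 expects may differ, so check the model's `config.json` before relying on it.

```python
import torch
from transformers import AutoModelForCausalLM

# Hypothetical example: request a 4x context extension (32K -> 128K) through a
# YaRN-style rope_scaling override, using the generic Hugging Face field names.
# MiniCPM4's own config.json may use different keys and factors; treat this as a
# sketch of the mechanism rather than the model's documented setting.
model = AutoModelForCausalLM.from_pretrained(
    "openbmb/MiniCPM4-8B",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    rope_scaling={
        "rope_type": "yarn",
        "factor": 4.0,
        "original_max_position_embeddings": 32768,
    },
)
print(model.config.rope_scaling)
```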
+ ## Statement
+ - As a language model, MiniCPM generates content by learning from a vast amount of text.
+ - However, it does not possess the ability to comprehend or express personal opinions or value judgments.
+ - Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers.
+ - Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own.
+
+ ## LICENSE
+ - This repository and the MiniCPM models are released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) License.
+
+ ## Citation
+ - Please cite our [paper](https://github.com/OpenBMB/MiniCPM/tree/main/report/MiniCPM_4_Technical_Report.pdf) if you find our work valuable.
+
+ ```bibtex
+ @article{minicpm4,
+   title={{MiniCPM4}: Ultra-Efficient LLMs on End Devices},
+   author={MiniCPM Team},
+   year={2025}
+ }
+ ```