---
base_model: PLM-Team/PLM-1.8B-Instruct
language:
- en
- zh
library_name: transformers
license: apache-2.0
quantized_by: PLM-Team
pipeline_tag: text-generation
---
<center>
<img src="https://www.cdeng.net/plm/plm_logo.png" alt="plm-logo" width="200"/>
<h2>🖲️ PLM: Efficient Peripheral Language Models Hardware-Co-Designed for Ubiquitous Computing</h2>
<a href='https://www.project-plm.com/'>👉 Project PLM Website</a>
</center>

<center>

|||||||
|:-:|:-:|:-:|:-:|:-:|:-:|
|<a href='https://arxiv.org/abs/2503.12167'><img src='https://img.shields.io/badge/Paper-ArXiv-C71585'></a>|<a href='https://huggingface.co/PLM-Team/PLM-1.8B-Base'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging Face-Base-red'></a>|<a href='https://huggingface.co/PLM-Team/PLM-1.8B-Instruct'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging Face-Instruct-red'></a>|<a href='https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging Face-gguf-red'></a>|<a href='https://huggingface.co/datasets/plm-team/scots'><img src='https://img.shields.io/badge/Data-plm%20mix-4169E1'></a>|<a><img src="https://img.shields.io/github/stars/plm-team/PLM"></a>|

</center>

---

The PLM (Peripheral Language Model) series brings powerful language capabilities to peripheral computing within the constraints of resource-limited devices. Through a model-system co-design strategy, PLM optimizes model performance while meeting edge-system requirements: it employs **Multi-head Latent Attention** and **squared ReLU** activation to achieve sparsity, significantly reducing its memory footprint and computational demands. Coupled with a meticulously crafted training regimen using curated datasets and a Warmup-Stable-Decay-Constant learning-rate scheduler, PLM outperforms existing small language models while activating the fewest parameters, making it well suited for deployment on diverse peripheral platforms such as mobile phones and the Raspberry Pi.

**Here we present the static quants of https://huggingface.co/PLM-Team/PLM-1.8B-Instruct.**

## Provided Quants

| Link | Type | Size | Notes |
|:-----|:-----|-----:|:------|
|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-F16.gguf|F16| 3.66 GB| Recommended|
|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q2_K.gguf|Q2_K| 827 MB| |
|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q3_K_L.gguf|Q3_K_L| 1.09 GB| |
|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q3_K_M.gguf|Q3_K_M| 1.01 GB| |
|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q3_K_S.gguf|Q3_K_S| 912 MB| |
|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q4_0.gguf|Q4_0| 1.11 GB| |
|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q4_1.gguf|Q4_1| 1.21 GB| |
|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q4_K_M.gguf|Q4_K_M| 1.18 GB| Recommended|
|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q4_K_S.gguf|Q4_K_S| 1.12 GB| |
|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q5_0.gguf|Q5_0| 1.3 GB| |
|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q5_1.gguf|Q5_1| 1.4 GB| |
|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q5_K_M.gguf|Q5_K_M| 1.34 GB| |
|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q5_K_S.gguf|Q5_K_S| 1.3 GB| |
|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q6_K.gguf|Q6_K| 1.5 GB| |
|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q8_0.gguf|Q8_0| 1.95 GB| Recommended|
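
If you only need one of these files, `huggingface-cli` can fetch a single quant rather than the whole repository. A minimal sketch (pick any filename from the table above; the target directory is illustrative):

```bash
# Download just the recommended Q4_K_M quant into the current directory
huggingface-cli download PLM-Team/PLM-1.8B-Instruct-gguf PLM-1.8B-Instruct-Q4_K_M.gguf --local-dir .
```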

## Usage (llama.cpp)

[llama.cpp](https://github.com/ggml-org/llama.cpp) now supports our model. To get started, clone the repository:

```bash
git clone https://github.com/Si1w/llama.cpp.git
cd llama.cpp
```

If you want to convert the original model into `gguf` format yourself, run:

```bash
pip install -r requirements.txt
python convert_hf_to_gguf.py [model] --outtype {f32,f16,bf16,q8_0,tq1_0,tq2_0,auto}
```
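
For example, assuming the original checkpoint has been downloaded to a local `./PLM-1.8B-Instruct` directory (the path and output filename here are illustrative):

```bash
# Convert the HF checkpoint to a single F16 GGUF file
python convert_hf_to_gguf.py ./PLM-1.8B-Instruct --outtype f16 --outfile PLM-1.8B-Instruct-F16.gguf
```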

Then build `llama.cpp` for CPU or GPU (e.g. NVIDIA Orin). The build is based on `cmake`.

- For CPU:

```bash
cmake -B build
cmake --build build --config Release
```

- For GPU:

```bash
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release
```

Don't forget to download the PLM GGUF files; the quantized variants in this repository were generated with `llama.cpp`'s quantization tools.

```bash
huggingface-cli download --resume-download PLM-Team/PLM-1.8B-Instruct-gguf --local-dir PLM-Team/PLM-1.8B-Instruct-gguf
```
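
If you prefer to produce your own quants from the F16 file, `llama.cpp` builds a `llama-quantize` tool alongside its other binaries. A minimal sketch (input and output paths are illustrative):

```bash
# Quantize the F16 GGUF down to Q4_K_M
./build/bin/llama-quantize \
  ./PLM-Team/PLM-1.8B-Instruct-gguf/PLM-1.8B-Instruct-F16.gguf \
  ./PLM-1.8B-Instruct-Q4_K_M.gguf Q4_K_M
```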

After building `llama.cpp`, we can use `llama-cli` to launch PLM:

```bash
./build/bin/llama-cli -m ./PLM-Team/PLM-1.8B-Instruct-gguf/PLM-1.8B-Instruct-Q8_0.gguf -cnv -p "hello!" -n 128
```
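
To serve PLM over HTTP instead of an interactive shell, `llama-server` (built alongside `llama-cli`) exposes an OpenAI-compatible API. A minimal sketch with default settings (the port and prompt are illustrative):

```bash
./build/bin/llama-server -m ./PLM-Team/PLM-1.8B-Instruct-gguf/PLM-1.8B-Instruct-Q8_0.gguf --port 8080

# In another shell, query the OpenAI-compatible chat endpoint
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "hello!"}]}'
```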

## Citation

If you find Project PLM helpful for your research or applications, please cite as follows:

```
@misc{deng2025plmefficientperipherallanguage,
      title={PLM: Efficient Peripheral Language Models Hardware-Co-Designed for Ubiquitous Computing},
      author={Cheng Deng and Luoyang Sun and Jiwen Jiang and Yongcheng Zeng and Xinjian Wu and Wenxin Zhao and Qingfa Xiao and Jiachuan Wang and Lei Chen and Lionel M. Ni and Haifeng Zhang and Jun Wang},
      year={2025},
      eprint={2503.12167},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2503.12167},
}
```