---
base_model:
- rednote-hilab/dots.llm1.base
---

# gghfez/dots.llm1.base-GGUF

This is the **base model**.

The instruct model is here: [gghfez/dots.llm1.inst-GGUF](https://huggingface.co/gghfez/dots.llm1.inst-GGUF)

Running these GGUFs requires a fork of llama.cpp.

## install
```
git clone https://github.com/Noeda/llama.cpp
cd llama.cpp
git checkout dots1
cmake -B build   # add your usual cmake flags / see the official build docs
cmake --build build --config Release
```

## run
Use the following CLI args to override the chat template and special tokens:

```
./llama-cli -m ./dots.llm1.inst-GGUF/dots.1.base.q4_k.gguf-00001-of-00002.gguf --ctx-size 8192 --n-gpu-layers 64 -t 16 --temp 0.3 --chat-template "{% if messages[0]['role'] == 'system' %}<|system|>{{ messages[0]['content'] }}<|endofsystem|>{% set start_idx = 1 %}{% else %}<|system|>You are a helpful assistant.<|endofsystem|>{% set start_idx = 0 %}{% endif %}{% for idx in range(start_idx, messages|length) %}{% if messages[idx]['role'] == 'user' %}<|userprompt|>{{ messages[idx]['content'] }}<|endofuserprompt|>{% elif messages[idx]['role'] == 'assistant' %}<|response|>{{ messages[idx]['content'] }}<|endofresponse|>{% endif %}{% endfor %}{% if add_generation_prompt and messages[-1]['role'] == 'user' %}<|response|>{% endif %}" --jinja --override-kv tokenizer.ggml.bos_token_id=int:-1 --override-kv tokenizer.ggml.eos_token_id=int:151645 --override-kv tokenizer.ggml.pad_token_id=int:151645 --override-kv tokenizer.ggml.eot_token_id=int:151649 --override-kv tokenizer.ggml.eog_token_id=int:151649
```
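To see exactly what prompt text the override produces, the chat template passed above can be rendered offline with Jinja2. This is a standalone sketch, not part of this repo; it assumes the `jinja2` package is installed:

```python
# Render the dots.llm1 chat template (copied from the --chat-template arg above)
# to inspect the special-token prompt format.
from jinja2 import Template

CHAT_TEMPLATE = (
    "{% if messages[0]['role'] == 'system' %}<|system|>{{ messages[0]['content'] }}"
    "<|endofsystem|>{% set start_idx = 1 %}{% else %}<|system|>You are a helpful "
    "assistant.<|endofsystem|>{% set start_idx = 0 %}{% endif %}"
    "{% for idx in range(start_idx, messages|length) %}"
    "{% if messages[idx]['role'] == 'user' %}"
    "<|userprompt|>{{ messages[idx]['content'] }}<|endofuserprompt|>"
    "{% elif messages[idx]['role'] == 'assistant' %}"
    "<|response|>{{ messages[idx]['content'] }}<|endofresponse|>"
    "{% endif %}{% endfor %}"
    "{% if add_generation_prompt and messages[-1]['role'] == 'user' %}<|response|>{% endif %}"
)

messages = [{"role": "user", "content": "Hello"}]
prompt = Template(CHAT_TEMPLATE).render(messages=messages, add_generation_prompt=True)
print(prompt)
# -> <|system|>You are a helpful assistant.<|endofsystem|><|userprompt|>Hello<|endofuserprompt|><|response|>
```

Note the default system prompt is injected when no system message is supplied, and the trailing `<|response|>` is the generation prompt the model continues from.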

Thanks to @shigureui for posting it [here](https://huggingface.co/gghfez/dots.llm1.inst-GGUF/discussions/1).