michaelfeil committed
Commit 842a015 · 1 Parent(s): 154a495

Upload mosaicml/mpt-7b ctranslate fp16 weights

Files changed (1)
  1. README.md +6 -5
README.md CHANGED
@@ -21,14 +21,14 @@ Speedup inference while reducing memory by 2x-4x using int8 inference in C++ on
 
 quantized version of [mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b)
 ```bash
-pip install "hf-hub-ctranslate2>=2.0.8"
+pip install "hf-hub-ctranslate2>=2.0.8" "ctranslate2>=3.14.0"
 ```
-Converted on 2023-05-30 using
+Converted on 2023-05-31 using
 ```
-ct2-transformers-converter --model mosaicml/mpt-7b --output_dir /home/michael/tmp-ct2fast-mpt-7b --force --copy_files configuration_mpt.py meta_init_context.py tokenizer.json hf_prefixlm_converter.py README.md tokenizer_config.json blocks.py adapt_tokenizer.py attention.py norm.py generation_config.json flash_attn_triton.py special_tokens_map.json param_init_fns.py .gitattributes --quantization float16 --trust_remote_code
+ct2-transformers-converter --model mosaicml/mpt-7b --output_dir /home/michael/tmp-ct2fast-mpt-7b --force --copy_files tokenizer.json README.md tokenizer_config.json generation_config.json special_tokens_map.json .gitattributes --quantization float16 --trust_remote_code
 ```
 
-Checkpoint compatible with [ctranslate2>=3.13.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.0.6](https://github.com/michaelfeil/hf-hub-ctranslate2)
+Checkpoint compatible with [ctranslate2>=3.14.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.0.8](https://github.com/michaelfeil/hf-hub-ctranslate2)
 - `compute_type=int8_float16` for `device="cuda"`
 - `compute_type=int8` for `device="cpu"`
 
@@ -47,7 +47,8 @@ model = GeneratorCT2fromHfHub(
 )
 outputs = model.generate(
     text=["How do you call a fast Flan-ingo?", "User: How are you doing? Bot:"],
-    max_length=64
+    max_length=64,
+    include_prompt_in_result=False
 )
 print(outputs)
 ```
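Put together, the README's example after this commit reads as follows. This is a minimal sketch, assuming the quantized weights are published as `michaelfeil/ct2fast-mpt-7b` (the repo id is inferred from the commit title and the converter's `--output_dir` naming; it does not appear in the diff itself) and using the `GeneratorCT2fromHfHub` constructor arguments shown in the second hunk's header:

```python
# Minimal end-to-end sketch of the snippet being edited above.
# Assumption: the repo id "michaelfeil/ct2fast-mpt-7b" is inferred from the
# commit title and --output_dir naming; it is not shown in the diff.
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub

model = GeneratorCT2fromHfHub(
    model_name_or_path="michaelfeil/ct2fast-mpt-7b",  # assumed repo id
    device="cuda",                # per the README: use device="cpu" with compute_type="int8" off-GPU
    compute_type="int8_float16",  # int8 weights with float16 activations on CUDA
)
outputs = model.generate(
    text=["How do you call a fast Flan-ingo?", "User: How are you doing? Bot:"],
    max_length=64,
    # New in this commit: return only the completion, without echoing the prompt.
    include_prompt_in_result=False,
)
print(outputs)
```

`include_prompt_in_result=False` is the behavioral change in the second hunk: `generate()` then returns only the newly generated tokens instead of the prompt plus completion.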