---
library_name: peft
license: llama2
datasets:
- ehartford/dolphin
tags:
- llama-2
inference: false
pipeline_tag: text-generation
---

# llama-2-13b-dolphin 🦙🐬

This instruction model was built via parameter-efficient QLoRA finetuning of [llama-2-13b](https://huggingface.co/meta-llama/Llama-2-13b-hf) on the first 100k rows of [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) (an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)). Finetuning was executed on a single A6000 (48 GB) for roughly 18 hours on the [Lambda Labs](https://cloud.lambdalabs.com/instances) platform.

* Model license: Llama 2 Community License Agreement
* Basic usage: [notebook](assets/basic_inference_llama_2_13b_dolphin.ipynb)
* Finetuning script: [script](https://github.com/daniel-furman/sft-demos/blob/main/src/sft/one_gpu/llama-2/dolphin/sft-llama-2-13b-dolphin-peft.py)
* Loss curves: [plot](https://huggingface.co/dfurman/llama-2-13b-dolphin-peft#finetuning-description)
* Runtime stats: [table](https://huggingface.co/dfurman/llama-2-13b-dolphin-peft#runtime-tests)
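
For orientation, below is a minimal sketch of the kind of QLoRA setup described above, i.e., a 4-bit quantized base model with low-rank adapters. The hyperparameters shown are illustrative, not the values used for this model; see the linked finetuning script for the exact configuration.

```python
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization of the frozen base model (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

# Low-rank adapters on the attention projections (illustrative choices)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights train
```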

### Example prompts and responses

Example 1: 

**User**:
>You are a helpful assistant. Write me a numbered list of things to do in New York City.\n

**llama-2-13b-dolphin-peft**:

> 1. Visit the iconic Statue of Liberty and Ellis Island.
> 2. Take a stroll through Central Park and enjoy its many attractions.
> 3. Explore the world-renowned museums, such as the Metropolitan Museum of Art and the Museum of Modern Art.
> 4. Experience the vibrant energy of Times Square and take in the bright lights and billboards.
> 5. Visit the 9/11 Memorial and Museum to pay tribute to those who lost their lives in the attacks.
> 6. Enjoy a Broadway show or a concert at one of the many theaters and venues in the city.
> 7. Take a ride on the Staten Island Ferry for a free view of the Statue of Liberty and the New York City skyline.
> 8. Shop at the famous Fifth Avenue stores and explore the high-end boutiques.
> 9. Indulge in a variety of cuisines at one of the many restaurants in the city.
> 10. Visit the Empire State Building and enjoy the panoramic views of the city from the observation deck.

<br>

Example 2: 

**User**:
>You are a helpful assistant. Write a short email inviting my friends to a dinner party on Friday. Respond succinctly.\n

**llama-2-13b-dolphin-peft**:

> Subject: Friday Dinner Party Invitation
> 
> Dear Friends,
> 
> I hope this email finds you well. I'm excited to invite you all to a dinner party on Friday, March 10th, at 7:00 PM. The address is 123 Main Street, Anytown, USA.
> 
> Please RSVP by Wednesday, March 8th, so I can plan accordingly. I look forward to seeing you all and sharing a delicious meal together!
> 
> Best,
> Your Friendly Assistant
> 
> P.S. If you have any dietary restrictions or allergies, please let me know in your RSVP. Thank you!

<br>

## Model Description

The architecture is a modification of a standard decoder-only transformer.

The llama-2-13b model has been modified from a standard transformer in the following ways:
* It uses the [SwiGLU activation function](https://arxiv.org/abs/2002.05202) (see the sketch below)
* It uses [rotary positional embeddings](https://arxiv.org/abs/2104.09864) (RoPE)
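
For intuition, here is a minimal PyTorch sketch of a SwiGLU feed-forward block of the kind the paper describes. Dimensions and names are illustrative; the authoritative implementation is `LlamaMLP` in the `transformers` library.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUFeedForward(nn.Module):
    """SwiGLU MLP: down(silu(gate(x)) * up(x))."""

    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.gate_proj = nn.Linear(dim, hidden_dim, bias=False)
        self.up_proj = nn.Linear(dim, hidden_dim, bias=False)
        self.down_proj = nn.Linear(hidden_dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SiLU-gated linear unit, then project back to the model dimension
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))
```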

| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 13B |
| tokens | 2.0T |
| vocab size | 32000 |
| sequence length | 4096 |

## Finetuning Description

This model was trained on a single A6000 (48 GB) for about 18 hours using the [Lambda Labs](https://cloud.lambdalabs.com/instances) platform.

![loss curves](https://raw.githubusercontent.com/daniel-furman/sft-demos/main/assets/jul_24_23_1_13_00_log_loss_curves_llama-2-13b-dolphin.png)

The above loss curve was generated from the run's private wandb.ai log.  

## Pretraining Data

For more details on the pretraining process, see [Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf).

The data was tokenized using the [Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) tokenizer.
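
As a quick illustration (assuming access to the gated base repo), you can load that tokenizer and sanity-check it against the table above:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "meta-llama/Llama-2-13b-hf",
    use_auth_token=True,  # gated repo; requires an authorized HF account
)
print(tokenizer.vocab_size)  # 32000, matching the hyperparameter table
print(tokenizer.tokenize("New York City"))
```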

## Limitations and Biases

_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_

This model can produce factually incorrect output, and should not be relied on to produce factually accurate information.
This model was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.

## How to Use

Basic usage: [notebook](assets/basic_inference_llama_2_13b_dolphin.ipynb)

Install and import the package dependencies:  

```python
!pip install -q -U huggingface_hub peft transformers torch accelerate
```

```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
```

Sign into a HF account with access to Llama-2:

```python
from huggingface_hub import notebook_login
notebook_login()
```

Basic model loading:

```python
peft_model_id = "dfurman/llama-2-13b-dolphin-peft"
config = PeftConfig.from_pretrained(peft_model_id)

tokenizer = AutoTokenizer.from_pretrained(
    config.base_model_name_or_path,
    use_auth_token=True
)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    use_auth_token=True,
)

# Load the Lora model
model = PeftModel.from_pretrained(model, peft_model_id)
```
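
Optionally, the adapter can be folded into the base weights for somewhat faster inference. This is a standard PEFT pattern (the result behaves like a plain `transformers` model):

```python
# Merge the LoRA weights into the base model and drop the PEFT wrapper
model = model.merge_and_unload()
```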

Once loaded, the model and tokenizer can be used with the following code:

```python
def llama_generate(
    model: AutoModelForCausalLM,
    tokenizer: AutoTokenizer,
    prompt: str,
    max_new_tokens: int = 128,
    temperature: float = 0.92,
) -> str:
    """
    Initialize the pipeline
    Uses Hugging Face GenerationConfig defaults
        https://huggingface.co/docs/transformers/v4.29.1/en/main_classes/text_generation#transformers.GenerationConfig
    Args:
        model (transformers.AutoModelForCausalLM): Falcon model for text generation
        tokenizer (transformers.AutoTokenizer): Tokenizer for model
        prompt (str): Prompt for text generation
        max_new_tokens (int, optional): Max new tokens after the prompt to generate. Defaults to 128.
        temperature (float, optional): The value used to modulate the next token probabilities.
            Defaults to 1.0
    """
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    inputs = tokenizer(
        [prompt],
        return_tensors="pt",
        return_token_type_ids=False,
    ).to(
        device
    )  # tokenize inputs, load on device

    # when running Torch modules in lower precision, it is best practice to use the torch.autocast context manager.
    with torch.autocast("cuda", dtype=torch.bfloat16):
        response = model.generate(
            **inputs,
            do_sample=True,  # sample so that `temperature` takes effect
            max_new_tokens=max_new_tokens,
            temperature=temperature,
            return_dict_in_generate=True,
            eos_token_id=tokenizer.eos_token_id,
            pad_token_id=tokenizer.pad_token_id,
        )

    decoded_output = tokenizer.decode(
        response["sequences"][0],
        skip_special_tokens=True,
    )  # grab output in natural language

    return decoded_output[len(prompt) :]  # remove prompt from output
```

We can now generate text! For example:

```python
prompt = "### Human: Write me a numbered list of things to do in New York City.### Assistant: "

response = llama_generate(
    model,
    tokenizer,
    prompt,
    max_new_tokens=250,
    temperature=0.92,
)

print(response)
```

### Runtime tests


| runtime / 50 tokens (sec) | GPU             | attn | torch dtype | VRAM (GB) |
|:-----------------------------:|:----------------------:|:---------------------:|:-------------:|:-----------------------:|
| 2.93                        | 1x A100 (40 GB SXM)  | torch               | bfloat16    | 25                    |
| 3.24                        | 1x A6000 (48 GB)  | torch               | bfloat16    | 25                    |

The above runtime stats were generated from this [notebook](https://github.com/daniel-furman/sft-demos/blob/main/src/sft/one_gpu/llama-2/dolphin/postprocessing-llama-2-13b-dolphin-peft.ipynb). 
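
For reference, timings of this kind can be reproduced with a simple wall-clock measurement around `generate`. This is a minimal sketch using the model and tokenizer loaded above, not the notebook's exact procedure:

```python
import time

inputs = tokenizer(
    ["### Human: Tell me about New York.### Assistant: "],
    return_tensors="pt",
    return_token_type_ids=False,
).to("cuda")

torch.cuda.synchronize()
start = time.perf_counter()
with torch.autocast("cuda", dtype=torch.bfloat16):
    # pin the output length to exactly 50 tokens for a comparable measurement
    model.generate(**inputs, min_new_tokens=50, max_new_tokens=50)
torch.cuda.synchronize()
print(f"{time.perf_counter() - start:.2f} sec / 50 tokens")
```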

## Acknowledgements

This model was finetuned by Daniel Furman on July 22, 2023, and is intended primarily for research purposes.

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

## Meta citation for llama-2 blog

```
@online{Meta2023Introducing,
    author    = {Meta AI},
    title     = {Meta and Microsoft Introduce the Next Generation of Llama},
    year      = {2023},
    url       = {https://about.fb.com/news/2023/07/llama-2/},
    note      = {Accessed: 2023-07-24},
    urldate   = {2023-07-24}
}
```

---

### Framework versions


- PEFT 0.5.0.dev0