]
```
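For reference, the `tokenizer` used below can be loaded like this (the checkpoint name is a placeholder for any model whose tokenizer ships a ChatML-style chat template):

```python
from transformers import AutoTokenizer

# Placeholder checkpoint: substitute any model whose tokenizer
# ships a ChatML-style chat template.
tokenizer = AutoTokenizer.from_pretrained("your-chatml-model")
```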
Here's what this will look like without a generation prompt, using the ChatML template we saw in the Zephyr example:
```python
tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=False)
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
"""
```
And here's what it looks like with a generation prompt:
```python
tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
```
Note that this time, we've added the tokens that indicate the start of a bot response. This ensures that when the model generates text, it will write a bot response instead of doing something unexpected, like continuing the user's message.
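To see the effect in practice, here is a minimal generation sketch; the checkpoint name is a placeholder for any causal LM whose tokenizer uses a ChatML chat template, and `max_new_tokens=64` is an arbitrary choice:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint: any causal LM whose tokenizer ships a ChatML chat template.
checkpoint = "your-chatml-model"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# add_generation_prompt=True makes the formatted prompt end with
# "<|im_start|>assistant\n", so generation continues as the assistant's reply.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=64)

# Decode only the newly generated tokens, i.e. the assistant's response.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```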