Usage tips and examples

The Llama2 family of models, on which Code Llama is based, was trained using bfloat16, but the original inference code uses float16.
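
As a minimal sketch of how this precision detail plays out in practice, the snippet below loads a Code Llama checkpoint in float16 to match the original inference setup. The checkpoint name `codellama/CodeLlama-7b-hf` and the prompt are illustrative assumptions; any Code Llama variant loads the same way.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical choice of checkpoint; other Code Llama variants work identically.
checkpoint = "codellama/CodeLlama-7b-hf"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Load the weights in float16 to match the original inference precision;
# pass torch.bfloat16 instead to mirror the training precision.
model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    torch_dtype=torch.float16,
    device_map="auto",
)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```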