Usage tips and examples
The Llama2 family of models, on which Code Llama is based, was trained using bfloat16, but the original inference uses float16.
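The two half-precision formats trade range and precision differently, which is why the dtype chosen at load time matters: float16 keeps more mantissa bits but overflows above roughly 65504, while bfloat16 covers nearly the full float32 range at lower precision. A minimal sketch (using only `torch`, no model download) comparing the two formats:

```python
import torch

# float16: 10 mantissa bits, narrow dynamic range (max ~65504).
fp16 = torch.finfo(torch.float16)
# bfloat16: 7 mantissa bits, but the same 8-bit exponent as float32 (max ~3.4e38).
bf16 = torch.finfo(torch.bfloat16)

print(f"float16  max={fp16.max:.1f}, eps={fp16.eps}")
print(f"bfloat16 max={bf16.max:.3e}, eps={bf16.eps}")
```

To match the original inference setup, one would typically pass `torch_dtype=torch.float16` to `from_pretrained` when loading the checkpoint; this is a common pattern, not a claim about what any specific script does.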