We follow the “Textbooks Are All You Need” approach, focusing this time on common
sense reasoning in natural language, and create a new 1.3 billion parameter model named phi-1.5,
with performance on natural language tasks comparable to models 5x larger, and surpassing most
non-frontier LLMs on more complex reasoning tasks such as grade-school mathematics and basic
coding.