This repository contains the GPT-Soft-1.2b model (the softmax-attention baseline, `gmongaras/Softmax_Attention_GPT_1.2B`) described in *Cottention: Linear Transformers With Cosine Attention*.
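
As a quick-start, the sketch below shows one way to load the checkpoint and sample from it with the Hugging Face `transformers` library. This is a minimal example under the assumption that the weights are stored in a `transformers`-compatible causal-LM format; if the repository instead ships a custom architecture or raw training checkpoints, the code from the Cottention paper's repository may be required, and the `trust_remote_code` flag or dtype choice shown here may not apply.

```python
# Minimal sketch: load the checkpoint and generate a short continuation.
# Assumes the repo is compatible with AutoModelForCausalLM; adjust as needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gmongaras/Softmax_Attention_GPT_1.2B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,      # assumption: half precision fits the 1.2B weights
    trust_remote_code=True,         # assumption: allows any custom model class in the repo
)

prompt = "Linear attention replaces the softmax with"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```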
