---
license: cc-by-nc-4.0
datasets:
- tatsu-lab/alpaca
language:
- en
---

# Eluwa: A Conversational LoRA for Facebook's OPT 2.7b Architecture

Eluwa is a fine-tuned Low-Rank Adapter (LoRA) for Facebook's OPT 2.7b, trained on the Stanford Alpaca dataset. It is designed to be more conversational and creative in question-answering mode than the default OPT model. The idea was that OPT was too curt (and frankly, a bit of an asshole) for a model of its size, and that we could fine-tune it the way Alpaca fine-tuned LLaMA.
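
If you want to try it, here's a minimal sketch of loading the adapter on top of the base model with Hugging Face `transformers` and `peft`. The adapter repo id below is a placeholder, not the actual path; swap in wherever you keep the weights.

```python
# Minimal sketch: load OPT 2.7b and apply the Eluwa LoRA adapter with peft.
# "yudhanjaya/eluwa-2.7b" is a placeholder id; point it at the real adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-2.7b", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-2.7b")
model = PeftModel.from_pretrained(base, "yudhanjaya/eluwa-2.7b")  # placeholder

# Alpaca-style instruction prompt, since the adapter was trained on that dataset.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a LoRA is in two sentences.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Since the adapter was trained on Alpaca-formatted data, the Alpaca instruction template should get the best results out of it.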

Here's how the base model and two Eluwa checkpoints score across question categories (higher is better):

| Category | OPT 2.7b base | Eluwa 2.7b (1000 iter) | Eluwa 2.7b (2 epochs) |
|---|---:|---:|---:|
| Generic | 22 | 44 | 57 |
| Knowledge | 35 | 60 | 72 |
| Roleplay | 29 | 38 | 58 |
| Common sense | 20 | 48 | 50 |
| Fermi | 4 | 28 | 23 |
| Counterfactual | 5 | 24 | 23 |
| Coding | 2 | 7 | 7 |
| Math | 0 | 3 | 3 |
| Writing | 8 | 19 | 19 |
| **Total** | **125** | **271** | **312** |

Response times are fast: on my GTX 1080 Ti + Ryzen 3600, it generates between 1.14 and 3.77 tokens/s.