Yudhanjaya committed
Commit 5d9250f · 1 Parent(s): 6e1bfb0

Update README.md

Files changed (1)
1. README.md +21 -2
README.md CHANGED
@@ -10,6 +10,25 @@ language:

![logo](https://huggingface.co/BackyardLabs/Eluwa/resolve/main/ELUWA-LOGO.jpg "baaaaaaaaaaaa")

- Eluwa is a fine-tuned LoRA model based on Facebook's OPT 2.7b architecture and trained on the Stanford Alpaca dataset. Eluwa is designed to provide a more conversational and creative experience in question-answering mode compared to the default OPT model. The idea was that OPT was too curt (and frankly, a bit of an asshole) for a model of its size, and that we could finetune it like Alpaca did to Llama.

- It worked! Based on very limited testing, it's about halfway to GPT 3.5. Response times are fast: on my GTX 1080ti + Ryzen 3600, it generates between 1.14 tokens/s and 3.77 tokens/s.
+ Eluwa is a fine-tuned Low-Rank Adaptation (LoRA) model for Facebook's OPT 2.7b, trained on the Stanford Alpaca dataset. Eluwa is designed to be more conversational and creative in question-answering mode than the default OPT model. The idea was that OPT was too curt (and frankly, a bit of an asshole) for a model of its size, and that we could fine-tune it the way Alpaca fine-tuned LLaMA.

+ Scores from our (very limited) testing, broken down by question category:
+
+ | Category | OPT 2.7b base | Eluwa 2.7b (1000 iterations) | Eluwa 2.7b (2 epochs) |
+ |---|---|---|---|
+ | Generic | 22 | 44 | 57 |
+ | Knowledge | 35 | 60 | 72 |
+ | Roleplay | 29 | 38 | 58 |
+ | Common sense | 20 | 48 | 50 |
+ | Fermi | 4 | 28 | 23 |
+ | Counterfactual | 5 | 24 | 23 |
+ | Coding | 2 | 7 | 7 |
+ | Math | 0 | 3 | 3 |
+ | Writing | 8 | 19 | 19 |
+ | Total | 125 | 271 | 312 |
+
+ Response times are fast: on my GTX 1080ti + Ryzen 3600, it generates between 1.14 and 3.77 tokens/s.
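
Since the README describes Eluwa as a LoRA adapter for facebook/opt-2.7b, loading it for inference would look roughly like the sketch below. This is a minimal, hypothetical example: it assumes the adapter weights at BackyardLabs/Eluwa (the repo named in the logo URL) load directly with the Hugging Face `peft` library, and that prompts follow the Alpaca instruction template used for training; neither detail is confirmed by the README itself.

```python
# Minimal sketch (not from the README) of loading the Eluwa LoRA adapter onto
# OPT-2.7b for inference. Assumptions: the adapter at BackyardLabs/Eluwa is in
# peft format at the repo root, and the Alpaca instruction template applies.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "facebook/opt-2.7b"
ADAPTER = "BackyardLabs/Eluwa"  # repo name taken from the logo URL above

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER)  # attach the LoRA weights
model.eval()

# Alpaca-style prompt (assumed from the training data, not stated in the README).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a LoRA fine-tune is in one paragraph.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The float16 + `device_map="auto"` combination is just one reasonable default for a single consumer GPU like the 1080ti mentioned above; adjust the dtype and device placement to your hardware.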