Update README.md
README.md CHANGED
@@ -32,7 +32,7 @@ base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
 - **Model Developers:** Neural Magic
 
 Quantized version of [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).
-It achieves scores within
+It achieves scores within 3.1% of the scores of the unquantized model for MMLU, ARC-Challenge, GSM-8k, Hellaswag, Winogrande, and TruthfulQA.
 
 ### Model Optimizations
 
@@ -200,9 +200,9 @@ This version of the lm-evaluation-harness includes versions of MMLU, ARC-Challen
 </td>
 <td>78.06
 </td>
-<td>
+<td>77.98
 </td>
-<td>
+<td>99.9%
 </td>
 </tr>
 <tr>
@@ -210,9 +210,9 @@ This version of the lm-evaluation-harness includes versions of MMLU, ARC-Challen
 </td>
 <td>54.48
 </td>
-<td>
+<td>52.81
 </td>
-<td>
+<td>96.9%
 </td>
 </tr>
 <tr>
@@ -220,9 +220,9 @@ This version of the lm-evaluation-harness includes versions of MMLU, ARC-Challen
 </td>
 <td><strong>74.25</strong>
 </td>
-<td><strong>
+<td><strong>73.24</strong>
 </td>
-<td><strong>
+<td><strong>98.6%</strong>
 </td>
 </tr>
 </table>
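
The third column added in each row appears to be per-benchmark recovery, i.e. the quantized score divided by the unquantized score. A minimal sketch, assuming that formula and using only the values visible in this diff (the benchmark names for these rows are not shown here), reproduces the new cells and the "within 3.1%" sentence:

```python
# Minimal sketch (assumption: the new column is recovery = quantized / unquantized,
# rounded to one decimal place). Values are taken from the table cells in this diff.
rows = [
    # (unquantized score, quantized score, recovery cell added in the diff)
    (78.06, 77.98, "99.9%"),
    (54.48, 52.81, "96.9%"),
    (74.25, 73.24, "98.6%"),  # the <strong> row
]

for unquantized, quantized, reported in rows:
    recovery = 100 * quantized / unquantized
    print(f"{recovery:.1f}% (diff reports {reported})")

# Prints 99.9%, 96.9%, 98.6%. The worst case, 96.9%, is consistent with the
# "within 3.1% of the scores of the unquantized model" sentence added at line 35.
```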