Can't wait to see the benchmark

#1 by PussyHut - opened

@xbruce22 Please add this to the MMLU Pro benchmark

Yes, in queue

Scored 65.00. Added logs in discussion.
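
The logs themselves aren't reproduced here, but for anyone curious, a minimal sketch of scoring a GGUF this way against a local llama.cpp server might look like the following. Everything here is an assumption for illustration: the OpenAI-compatible endpoint on localhost:8080, the model name, and the `questions` list are placeholders; the real MMLU Pro harness loads the dataset and builds its 10-option prompts itself.

```python
import requests

# llama.cpp server's OpenAI-compatible endpoint, default port (assumption)
ENDPOINT = "http://localhost:8080/v1/chat/completions"

# Hypothetical stand-ins for MMLU Pro items
questions = [
    {"prompt": "Which data structure gives O(1) average-case lookup?\n"
               "A) linked list  B) hash table  C) binary heap  D) stack",
     "answer": "B"},
]

correct = 0
for q in questions:
    resp = requests.post(ENDPOINT, json={
        "model": "Qwen3-Coder-30B-A3B-Instruct-1M-Q2_K",  # placeholder name
        "messages": [
            {"role": "system",
             "content": "Answer with a single option letter only."},
            {"role": "user", "content": q["prompt"]},
        ],
        "temperature": 0,  # greedy decoding so reruns are comparable
    }, timeout=300)
    answer = resp.json()["choices"][0]["message"]["content"].strip()
    correct += answer[:1].upper() == q["answer"]

print(f"AverageAccuracy: {correct / len(questions):.2f}")
```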

better than unsloth's Q4? That's shocking.

Yes, I don't know why.

Let me check Q2 of unsloth.

& Intel Q4_K_M 😁

Scored 0.6 Q2_K

+-------------------------------------------+-----------+-----------------+------------------+-------+---------+---------+
| Model                                     | Dataset   | Metric          | Subset           |   Num |   Score | Cat.0   |
+===========================================+===========+=================+==================+=======+=========+=========+
| Qwen3-Coder-30B-A3B-Instruct-1M-GGUF-Q2_K | mmlu_pro  | AverageAccuracy | computer science |    10 |     0.6 | default |
+-------------------------------------------+-----------+-----------------+------------------+-------+---------+---------+
| Qwen3-Coder-30B-A3B-Instruct-1M-GGUF-Q2_K | mmlu_pro  | AverageAccuracy | math             |    10 |     0.6 | default |
+-------------------------------------------+-----------+-----------------+------------------+-------+---------+---------+
| Qwen3-Coder-30B-A3B-Instruct-1M-GGUF-Q2_K | mmlu_pro  | AverageAccuracy | chemistry        |    10 |     0.8 | default |
+-------------------------------------------+-----------+-----------------+------------------+-------+---------+---------+
| Qwen3-Coder-30B-A3B-Instruct-1M-GGUF-Q2_K | mmlu_pro  | AverageAccuracy | engineering      |    10 |     0.5 | default |
+-------------------------------------------+-----------+-----------------+------------------+-------+---------+---------+
| Qwen3-Coder-30B-A3B-Instruct-1M-GGUF-Q2_K | mmlu_pro  | AverageAccuracy | law              |    10 |     0.1 | default |
+-------------------------------------------+-----------+-----------------+------------------+-------+---------+---------+
| Qwen3-Coder-30B-A3B-Instruct-1M-GGUF-Q2_K | mmlu_pro  | AverageAccuracy | biology          |    10 |     0.9 | default |
+-------------------------------------------+-----------+-----------------+------------------+-------+---------+---------+
| Qwen3-Coder-30B-A3B-Instruct-1M-GGUF-Q2_K | mmlu_pro  | AverageAccuracy | health           |    10 |     0.8 | default |
+-------------------------------------------+-----------+-----------------+------------------+-------+---------+---------+
| Qwen3-Coder-30B-A3B-Instruct-1M-GGUF-Q2_K | mmlu_pro  | AverageAccuracy | physics          |    10 |     0.6 | default |
+-------------------------------------------+-----------+-----------------+------------------+-------+---------+---------+
| Qwen3-Coder-30B-A3B-Instruct-1M-GGUF-Q2_K | mmlu_pro  | AverageAccuracy | business         |    10 |     0.7 | default |
+-------------------------------------------+-----------+-----------------+------------------+-------+---------+---------+
| Qwen3-Coder-30B-A3B-Instruct-1M-GGUF-Q2_K | mmlu_pro  | AverageAccuracy | philosophy       |    10 |     0.5 | default |
+-------------------------------------------+-----------+-----------------+------------------+-------+---------+---------+
| Qwen3-Coder-30B-A3B-Instruct-1M-GGUF-Q2_K | mmlu_pro  | AverageAccuracy | economics        |    10 |     0.7 | default |
+-------------------------------------------+-----------+-----------------+------------------+-------+---------+---------+
| Qwen3-Coder-30B-A3B-Instruct-1M-GGUF-Q2_K | mmlu_pro  | AverageAccuracy | other            |    10 |     0.7 | default |
+-------------------------------------------+-----------+-----------------+------------------+-------+---------+---------+
| Qwen3-Coder-30B-A3B-Instruct-1M-GGUF-Q2_K | mmlu_pro  | AverageAccuracy | psychology       |    10 |     0.4 | default |
+-------------------------------------------+-----------+-----------------+------------------+-------+---------+---------+
| Qwen3-Coder-30B-A3B-Instruct-1M-GGUF-Q2_K | mmlu_pro  | AverageAccuracy | history          |    10 |     0.5 | default |
+-------------------------------------------+-----------+-----------------+------------------+-------+---------+---------+
| Qwen3-Coder-30B-A3B-Instruct-1M-GGUF-Q2_K | mmlu_pro  | AverageAccuracy | OVERALL          |   140 |     0.6 | -       |
+-------------------------------------------+-----------+-----------------+------------------+-------+---------+---------+

damn, it seems the capability drop from Q4 to Q2 for LLMs is smaller than I had expected

0.6 means 60.00 BTW.

Intel org

Thanks @xbruce22 @Push. Based on the logs and the discussion above, does this mean the Intel Q2 model (0.65) achieves higher accuracy than the unsloth Q2 model (0.6)?

yes, for this testbench at least

Yes. So far I have tested two models from Intel, Qwen 30B Coder and Instruct. Even Instruct works better with Intel's quantized version; check it out here:

Score   File          Size      Source
71.2    Q2_K_S.gguf   10.7 GB   Intel
70.7    Q2_K.gguf     11.3 GB   unsloth
76.0    Q8_0.gguf     32.5 GB   unsloth

Let me check Q2 of unsloth.

& Intel Q4_K_M 😁

Scored 64.29

thanks for your time

Intel Q2_K_S even beat their own Q4_K_M 😧 am I dreaming?

Considering that each subset was run with only 10 questions, can we really say the deviation isn't significant?
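
For a rough sense of the error bars: here's a quick Wilson-interval sketch over the overall n = 140, using the standard formula and the stdlib only. The two accuracy values are the ones quoted above; nothing else here comes from the actual logs.

```python
# Rough significance check for the 0.65 (Intel Q2) vs 0.60 (unsloth Q2)
# overall scores, each from n = 140 questions. Wilson score interval at
# the 95% level (z = 1.96).
from math import sqrt

def wilson_ci(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    denom = 1 + z * z / n
    center = (p_hat + z * z / (2 * n)) / denom
    half = z * sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

for name, acc in [("Intel Q2", 0.65), ("unsloth Q2", 0.60)]:
    lo, hi = wilson_ci(acc, 140)
    print(f"{name}: {acc:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
# Prints roughly [0.57, 0.72] and [0.52, 0.68]: the intervals overlap
# heavily, so a 5-point gap on 140 questions is within single-run noise.
```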

Check my other discussion here

Using Ollama I got 64.29, but with llama.cpp the score improved to 68.57 for 4-bit. So the difference comes down to the Ollama (NVIDIA GPU) vs. llama.cpp (CPU) frameworks.
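
One way to confirm the gap is the backend rather than the quant is to send the same prompt, greedily decoded, to both servers and diff the outputs. A hedged sketch, assuming the usual default ports (Ollama on 11434, llama.cpp server on 8080, both exposing OpenAI-compatible endpoints) and placeholder model names for the same GGUF:

```python
import requests

# Both backends expose an OpenAI-compatible chat endpoint; ports are the
# common defaults (assumption). Model names are placeholders.
BACKENDS = {
    "ollama":    ("http://localhost:11434/v1/chat/completions", "qwen3-coder-q4"),
    "llama.cpp": ("http://localhost:8080/v1/chat/completions",  "qwen3-coder-q4"),
}
PROMPT = "What is 17 * 23? Answer with the number only."

for name, (url, model) in BACKENDS.items():
    resp = requests.post(url, json={
        "model": model,
        "messages": [{"role": "user", "content": PROMPT}],
        "temperature": 0,  # greedy, so any divergence comes from the backend
    }, timeout=120)
    print(name, "->", resp.json()["choices"][0]["message"]["content"].strip())
```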

Intel org

Check the size: some ops are kept in higher precision in the Q2 model, because they are more sensitive for accuracy.
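
The file sizes quoted earlier are consistent with that. A back-of-the-envelope bits-per-weight estimate, assuming ~30.5B total parameters for the 30B-A3B model (GGUF files also carry metadata, so treat these as rough):

```python
# Rough bits-per-weight from the GGUF file sizes in the thread.
# The ~30.5B total parameter count is an assumption.
PARAMS = 30.5e9

files = {
    "Intel Q2_K_S":  10.7e9,  # bytes
    "unsloth Q2_K":  11.3e9,
    "unsloth Q8_0":  32.5e9,
}

for name, size in files.items():
    bpw = size * 8 / PARAMS
    print(f"{name}: ~{bpw:.2f} bits/weight")
# Both Q2 files land near ~2.8-3.0 bits/weight, well above a nominal
# 2 bits, consistent with some tensors being kept at higher precision.
```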
