# Swedish skolprov documentation

This repository contains data from six Swedish knowledge tests. The data consists of multiple-choice questions and answers, stored in CSV files.

The following tests are included:

- högskoleprovet (Swedish Scholastic Aptitude Test)
- kunskapsprov läkare (medical doctor test)
- kunskapsprov tandläkare (dentist test)
- kunskapsprov audionom (audiologist test)
- kunskapsprov apotekare (pharmacist test)
- matematik och fysikprovet (mathematics and physics test)
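Since the questions are stored as CSV, a row can be read with Python's standard library alone. This is a minimal sketch; the column names below are hypothetical, as the actual schema is not documented in this section:

```python
import csv
import io

# Toy CSV row in a hypothetical multiple-choice schema;
# the dataset's real column names may differ.
sample = io.StringIO(
    "question,option_a,option_b,option_c,option_d,answer\n"
    "Vad är 7 * 6?,36,40,42,48,C\n"
)

rows = list(csv.DictReader(sample))
print(rows[0]["question"])  # Vad är 7 * 6?
print(rows[0]["answer"])    # C
```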
The full dataset can be loaded with the `datasets` library:

```python
from datasets import load_dataset

ds = load_dataset("Ekgren/swedish_skolprov", "all")
```
## Small leaderboard

The table below shows results for a small selection of models. The tests evaluate both the ability to understand and answer questions in Swedish and knowledge of the specific subdomains. Scores are between 0 and 1, except hp, which is reported on högskoleprovet's 0.0–2.0 scale.

| rank | model_name | **total** | hp | matematik_och_fysikprovet | läkare | tandläkare | audionom | apotekare | open_weights |
|------|------------|-----------|----|---------------------------|--------|------------|----------|-----------|--------------|
| 1. | openai/o3-mini | **0.85** | 1.9 | 0.77 | 0.89 | 0.80 | 0.82 | 0.96 | ❌ |
| 2. | anthropic/claude-3.7-sonnet | **0.84** | 2.0 | 0.62 | 0.88 | 0.87 | 0.85 | 1.00 | ❌ |
| 3. | openai/gpt-4o | **0.82** | 2.0 | 0.51 | 0.89 | 0.87 | 0.89 | 0.98 | ❌ |
| 4. | google/gemini-2.0-flash-001 | **0.82** | 2.0 | 0.63 | 0.87 | 0.81 | 0.81 | 0.96 | ❌ |
| 5. | meta-llama/llama-3.1-405b-instruct | **0.81** | 2.0 | 0.55 | 0.91 | 0.76 | 0.82 | 0.96 | ✅ |
| 6. | deepseek/deepseek-V3 | **0.80** | 2.0 | 0.60 | 0.83 | 0.71 | 0.89 | 0.94 | ✅ |
| 7. | x-ai/grok-2-1212 | **0.80** | 2.0 | 0.50 | 0.88 | 0.80 | 0.82 | 0.98 | ❌ |
| 8. | meta-llama/llama-3.3-70b-instruct | **0.78** | 1.9 | 0.53 | 0.82 | 0.77 | 0.82 | 0.94 | ✅ |
| 9. | qwen/qwen-max | **0.77** | 1.7 | 0.60 | 0.84 | 0.66 | 0.78 | 0.92 | ❌ |
| 10. | cohere/command-a | **0.72** | 1.8 | 0.48 | 0.78 | 0.70 | 0.72 | 0.89 | ✅ |
| 11. | mistralai/mistral-small-3.1-24b-instruct | **0.71** | 1.6 | 0.50 | 0.78 | 0.61 | 0.76 | 0.91 | ✅ |
| 12. | google/gemma-3-27b-it | **0.67** | 1.8 | 0.50 | 0.64 | 0.69 | 0.72 | 0.70 | ✅ |
| 13. | allenai/olmo-2-0325-32b-instruct | **0.57** | 1.3 | 0.36 | 0.63 | 0.50 | 0.53 | 0.77 | ✅ |
| 14. | meta-llama/llama-3.2-3b-instruct | **0.36** | 0.7 | 0.19 | 0.46 | 0.41 | 0.31 | 0.38 | ✅ |
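The per-test scores appear to be fractions of correctly answered questions, though the exact evaluation pipeline is not described in this README. A minimal sketch of computing such an accuracy from multiple-choice predictions (all names here are hypothetical):

```python
def accuracy(predictions, answers):
    """Fraction of multiple-choice predictions that match the gold answers."""
    if len(predictions) != len(answers):
        raise ValueError("predictions and answers must have the same length")
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

# Toy example: three of four answers match.
print(accuracy(["A", "C", "B", "D"], ["A", "B", "B", "D"]))  # 0.75
```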

## The tests