nielsr (HF Staff) committed
Commit 8b0ed71 · verified · 1 Parent(s): 8d11a3d

Update dataset card


This PR updates the dataset card to include the paper link and changes the task category to `question-answering`.

Files changed (1):
  1. README.md (+8 -8)
README.md CHANGED

@@ -1,21 +1,21 @@
 ---
-license: mit
-task_categories:
-- text-generation
 language:
 - en
 - zh
+license: mit
+size_categories:
+- n<1K
+task_categories:
+- question-answering
+pretty_name: S1-Bench
 tags:
 - LRM
 - System1
 - fast-thinking
-pretty_name: S1-Bench
-size_categories:
-- n<1K
 ---
 
-The benchmark constructed in paper ***S1-Bench: A Simple Benchmark for Evaluating System 1 Thinking Capability of Large Reasoning Models***.
+The benchmark constructed in paper [S1-Bench: A Simple Benchmark for Evaluating System 1 Thinking Capability of Large Reasoning Models](https://huggingface.co/papers/2504.10368).
 
 S1-Bench is a novel benchmark designed to evaluate Large Reasoning Models' performance on simple tasks that favor intuitive *system 1* thinking rather than deliberative *system 2* reasoning.
 
-S1-Bench comprises 422 question-answer pairs across four major categories and 28 subcategories, balanced with 220 English and 202 Chinese questions.
+S1-Bench comprises 422 question-answer pairs across four major categories and 28 subcategories, balanced with 220 English and 202 Chinese questions.
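Since the card describes a bilingual set of question-answer pairs, a minimal sketch of working with such data is shown below. The record schema (`question`, `answer`, `language` fields) is an assumption for illustration only and may not match the actual dataset's column names:

```python
# Sketch: grouping S1-Bench-style QA pairs by language tag.
# The field names used here are assumptions, not taken from the dataset card.
sample = [
    {"question": "What is 2 + 2?", "answer": "4", "language": "en"},
    {"question": "一年有几个季节？", "answer": "四个", "language": "zh"},
]

def split_by_language(rows):
    """Group question-answer pairs by their language tag."""
    groups = {}
    for row in rows:
        groups.setdefault(row["language"], []).append(row)
    return groups

groups = split_by_language(sample)
print(sorted(groups))     # ['en', 'zh']
print(len(groups["en"]))  # 1
```

On the real dataset, the same grouping would recover the 220 English / 202 Chinese split the card reports.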