Duguce committed
Commit 4594a15 · 1 Parent(s): 432130c

docs: update README.md

Files changed (1):
README.md +10 -3
README.md CHANGED
@@ -11,9 +11,16 @@ configs:
 TurtleBench is a novel evaluation benchmark designed to assess the reasoning capabilities of large language models (LLMs) using yes/no puzzles (commonly known as "Turtle Soup puzzles"). This dataset is constructed based on user guesses collected from our online Turtle Soup Puzzle platform, providing a dynamic and interactive means of evaluation. Unlike traditional static evaluation benchmarks, TurtleBench focuses on testing models in interactive settings to better capture their logical reasoning performance. The dataset contains real user guesses and annotated responses, enabling a fair and challenging evaluation for modern LLMs.
 ## Dataset Contents
 The dataset is organized into two main folders: `english` and `chinese`, corresponding to the bilingual nature of the TurtleBench benchmark. Each language folder contains:
-- `cases.list`: A list of the Turtle Soup cases used in the dataset.
-- `stories.json`: JSON file containing the surface stories and their corresponding "bottom" stories, which provide the hidden context required to answer the puzzles.
-- `titles.txt` (in the chinese folder only): A list of titles for the stories.
+
+- **Final Dataset**:
+  - `zh_data-00000-of-00001.jsonl` (in the `chinese` folder): The complete, finalized dataset in JSONL format.
+  - `en_data-00000-of-00001.jsonl` (in the `english` folder): The complete, finalized dataset in JSONL format.
+
+- **Staging Data** (in the `staging` subfolder of each language):
+  - `cases.list`: A list of the Turtle Soup cases used in the dataset.
+  - `stories.json`: JSON file containing the surface stories and their corresponding "bottom" stories, which provide the hidden context required to answer the puzzles.
+  - `titles.txt` (in the `chinese/staging` folder only): A list of titles for the stories.
+
 
 ## Data Collection
 The dataset contains 1,532 entries derived from over 26,000 user guesses made during the Turtle Soup Puzzle game. Users were tasked with making logical guesses based solely on the surface stories provided, while the correct answers were derived from the bottom stories. All user guesses were annotated as either "Correct" or "Incorrect" based on the reasoning context provided by the bottom story.
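
For quick inspection, the finalized JSONL files listed in the updated README can be read line by line with the Python standard library. This is a minimal sketch, assuming the file paths follow the `english`/`chinese` folder layout described above; the per-record field names are not documented in this diff, so the snippet only counts records and prints the keys of the first entry rather than assuming a schema.

```python
import json
from pathlib import Path


def load_jsonl(path: Path) -> list[dict]:
    """Read one JSON object per non-empty line from a JSON Lines file."""
    records = []
    with path.open(encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return records


# Paths assume the folder layout described in the README above.
en = load_jsonl(Path("english/en_data-00000-of-00001.jsonl"))
zh = load_jsonl(Path("chinese/zh_data-00000-of-00001.jsonl"))

print(len(en), len(zh))       # the README reports 1,532 entries in total
print(sorted(en[0].keys()))   # inspect the record schema instead of assuming field names
```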