---
language:
  - en
tags:
  - VLMs
  - Reasoning
  - Language
  - Vision
  - Image
  - Understanding
pretty_name: FLIP Reasoning Challenge
---

# FLIP Reasoning Challenge Dataset

This repository contains the FLIP dataset, a benchmark for evaluating AI reasoning capabilities based on human verification tasks from the Idena blockchain. The dataset focuses on testing sequential reasoning, visual storytelling, and common sense understanding in multimodal AI systems.

Paper: https://arxiv.org/abs/2504.12256

## Dataset Description

FLIP challenges present users with two orderings (stacks) of the same four images and ask which ordering forms a coherent story. These tasks are designed to test complex reasoning rather than simple recognition.

Key features of the FLIP dataset:

- Created from human-generated and human-verified tasks from the Idena blockchain
- Tests sequential reasoning and visual storytelling abilities
- Provides clear ground truth, making it easy to diagnose model failures
- High human performance baseline (95.3% accuracy)

## Dataset Structure and Overview

```
flip_dataset/
├── train/
│   ├── images/
│   │   ├── image1.png
│   │   ├── image2.png
│   │   └── ...
│   └── tasks/
│       ├── task1.json
│       ├── task2.json
│       └── ...
├── validation/
│   ├── images/
│   └── tasks/
└── test/
    ├── images/
    └── tasks/
```

```yaml
dataset_info:
  features:
    - name: task_id
      dtype: string
    - name: task_data
      dtype: string
    - name: image_id
      dtype: string
    - name: image
      dtype: image
  splits:
    - name: train
      num_bytes: 1381093867.37
      num_examples: 34210
    - name: test
      num_bytes: 289834313.958
      num_examples: 7354
    - name: validation
      num_bytes: 297405372.216
      num_examples: 7317
  download_size: 1834376645
  dataset_size: 1968333553.544
```

## Task Format

Each task is stored as a JSON file with the following structure:

```json
{
  "task_id": "_flip_bafkreianuvtem5nababzw5z4iscr5ocvgaviilmemwn3o73jkak7bqrjde",
  "images": {
    "0": "46efd91c-be17-42b8-8f5e-2a84b96d21af",
    "1": "9d1fac84-0c9f-4ab7-9d3b-a3b4c61dc390",
    "2": "ceecdc8b-840c-46d7-b694-74f05839447f",
    "3": "cbdf27d1-aa84-405b-86db-cb336d0bc4a7"
  },
  "left_stack": ["2", "3", "1", "0"],
  "right_stack": ["3", "0", "2", "1"],
  "agreed_answer": ["Right", "Strong"],
  "votes": {"Left": "1", "Right": "4", "Reported": "0"},
  "details": {
    "Author:": "0x63f7aa6C19A0f7D4BBB4177000Af671ED212e490",
    "Epoch:": "#0027",
    "Size:": "86140 bytes",
    "Created:": "12/24/2019 13:23:51",
    "Block:": "669858",
    "Tx:": "0xdbca60c3d10770f4bc2f73fd9119d9509117a8db08196f128382bffbf3d8c79f"
  }
}
```

When processing tasks:

- The task ID is derived from the name field by replacing "/" with "_"
- Image IDs are extracted by removing the prefix "blob:https://scan.idena.io/"
- The dataset stores the image orderings as "left stack" and "right stack"
- Images are shuffled to prevent any accidental ordering cues
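The processing rules above can be sketched in a few lines of Python. The helper names (`task_id_from_name`, `load_task`, etc.) are our own, not part of the dataset's tooling; the transformations follow the steps listed above.

```python
import json

# Prefix stripped from image URLs, per the processing notes above.
IMAGE_URL_PREFIX = "blob:https://scan.idena.io/"


def task_id_from_name(name: str) -> str:
    """Derive the task ID by replacing '/' with '_'."""
    return name.replace("/", "_")


def image_id_from_url(url: str) -> str:
    """Strip the blob URL prefix to obtain the bare image ID."""
    if url.startswith(IMAGE_URL_PREFIX):
        return url[len(IMAGE_URL_PREFIX):]
    return url


def load_task(path: str) -> dict:
    """Read one task JSON and resolve both stacks to lists of image IDs."""
    with open(path) as f:
        task = json.load(f)
    images = task["images"]
    return {
        "task_id": task["task_id"],
        "left": [images[i] for i in task["left_stack"]],
        "right": [images[i] for i in task["right_stack"]],
        "answer": task["agreed_answer"][0],  # "Left" or "Right"
    }
```

Given the example task above, `load_task` would return the left stack as the image IDs at positions 2, 3, 1, 0 and the answer `"Right"`.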

## Dataset Statistics

- Total flips: 11,674
- Train set: 3,502 flips (30%)
- Validation set: 3,502 flips (30%)
- Test set: 4,670 flips (40%)
- Small subsets are also available for computationally intensive experimentation
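The split sizes and percentages quoted above are easy to sanity-check:

```python
# Split sizes from the statistics above.
total = 11_674
splits = {"train": 3_502, "validation": 3_502, "test": 4_670}

# The three splits account for every flip.
assert sum(splits.values()) == total

# Print each split's share of the dataset (~30% / 30% / 40%).
for name, n in splits.items():
    print(f"{name}: {n} flips ({n / total:.1%})")
```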

Solutions are nearly evenly distributed between Left (49.4%) and Right (50.6%), with most challenges having strong consensus (95.7%).

## Research Findings

The FLIP dataset has been used to evaluate various state-of-the-art AI models:

- Best open-source models achieve 75.5% accuracy in zero-shot settings
- Best closed-source models reach 77.9% accuracy
- Humans achieve 95.3% accuracy
- Captioning models aid reasoning models by providing text descriptions
- Ensemble methods can boost performance to 85.2%
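One simple way to combine several models, as in the ensemble result above, is a majority vote over their per-task answers. The sketch below is a generic majority-vote baseline under our own assumptions, not necessarily the exact ensembling method used in the paper:

```python
from collections import Counter


def majority_vote(predictions: list[str]) -> str:
    """Combine per-model answers ("Left"/"Right") by simple majority.

    Ties fall back to the first model's answer, so the function is
    deterministic for an even number of models.
    """
    counts = Counter(predictions)
    if counts["Left"] == counts["Right"]:
        return predictions[0]
    return counts.most_common(1)[0][0]


print(majority_vote(["Left", "Right", "Right"]))  # majority answer: Right
```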

These findings highlight the gap between current AI capabilities and human-level reasoning on complex multimodal tasks.

## Citation

If you use this dataset in your research, please cite:

```bibtex
@inproceedings{plesner2025flip,
  title={FLIP Reasoning Challenge},
  author={Plesner, Andreas and Kuzhagaliyev, Turlan and Wattenhofer, Roger},
  booktitle={First Workshop on Open Science for Foundation Models at ICLR 2025},
  year={2025}
}
```

## Acknowledgements

This dataset is derived from the Idena blockchain. We thank the Idena community for creating and validating these challenges.

## Contact

For questions or feedback, please contact: