---
dataset_info:
  features:
    - name: index
      dtype: int64
    - name: question
      dtype: string
    - name: A
      dtype: string
    - name: B
      dtype: string
    - name: C
      dtype: string
    - name: D
      dtype: string
    - name: E
      dtype: string
    - name: answer
      dtype: string
    - name: category
      dtype: string
    - name: clinical VQA task
      dtype: string
    - name: department
      dtype: string
    - name: perceptual granularity
      dtype: string
    - name: modality
      dtype: string
    - name: original task
      dtype: string
    - name: image_path
      dtype: string
  splits:
    - name: validation
      num_bytes: 72470811
      num_examples: 10
  download_size: 72466952
  dataset_size: 72470811
task_categories:
  - visual-question-answering
language:
  - en
tags:
  - medical
modalities:
  - image
  - text
arxiv:
  - https://arxiv.org/abs/2507.17539
license: creativeml-openrail-m
---

# Fundus-MMBench

Benchmark for the paper *Constructing Ophthalmic MLLM for Positioning-diagnosis Collaboration Through Clinical Cognitive Chain Reasoning*.

🚨 Important: This benchmark is for academic research only.

## Dataset Viewer Notice

🚨 The dataset viewer above shows only a preview of the first 10 rows, intended to give a quick look at the data structure. The full dataset contains 620 samples. To access the complete dataset, please download the full TSV file.

## Introduction

### Fundus-MMBench Details

Fundus-MMBench contains 20 test samples per task category, for 620 samples in total. It consists of 31 fine-grained tasks covering three core clinical domains: region-based object recognition (e.g., optic disc identification), disease classification (e.g., glaucoma vs. non-glaucoma diagnosis), and severity grading (e.g., diabetic retinopathy severity assessment).
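Because every item carries a `category` column (see the schema above), per-task accuracy follows from a simple group-by over model predictions. A minimal sketch with pandas, using toy results and two of the task names mentioned above (the specific rows are invented for illustration):

```python
import pandas as pd

# Hypothetical evaluation results: one row per benchmark item, with the
# task category and whether the model answered that item correctly.
results = pd.DataFrame({
    "category": ["optic disc identification"] * 3 + ["glaucoma classification"] * 3,
    "correct":  [True, True, False, True, False, False],
})

# Mean of the boolean column within each category gives per-task accuracy.
per_task_acc = results.groupby("category")["correct"].mean()
print(per_task_acc)

# Overall accuracy across all items.
overall = results["correct"].mean()
print(f"overall accuracy: {overall:.3f}")  # 3 of 6 correct -> 0.500
```

On the real benchmark each of the 31 groups would hold exactly 20 rows, so the overall score is the unweighted mean of the per-task scores.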

## Usage

You can run the evaluation on Fundus-MMBench using open-compass/VLMEvalKit. Note that Fundus-MMBench (TSV version) is not officially supported by VLMEvalKit, but it can be evaluated as a custom MCQ dataset.
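As a sketch of what a row of the TSV looks like, here is a single illustrative record read with pandas. The column names come from the dataset schema above; the question, options, answer, and image path are invented placeholders, not real benchmark data:

```python
import io

import pandas as pd

# One invented row in the benchmark's TSV layout (subset of the columns
# listed in the schema: index, question, A-E, answer, category, image_path).
tsv = io.StringIO(
    "index\tquestion\tA\tB\tC\tD\tE\tanswer\tcategory\timage_path\n"
    "0\tWhich structure is highlighted?\tOptic disc\tMacula\tFovea\tVessel\tNone\tA\t"
    "region-based object recognition\timages/0000.jpg\n"
)

df = pd.read_csv(tsv, sep="\t")
print(df.columns.tolist())
print(df.loc[0, "answer"])  # -> A
```

A file in this shape matches the multiple-choice layout that VLMEvalKit's custom MCQ path expects (an index, a question, lettered options, and an answer column).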

## Data Source

We would like to thank these contributors.