HRScene - High Resolution Image Understanding
🌐 Homepage | 🤗 Dataset | 📖 arXiv | GitHub
⭐ About HRScene
We introduce HRScene, a novel unified benchmark for high-resolution image (HRI) understanding with rich scenes. HRScene incorporates 25 real-world datasets and 2 synthetic diagnostic datasets with resolutions ranging from 1,024 × 1,024 to 35,503 × 26,627 pixels. HRScene was collected and re-annotated by 10 graduate-level annotators and covers 25 scenarios, ranging from microscopic and radiology images to street views, long-range pictures, and telescope images. It includes high-resolution images of real-world objects, scanned documents, and composite multi-image samples.

Figure: Some examples of HRScene. Blue denotes diagnostic datasets; purple denotes real-world datasets.
HRScene consists of 7,073 samples, divided into three splits:
- Val contains 750 samples. These are the human-annotated samples, designed for fine-grained validation of users' VLM settings.
- Testmini comprises 1,000 samples drawn from each HRScene real-world dataset, intended for rapid model development or for users with limited computing resources.
- Test features the remaining 5,323 samples for standard evaluation. Notably, the answer labels for this split are not publicly released, to facilitate fair evaluation; instead, we maintain an online evaluation platform for user submissions.
📖 Dataset Usage
Data Downloading
With our pipeline, you don't need to download the dataset manually. For `whitebackground` and `complexgrid`, you only need to set the `dataset_name` for the tester, as we do in `Diagnosis/example.py`:
```python
tester = DiagnosisTester(model=model, dataset_name="complexgrid_3x3", num_samples=150)
```
For `realworld`, you need to set both the `dataset_name` and the `split` for the tester, as we do in `RealWorld/example.py`:
```python
tester = RealWorldTester(model=model, dataset_name="realworld_combined", split="test")
```
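The `model` passed to either tester is the wrapper object defined in the GitHub examples. As a rough, non-authoritative sketch of what such a wrapper might look like (this callable signature is our assumption for illustration, not the repo's actual interface):

```python
# Hypothetical wrapper -- the real model interface is defined in the
# HRScene GitHub examples; this (image, question) -> str signature is
# an assumption made here for illustration only.
class MyVLM:
    def __call__(self, image, question: str) -> str:
        # Run your VLM on the image/question pair and return its text answer.
        return "..."
```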
Alternatively, if you want to download the dataset manually, you can use the following code:
```python
from datasets import load_dataset

# for whitebackground and complexgrid, we only have a 'test' split
dataset = load_dataset("Wenliang04/HRScene", "whitebackground_1x1")
for sample in dataset['test']:
    print(sample)

# for realworld, we have 'testmini', 'validation', and 'test' splits
dataset = load_dataset("Wenliang04/HRScene", "realworld_combined")
for sample in dataset['test']:
    print(sample)
```
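Because HRScene images can reach 35,503 × 26,627 pixels, you may want to cap their resolution before feeding them to a VLM. A minimal sketch, where the `MAX_SIDE` limit and the resizing policy are our own illustration rather than part of the official pipeline:

```python
from datasets import load_dataset

MAX_SIDE = 2048  # illustrative cap, not an official preprocessing step

dataset = load_dataset("Wenliang04/HRScene", "realworld_combined", split="testmini")
for sample in dataset:
    img = sample["image"]  # decoded as a PIL image
    scale = MAX_SIDE / max(img.size)
    if scale < 1:
        img = img.resize((round(img.width * scale), round(img.height * scale)))
    # img is now at most MAX_SIDE pixels on its longest side
```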
Data Format
WhiteBackground
Fields: `id: int`, `image: PIL.JpegImagePlugin.JpegImageFile`, `question: str`, `answer: list[str]`

```python
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=448x448 at 0x7F01D88BF7A0>, 'id': 0, 'question': 'Is it daytime?', 'answer': ['no', 'no', 'no', 'no', 'no', 'no', 'no', 'no', 'no', 'no']}
```
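The ten strings in `answer` look like independent annotator responses, as in classic VQA datasets. The card does not spell out how to aggregate them, but assuming that reading, a simple majority vote is one natural option:

```python
from collections import Counter

def consensus_answer(answers: list[str]) -> str:
    # Majority vote over annotator answers -- an assumption about how the
    # list is meant to be used, not an official HRScene metric.
    return Counter(a.strip().lower() for a in answers).most_common(1)[0][0]

print(consensus_answer(['no'] * 10))  # -> 'no'
```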
ComplexGrid
Fields: `id: str`, `image: PIL.JpegImagePlugin.JpegImageFile`, `caption: str`, `answer: str`

```python
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1464x1524 at 0x7FB8634E6B70>, 'id': '0_0_0', 'caption': 'A nice living room has chairs and a love seat.', 'answer': 'row: 1, col: 1'}
```
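ComplexGrid answers follow a `row: R, col: C` pattern, so a model's free-form reply can be compared against the label after parsing. A small sketch; the regex and normalization below are ours, not the official scorer:

```python
import re

def parse_grid_answer(text: str):
    # Extract (row, col) from strings like 'row: 1, col: 1'; None if absent.
    m = re.search(r"row:\s*(\d+),\s*col:\s*(\d+)", text)
    return (int(m.group(1)), int(m.group(2))) if m else None

print(parse_grid_answer("row: 1, col: 1"))  # -> (1, 1)
```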
RealWorld
Fields: `id: int`, `image: PIL.Image.Image`, `question: str`, `answer: str`

```python
{'id': 0, 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=5760x1200 at 0x7F4994CB75F0>, 'question': 'What is motion of the pedestrian wearing blue top on the left?\n(A) crossing the crosswalk\n(B) standing\n(C) jaywalking (illegally crossing not at pedestrian crossing)\n(D) walking on the sidewalk\n(E) The image does not feature the object', 'answer': 'None'}
```
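RealWorld questions embed their options as (A)-(E), so scoring a prediction usually reduces to pulling a single option letter out of the model's reply (the `'None'` answer above reflects the withheld test-split labels mentioned earlier). A hedged sketch; this extraction heuristic is ours, not the official evaluator:

```python
import re

def extract_choice(reply: str):
    # Return the first standalone option letter A-E, or None if absent.
    m = re.search(r"\b([A-E])\b", reply)
    return m.group(1) if m else None

print(extract_choice("The answer is (C)."))  # -> 'C'
```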
🏆 Leaderboard 🏆
Leaderboard on the RealWorld task on the test split (top 5 shown):

| # | Model | Art | Daily | Medical | Paper | Remote | Research | Sub-Img | Urban | Average |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Qwen2-VL-72B-Instruct | 72.7 | 64.3 | 45.9 | 76.5 | 54.9 | 46.8 | 79.3 | 46.2 | 62.2 |
| 2 | gemini-2-flash | 74.3 | 59.4 | 57.1 | 75.3 | 56.1 | 41.9 | 73.2 | 40.2 | 60.3 |
| 3 | InternVL2-40B | 70.2 | 62.8 | 35.4 | 67.6 | 50.3 | 51.4 | 77.2 | 41.0 | 58.1 |
| 4 | Qwen2-VL-7B-Instruct | 71.0 | 61.4 | 48.5 | 62.9 | 55.6 | 46.0 | 79.5 | 34.4 | 57.7 |
| 5 | LLaVA-OneVision-72B | 65.1 | 64.3 | 49.8 | 65.0 | 48.0 | 55.6 | 63.7 | 41.1 | 56.9 |
We provide a simple pipeline for automatic model prediction and submission-file generation! You can find the pipeline in our GitHub repository under the "🔮 Evaluations on HRScene for RealWorld Task" section.
✅ Cite
```bibtex
@article{zhang2025hrscene,
  title={HRScene: How Far Are VLMs from Effective High-Resolution Image Understanding?},
  author={Zhang, Yusen and Zheng, Wenliang and Madasu, Aashrith and Shi, Peng and Kamoi, Ryo and Zhou, Hao and Zou, Zhuoyang and Zhao, Shu and Das, Sarkar Snigdha Sarathi and Gupta, Vipul and Lu, Xiaoxin and Zhang, Nan and Zhang, Ranran Haoran and Iyer, Avitej and Lou, Renze and Yin, Wenpeng and Zhang, Rui},
  journal={arXiv preprint},
  year={2025}
}
```