Update README.md

dataset_size: 1911299268.0
---

# SciFIBench
NeurIPS 2024

## Dataset Description

- **Homepage:** [SciFIBench](https://scifibench.github.io/)
- **Paper:** [SciFIBench: Benchmarking Large Multimodal Models for Scientific Figure Interpretation](https://arxiv.org/pdf/2405.08807)
- **Repository:** [SciFIBench](https://github.com/jonathan-roberts1/SciFIBench)

### Dataset Summary

SciFIBench (the Scientific Figure Interpretation Benchmark) contains 2000 multiple-choice scientific figure interpretation questions covering two tasks. Task 1: Figure -> Caption involves selecting the most appropriate caption given a figure; Task 2: Caption -> Figure involves the opposite: selecting the most appropriate figure given a caption. The benchmark was curated from the SciCap and ArxivCap datasets, using adversarial filtering to obtain hard negatives, and each question has been human-verified to ensure it is high-quality and answerable.
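
Here, adversarial filtering means the distractor options for each question were chosen to be deliberately close to the correct one. The snippet below is only a toy sketch of that hard-negative idea (nearest neighbours in an embedding space): the actual encoder, candidate pool, and filtering criteria are those described in the paper, and the random vectors below merely stand in for real caption embeddings.

```python
import numpy as np

def hard_negatives(true_idx: int, embeddings: np.ndarray, k: int = 4) -> np.ndarray:
    """Pick the k captions most similar (cosine) to the true caption,
    to serve as hard-to-distinguish distractors for one question."""
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = unit @ unit[true_idx]
    sims[true_idx] = -np.inf  # exclude the true caption itself
    return np.argsort(sims)[::-1][:k]

# Toy stand-in for caption embeddings produced by a text encoder.
rng = np.random.default_rng(0)
caption_embeddings = rng.normal(size=(1000, 384))
distractor_ids = hard_negatives(true_idx=0, embeddings=caption_embeddings)
```

Loading and querying the released benchmark itself: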

```python
from datasets import load_dataset

# load dataset
dataset = load_dataset("jonathan-roberts1/SciFIBench") # optional: set cache_dir="PATH/TO/MY/CACHE/DIR"

# there are 4 dataset splits, which can also be loaded separately:
# cs_figure2caption_dataset = load_dataset("jonathan-roberts1/SciFIBench", split="CS_Figure2Caption")
# cs_caption2figure_dataset = load_dataset("jonathan-roberts1/SciFIBench", split="CS_Caption2Figure")
# general_figure2caption_dataset = load_dataset("jonathan-roberts1/SciFIBench", split="General_Figure2Caption")
# general_caption2figure_dataset = load_dataset("jonathan-roberts1/SciFIBench", split="General_Caption2Figure")
"""
DatasetDict({
    CS_Caption2Figure: Dataset({
        features: ['ID', 'Question', 'Options', 'Answer', 'Category', 'Images'],
        num_rows: 500
    })
    CS_Figure2Caption: Dataset({
        features: ['ID', 'Question', 'Options', 'Answer', 'Category', 'Images'],
        num_rows: 500
    })
    General_Caption2Figure: Dataset({
        features: ['ID', 'Question', 'Options', 'Answer', 'Category', 'Images'],
        num_rows: 500
    })
    General_Figure2Caption: Dataset({
        features: ['ID', 'Question', 'Options', 'Answer', 'Category', 'Images'],
        num_rows: 500
    })
})
"""

# select task and split
cs_figure2caption_dataset = dataset['CS_Figure2Caption']
"""
Dataset({
    features: ['ID', 'Question', 'Options', 'Answer', 'Category', 'Images'],
    num_rows: 500
})
"""

# query items
cs_figure2caption_dataset[40] # e.g., the 41st element
"""
{'ID': 40,
 'Question': 'Which caption best matches the image?',
 ...}
"""
```
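
With an item in hand, a typical evaluation loop renders the options as a lettered multiple-choice prompt and checks the model's choice against the gold answer. The sketch below is illustrative rather than the official harness: `predict` is a placeholder for the model under test, and it assumes that `Answer` holds the correct option's letter (e.g. 'B') and that `Images` is a list of PIL images; inspect an item first to confirm both.

```python
import string

def as_prompt(item: dict) -> str:
    """Render one SciFIBench item as a lettered multiple-choice prompt."""
    options = "\n".join(
        f"{string.ascii_uppercase[i]}) {opt}" for i, opt in enumerate(item["Options"])
    )
    return f"{item['Question']}\n{options}\nAnswer with a single letter."

def evaluate(split, predict) -> float:
    """Accuracy of `predict(prompt, images) -> letter` over one split."""
    correct = 0
    for item in split:
        # item["Images"] is assumed to be a list of PIL images; e.g. a figure
        # could be saved for inspection with item["Images"][0].save("figure.png")
        choice = predict(as_prompt(item), item["Images"])
        correct += choice.strip().upper().startswith(item["Answer"].strip().upper())
    return correct / len(split)

# usage, with my_model_predict standing in for the LMM being evaluated:
# accuracy = evaluate(cs_figure2caption_dataset, my_model_predict)
```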

### Source Data

More information regarding the source data can be found at: https://github.com/tingyaohsu/SciCap and https://mm-arxiv.github.io/.

### Dataset Curators