Modalities: Image, Text
Formats: parquet
Libraries: Datasets, Dask
PY007 committed · Commit ba85ed9 · verified · 1 parent: 90f3407

Update README.md

Files changed (1): README.md +24 -0
README.md CHANGED
@@ -77,3 +77,27 @@ configs:
   - split: test
     path: data/test-*
 ---
+
+
+<p align="center" width="100%">
+<img src="https://i.postimg.cc/g0QRgMVv/WX20240228-113337-2x.png" width="100%" height="80%">
+</p>
+
+# Large-scale Multi-modality Models Evaluation Suite
+
+> Accelerating the development of large-scale multi-modality models (LMMs) with `lmms-eval`
+
+🏠 [Homepage](https://lmms-lab.github.io/) | 📚 [Documentation](docs/README.md) | 🤗 [Huggingface Datasets](https://huggingface.co/lmms-lab)
+
+# This Dataset
+
+This is a formatted version of [CMMMU](https://cmmmu-benchmark.github.io/). It is used in our `lmms-eval` pipeline to allow for one-click evaluations of large multi-modality models.
+
+```
+@article{zhang2024cmmmu,
+  title={CMMMU: A Chinese Massive Multi-discipline Multimodal Understanding Benchmark},
+  author={Ge, Zhang and Xinrun, Du and Bei, Chen and Yiming, Liang and Tongxu, Luo and Tianyu, Zheng and Kang, Zhu and Yuyang, Cheng and Chunpu, Xu and Shuyue, Guo and Haoran, Zhang and Xingwei, Qu and Junjie, Wang and Ruibin, Yuan and Yizhi, Li and Zekun, Wang and Yudong, Liu and Yu-Hsuan, Tsai and Fengji, Zhang and Chenghua, Lin and Wenhao, Huang and Wenhu, Chen and Jie, Fu},
+  journal={arXiv preprint arXiv:2401.20847},
+  year={2024},
+}
+```
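
The README added above says the dataset is consumed through the `lmms-eval` pipeline; the dataset card also lists the Hugging Face `datasets` library, so it can be pulled directly as well. Below is a minimal sketch of that direct route. The repository id `lmms-lab/CMMMU` is an assumption based on the linked organization page, and the `test` split is taken from the configs shown in the diff context; adjust both if the actual repo differs.

```python
# Minimal sketch: load this formatted CMMMU dataset with the `datasets` library.
# Assumption: the repository id is "lmms-lab/CMMMU" (not confirmed by the page).
from datasets import load_dataset

# The YAML configs in the README define a "test" split backed by data/test-* parquet files.
# If the dataset defines multiple configs, pass the config name as the second argument.
ds = load_dataset("lmms-lab/CMMMU", split="test")

print(ds)            # number of rows and column names
print(ds[0].keys())  # fields of a single example
```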