
MMEB-V2 (Massive Multimodal Embedding Benchmark)

Paper Abstract

Multimodal embedding models have been crucial in enabling various downstream tasks such as semantic similarity, information retrieval, and clustering over different modalities. However, existing multimodal embeddings like VLM2Vec, E5-V, and GME are predominantly focused on natural images, with limited support for other visual forms such as videos and visual documents. This restricts their applicability in real-world scenarios, including AI agents, multimodal search and recommendation, and retrieval-augmented generation (RAG). To close this gap, we propose VLM2Vec-V2, a unified framework for learning embeddings across diverse visual forms. First, we introduce MMEB-V2, a comprehensive benchmark that extends MMEB with five new task types: visual document retrieval, video retrieval, temporal grounding, video classification, and video question answering, together spanning text, image, video, and visual document inputs. Next, we train VLM2Vec-V2, a general-purpose embedding model that supports text, image, video, and visual document inputs. Extensive experiments show that VLM2Vec-V2 not only achieves strong performance on the newly introduced video and document retrieval tasks, but also improves over prior baselines on the original image benchmarks. Through extensive evaluation, our study offers insights into the generalizability of various multimodal embedding models and highlights effective strategies for unified embedding learning, laying the groundwork for more scalable and adaptable representation learning in both research and real-world settings.

Building on our original MMEB, MMEB-V2 expands the evaluation scope to include five new tasks: four video-based tasks (Video Retrieval, Moment Retrieval, Video Classification, and Video Question Answering) and one task focused on visual documents, Visual Document Retrieval. This comprehensive suite enables robust evaluation of multimodal embedding models across static, temporal, and structured visual data settings.

This Hugging Face repository contains only the raw image and video files used in MMEB-V2, which need to be downloaded in advance. The test data for each task in MMEB-V2 is available here and will be automatically downloaded and used by our code. More details on how to set it up are provided in the following sections.

Website | GitHub | 🏆 Leaderboard | 📖 MMEB-V2/VLM2Vec-V2 Paper | 📖 MMEB-V1/VLM2Vec-V1 Paper

🚀 What's New

  • [2025.07] Released the tech report.
  • [2025.05] Initial release of MMEB-V2/VLM2Vec-V2.

Dataset Overview

We present an overview of the MMEB-V2 dataset below.

Dataset Structure

The directory structure of this Hugging Face repository is shown below. For video tasks, we provide both sampled frames and raw videos (the latter will be released later). For image tasks, we provide the raw images. Files from each meta-task are zipped together, resulting in six files. For example, video_cls.tar.gz contains the sampled frames for the video classification task.


video-tasks/
├── frames/
│   ├── video_cls.tar.gz
│   ├── video_qa.tar.gz
│   ├── video_ret.tar.gz
│   └── video_mret.tar.gz
└── raw videos/ (To be released)

image-tasks/
├── mmeb_v1.tar.gz
└── visdoc.tar.gz
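
If you prefer a scripted download over Git LFS or wget, the minimal sketch below fetches the archives with the huggingface_hub Python client; the local directory name MMEB_raw is an illustrative assumption for this example, not a required path.

```python
# Minimal sketch: download the MMEB-V2 archives with the huggingface_hub client.
# The destination directory name ("MMEB_raw") is an arbitrary choice for this example.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="TIGER-Lab/MMEB-V2",
    repo_type="dataset",        # this repository hosts data, not model weights
    local_dir="MMEB_raw",       # mirrors the repository tree shown above
)
print(f"Archives downloaded to: {local_path}")
```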

After downloading and unzipping these files locally, you can organize them as shown below. (You may choose to use Git LFS or wget for downloading.) Then, simply specify the correct file path in the configuration file used by your code.


MMEB/
├── video-tasks/
│   └── frames/
│       ├── video_cls/
│       │   ├── UCF101/
│       │   │   └── video_1/              # video ID
│       │   │       ├── frame1.png        # frame from video_1
│       │   │       ├── frame2.png
│       │   │       └── ...
│       │   ├── HMDB51/
│       │   ├── Breakfast/
│       │   └── ...                       # other datasets from the video classification category
│       ├── video_qa/
│       │   └── ...                       # video QA datasets
│       ├── video_ret/
│       │   └── ...                       # video retrieval datasets
│       └── video_mret/
│           └── ...                       # moment retrieval datasets
└── image-tasks/
    ├── mmeb_v1/
    │   ├── OK-VQA/
    │   │   ├── image1.png
    │   │   ├── image2.png
    │   │   └── ...
    │   ├── ImageNet-1K/
    │   └── ...                           # other datasets from the MMEB-V1 category
    └── visdoc/
        └── ...                           # visual document retrieval datasets
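
As one way to produce the layout above, the sketch below unpacks each archive with Python's standard tarfile module. It assumes the MMEB_raw download directory from the previous example and that each archive expands into a folder named after itself (e.g. video_cls.tar.gz into video_cls/).

```python
# Minimal sketch: unpack the downloaded archives into the layout shown above.
# The "MMEB_raw" source and "MMEB" destination paths are assumptions carried
# over from the download example, not required names.
import tarfile
from pathlib import Path

src = Path("MMEB_raw")  # output of snapshot_download (mirrors the repo tree)
dst = Path("MMEB")      # root of the target layout shown above

# Archives keyed by the repository subdirectory they live in.
archives = {
    "video-tasks/frames": ["video_cls.tar.gz", "video_qa.tar.gz",
                           "video_ret.tar.gz", "video_mret.tar.gz"],
    "image-tasks": ["mmeb_v1.tar.gz", "visdoc.tar.gz"],
}

for subdir, names in archives.items():
    out_dir = dst / subdir
    out_dir.mkdir(parents=True, exist_ok=True)
    for name in names:
        # Assumes e.g. video_cls.tar.gz expands into a video_cls/ folder.
        with tarfile.open(src / subdir / name) as tar:
            tar.extractall(out_dir)
```

Afterwards, point the file path in your configuration file at the resulting MMEB/ root.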
