Describe Anything: Detailed Localized Image and Video Captioning
Abstract
Generating detailed and accurate descriptions for specific regions in images and videos remains a fundamental challenge for vision-language models. We introduce the Describe Anything Model (DAM), a model designed for detailed localized captioning (DLC). DAM preserves both local details and global context through two key innovations: a focal prompt, which ensures high-resolution encoding of targeted regions, and a localized vision backbone, which integrates precise localization with the broader image context. To tackle the scarcity of high-quality DLC data, we propose a semi-supervised learning (SSL)-based data pipeline (DLC-SDP). DLC-SDP starts from existing segmentation datasets and expands to unlabeled web images using SSL. We also introduce DLC-Bench, a benchmark designed to evaluate DLC without relying on reference captions. DAM sets a new state of the art on 7 benchmarks spanning keyword-level, phrase-level, and detailed multi-sentence localized image and video captioning.
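The focal prompt can be pictured as pairing the full image and region mask with a context-expanded, high-resolution crop of the target region. The following is a minimal illustrative sketch of that idea, not the paper's implementation; the helper name, the binary-mask input, and the expansion factor are assumptions made here for clarity.

```python
# Hedged sketch of the "focal prompt" idea: keep the full image for global
# context and add a context-expanded crop around the target region so the
# region can be encoded at higher effective resolution. Illustrative only.
import numpy as np
from PIL import Image

def focal_prompt(image: Image.Image, mask: np.ndarray, expand: float = 3.0):
    """Return (full image, full mask, focal crop, focal-crop mask)."""
    ys, xs = np.nonzero(mask)                      # pixels of the target region
    y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
    # Expand the tight bounding box around its center to keep surrounding context.
    cy, cx = (y0 + y1) / 2, (x0 + x1) / 2
    half_h = max(y1 - y0, 1) * expand / 2
    half_w = max(x1 - x0, 1) * expand / 2
    top = int(max(cy - half_h, 0))
    bottom = int(min(cy + half_h, mask.shape[0]))
    left = int(max(cx - half_w, 0))
    right = int(min(cx + half_w, mask.shape[1]))
    crop = image.crop((left, top, right, bottom))  # high-resolution focal crop
    crop_mask = mask[top:bottom, left:right]       # mask aligned to the crop
    return image, mask, crop, crop_mask
```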
Community
We’re excited to introduce the Describe Anything Model (DAM), a powerful multimodal large language model (MLLM) that generates detailed descriptions for user-defined regions in images or videos, specified with points, boxes, scribbles, or masks. Links are below; a short sketch for querying the hosted demo programmatically follows them.
Hugging Face Demo (super cool): https://huggingface.co/spaces/nvidia/describe-anything-model-demo
Code: https://github.com/NVlabs/describe-anything
Project Page (with a 3-minute video): https://describe-anything.github.io
Models, Datasets, and Benchmark: https://huggingface.co/collections/nvidia/describe-anything-680825bb8f5e41ff0785834c
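If you want to reach the hosted demo from Python, the Space can be queried through gradio_client. The endpoint names and input formats are defined by the Space itself and are not documented here, so this sketch only inspects the API rather than assuming a particular call signature.

```python
# Minimal sketch for programmatic access to the hosted demo via gradio_client.
# The Space id comes from the demo link above; its endpoints and expected
# inputs are defined by the Space, so we list them instead of guessing.
from gradio_client import Client

client = Client("nvidia/describe-anything-model-demo")
client.view_api()  # prints the available endpoints and their expected inputs
```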