---
license: apache-2.0
language:
- en
pipeline_tag: depth-estimation
new_version: prs-eth/marigold-depth-v1-1
pinned: true
tags:
- depth estimation
- image analysis
- computer vision
- in-the-wild
- zero-shot
---

# Marigold Depth v1-0 Model Card


**NEW:** [Marigold Depth v1-1 Model](https://huggingface.co/prs-eth/marigold-depth-v1-1)

This is a model card for the `marigold-depth-v1-0` model for monocular depth estimation from a single image. The model is fine-tuned from the `stable-diffusion-2` [model](https://huggingface.co/stabilityai/stable-diffusion-2) as described in our [CVPR'2024 paper](https://arxiv.org/abs/2312.02145) titled "Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation".

- Play with the interactive [Hugging Face Spaces demo](https://huggingface.co/spaces/prs-eth/marigold): check out how the model works with example images or upload your own.
- Use it with [diffusers](https://huggingface.co/docs/diffusers/using-diffusers/marigold_usage) to compute the results with a few lines of code (see the usage sketches at the end of this card).
- Get to the bottom of things with our [official codebase](https://github.com/prs-eth/marigold).

## Model Details

- **Developed by:** [Bingxin Ke](http://www.kebingxin.com/), [Anton Obukhov](https://www.obukhov.ai/), [Shengyu Huang](https://shengyuh.github.io/), [Nando Metzger](https://nandometzger.github.io/), [Rodrigo Caye Daudt](https://rcdaudt.github.io/), [Konrad Schindler](https://scholar.google.com/citations?user=FZuNgqIAAAAJ).
- **Model type:** Generative latent diffusion-based affine-invariant monocular depth estimation from a single image.
- **Language:** English.
- **License:** [Apache License Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
- **Model Description:** This model can be used to generate an estimated depth map of an input image.
- **Resolution:** Although any resolution can be processed, the model inherits the base diffusion model's effective resolution of roughly **768** pixels. For optimal predictions, any larger input image should therefore be resized so that its longer side is 768 pixels before feeding it into the model.
- **Steps and scheduler:** This model was designed for use with the **DDIM** scheduler and between **10 and 50** denoising steps. Good predictions can also be obtained with just **one** step by overriding the `"timestep_spacing": "trailing"` setting in the [scheduler configuration file](scheduler/scheduler_config.json), or by adding `pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")` after the pipeline is loaded and before its first use (see the scheduler sketch at the end of this card). For compatibility reasons, this `v1-0` model is kept identical to the paper setting; a [newer v1-1 model](https://huggingface.co/prs-eth/marigold-depth-v1-1) provides optimal settings for all possible step configurations.
- **Outputs:**
  - **Affine-invariant depth map:** The predicted values are between 0 and 1, interpolating between the near and far planes of the model's choice.
  - **Uncertainty map:** Produced only when multiple predictions are ensembled with an ensemble size larger than 2.
- **Resources for more information:** [Project Website](https://marigoldmonodepth.github.io/), [Paper](https://arxiv.org/abs/2312.02145), [Code](https://github.com/prs-eth/marigold).
- **Cite as:**

```bibtex
@InProceedings{ke2023repurposing,
    title={Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation},
    author={Bingxin Ke and Anton Obukhov and Shengyu Huang and Nando Metzger and Rodrigo Caye Daudt and Konrad Schindler},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2024}
}
```
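## Usage Sketches

The snippet below is a minimal sketch of driving this checkpoint through the `diffusers` Marigold pipeline, following the [usage guide](https://huggingface.co/docs/diffusers/using-diffusers/marigold_usage) linked above. It assumes a `diffusers` release that ships `MarigoldDepthPipeline`, a CUDA device, and that this repository provides an `fp16` weights variant; the example image URL is taken from the project website.

```python
import diffusers
import torch

# Load the depth pipeline; fp16 weights keep GPU memory usage low.
pipe = diffusers.MarigoldDepthPipeline.from_pretrained(
    "prs-eth/marigold-depth-v1-0", variant="fp16", torch_dtype=torch.float16
).to("cuda")

# Any RGB image works; this example is from the project website.
image = diffusers.utils.load_image(
    "https://marigoldmonodepth.github.io/images/einstein.jpg"
)

# By default the pipeline resizes larger inputs so the longer side is 768 px.
depth = pipe(image)

# Colorize the affine-invariant depth map (values in [0, 1]) for inspection.
vis = pipe.image_processor.visualize_depth(depth.prediction)
vis[0].save("einstein_depth_colored.png")

# Export the raw prediction as a 16-bit PNG for downstream use.
depth_16bit = pipe.image_processor.export_depth_to_16bit_png(depth.prediction)
depth_16bit[0].save("einstein_depth_16bit.png")
```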
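As noted under **Steps and scheduler**, single-step inference with this `v1-0` checkpoint requires trailing timestep spacing. A sketch of the override, reusing `pipe` and `image` from the previous snippet:

```python
from diffusers import DDIMScheduler

# Rebuild the scheduler from the shipped config with trailing timestep spacing,
# so that a single denoising step is taken at the final training timestep
# rather than an intermediate one.
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config, timestep_spacing="trailing"
)

# One denoising step is now enough for a usable prediction.
depth_fast = pipe(image, num_inference_steps=1)
```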
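The **Uncertainty map** output described under **Outputs** is exposed through ensembling. The sketch below again reuses `pipe` and `image`, and assumes the `ensemble_size`/`output_uncertainty` arguments and the `visualize_uncertainty` helper of current `diffusers` releases:

```python
# Aggregate several stochastic predictions; with an ensemble size above 2 the
# pipeline can also return a per-pixel uncertainty map alongside the depth.
depth = pipe(
    image,
    num_inference_steps=10,
    ensemble_size=5,
    output_uncertainty=True,
)

vis_depth = pipe.image_processor.visualize_depth(depth.prediction)
vis_depth[0].save("depth_ensembled.png")

vis_unc = pipe.image_processor.visualize_uncertainty(depth.uncertainty)
vis_unc[0].save("depth_uncertainty.png")
```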