---
tags:
- pytorch_model_hub_mixin
---

<div align="center">
<h1>VGGT: Visual Geometry Grounded Transformer</h1>

<a href="https://jytime.github.io/data/VGGT_CVPR25.pdf" target="_blank" rel="noopener noreferrer">
<img src="https://img.shields.io/badge/Paper-VGGT" alt="Paper PDF">
</a>
<a href="https://arxiv.org/abs/2503.11651"><img src="https://img.shields.io/badge/arXiv-2503.11651-b31b1b" alt="arXiv"></a>
<a href="https://vgg-t.github.io/"><img src="https://img.shields.io/badge/Project_Page-green" alt="Project Page"></a>
<a href='https://huggingface.co/spaces/facebook/vggt'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Demo-blue'></a>

**[Meta AI Research](https://ai.facebook.com/research/)**; **[University of Oxford, VGG](https://www.robots.ox.ac.uk/~vgg/)**

[Jianyuan Wang](https://jytime.github.io/), [Minghao Chen](https://silent-chen.github.io/), [Nikita Karaev](https://nikitakaraevv.github.io/),
[Andrea Vedaldi](https://www.robots.ox.ac.uk/~vedaldi/), [Christian Rupprecht](https://chrirupp.github.io/), [David Novotny](https://d-novotny.github.io/)
</div>

## Overview

Visual Geometry Grounded Transformer (VGGT, CVPR 2025) is a feed-forward neural network that directly infers all key 3D attributes of a scene, including extrinsic and intrinsic camera parameters, point maps, depth maps, and 3D point tracks, **from one, a few, or hundreds of its views, within seconds**.

## Quick Start

Please refer to our [GitHub repo](https://github.com/facebookresearch/vggt) for detailed usage.
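
Since this model card carries the `pytorch_model_hub_mixin` tag, the checkpoint can be pulled straight from the Hub with `from_pretrained`. The sketch below assumes the package layout and the `facebook/VGGT-1B` checkpoint name used in the GitHub repo; treat the repo as the authoritative, up-to-date reference.

```python
# Minimal inference sketch, assuming the package layout from
# https://github.com/facebookresearch/vggt; the import paths, the
# `load_and_preprocess_images` helper, and the `facebook/VGGT-1B`
# checkpoint name are taken from that repo and may change.
import torch
from vggt.models.vggt import VGGT
from vggt.utils.load_fn import load_and_preprocess_images

device = "cuda" if torch.cuda.is_available() else "cpu"

# PyTorchModelHubMixin lets the weights download straight from the Hub.
model = VGGT.from_pretrained("facebook/VGGT-1B").to(device)

# One, a few, or hundreds of views of the same scene.
images = load_and_preprocess_images(["scene/frame_0.png", "scene/frame_1.png"]).to(device)

with torch.no_grad():
    # A single feed-forward pass predicts cameras, depth, point maps, and tracks.
    predictions = model(images)
```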

## Citation

If you find our repository useful, please consider giving it a star ⭐ and citing our paper in your work:

```bibtex
@inproceedings{wang2025vggt,
  title={VGGT: Visual Geometry Grounded Transformer},
  author={Wang, Jianyuan and Chen, Minghao and Karaev, Nikita and Vedaldi, Andrea and Rupprecht, Christian and Novotny, David},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2025}
}
```