---
license: apache-2.0
language:
- en
pipeline_tag: image-to-3d
---
## Overview

This repository contains the models of the paper [LHM: Large Animatable Human Reconstruction Model for Single Image to 3D in Seconds](https://huggingface.co/papers/2503.10625). LHM is a feed-forward model for animatable 3D human reconstruction from a single image in seconds. Trained on a large-scale video dataset with an image reconstruction loss, the model exhibits strong generalization to diverse real-world scenarios.

## Quick Start

Please refer to our [GitHub repo](https://github.com/aigc3d/LHM/tree/main).

### Download Model

```python
from huggingface_hub import snapshot_download

# 500M-HF model
model_dir = snapshot_download(repo_id='3DAIGC/LHM-500M-HF', cache_dir='./pretrained_models/huggingface')

# 500M model
model_dir = snapshot_download(repo_id='3DAIGC/LHM-500M', cache_dir='./pretrained_models/huggingface')

# 1B model
model_dir = snapshot_download(repo_id='3DAIGC/LHM-1B', cache_dir='./pretrained_models/huggingface')
```

## Citation

```
@inproceedings{qiu2025LHM,
  title={LHM: Large Animatable Human Reconstruction Model from a Single Image in Seconds},
  author={Lingteng Qiu and Xiaodong Gu and Peihao Li and Qi Zuo and Weichao Shen and Junfei Zhang and Kejie Qiu and Weihao Yuan and Guanying Chen and Zilong Dong and Liefeng Bo},
  booktitle={arXiv preprint arXiv:2503.10625},
  year={2025}
}
```
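
## Locating the Downloaded Checkpoints

The sketch below is one way to download a single variant and inspect where its files land, so the checkpoint directory can be passed to the inference scripts in the GitHub repo. It is a minimal example using only `huggingface_hub` and the standard library; the repository id and cache directory mirror the snippet above, and the inference entry point itself is not shown here.

```python
import os
from huggingface_hub import snapshot_download

# Download one variant (500M-HF shown here) and keep the local path.
model_dir = snapshot_download(
    repo_id='3DAIGC/LHM-500M-HF',
    cache_dir='./pretrained_models/huggingface',
)

# List the downloaded files so the checkpoint can be located
# and handed to the inference scripts from the GitHub repo.
for root, _, files in os.walk(model_dir):
    for name in files:
        print(os.path.join(root, name))
```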