
Efficient Generative Model Training via Embedded Representation Warmup

Published on Apr 14 · Submitted by SempraETY on Apr 16
Authors: Deyuan Liu, Peng Sun, Xufeng Li, Tao Lin
Abstract

Diffusion models excel at generating high-dimensional data but fall short of self-supervised methods in training efficiency and representation quality. We identify a key bottleneck: the underutilization of high-quality, semantically rich representations during training notably slows convergence. Our systematic analysis reveals a critical representation processing region -- primarily in the early layers -- where semantic and structural pattern learning takes place before generation can occur. To address this, we propose Embedded Representation Warmup (ERW), a plug-and-play framework whose first stage is a warmup in which the ERW module initializes the early layers of the diffusion model with high-quality, pretrained representations. This warmup reduces the burden of learning representations from scratch, thereby accelerating convergence and boosting performance. Our theoretical analysis demonstrates that ERW's efficacy depends on its precise integration into specific neural network layers -- termed the representation processing region -- where the model primarily processes and transforms feature representations for later generation. We further establish that ERW not only accelerates training convergence but also enhances representation quality: empirically, our method achieves a 40× acceleration in training speed compared to REPA, the current state-of-the-art method. Code is available at https://github.com/LINs-lab/ERW.
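
To make the warmup idea concrete, below is a minimal PyTorch-style sketch of one plausible reading of Phase 1: the early ("representation processing") blocks of the diffusion model are trained to match features from a frozen pretrained encoder before full generative training starts. All names (`model.early_blocks`, `teacher`, `proj`), the linear noising path, and the cosine-alignment objective are illustrative assumptions, not the authors' exact implementation; the paper's warmup mechanism may differ (e.g., direct weight embedding rather than feature alignment).

```python
import torch
import torch.nn.functional as F

def add_noise(x0, noise, t):
    # Simple linear interpolation path x_t = (1 - t) * x0 + t * noise (SiT-style), for illustration only.
    t = t.view(-1, 1, 1, 1)
    return (1.0 - t) * x0 + t * noise

def warmup_step(model, teacher, proj, x0, t, noise, optimizer):
    """One hypothetical ERW warmup step: align early-layer features with a frozen pretrained encoder.

    `model.early_blocks`, `teacher`, and `proj` are placeholder handles -- the first few
    transformer blocks of the diffusion model, a frozen encoder such as DINOv2, and a
    learnable projection into the teacher's feature space, respectively.
    """
    x_t = add_noise(x0, noise, t)
    h = model.early_blocks(x_t, t)            # (B, N, D) hidden states from the early layers
    with torch.no_grad():
        z = teacher(x0)                       # (B, N, D_teacher) target representations of the clean image
    # Encourage the early layers to reproduce the teacher's semantics via cosine alignment.
    loss = 1.0 - F.cosine_similarity(proj(h), z, dim=-1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```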

Community


Efficient Generative Model Training via Embedded Representation Warmup

Deyuan Liu1* · Peng Sun1,2* · Xufeng Li1,3 · Tao Lin1†
1 Westlake University   2 Zhejiang University   3 Nanjing University
* These authors contributed equally.   † Corresponding author.

[arXiv] [Project Page]

Summary:
Diffusion models have made impressive progress in generating high-fidelity images. However, training them from scratch requires learning both robust semantic representations and the generative process simultaneously. Our work introduces Embedded Representation Warmup (ERW) – a plug-and-play two-phase training framework that:

  • Phase 1 – Warmup: Initializes the early layers of the diffusion model with high-quality, pretrained visual representations (e.g., from DINOv2 or other self-supervised encoders).
  • Phase 2 – Full Training: Continues with standard diffusion training while gradually reducing the alignment loss, so the model can focus on refining generation (a minimal sketch of this two-phase weighting follows the list).
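
To make the two-phase schedule concrete, here is a minimal sketch of how the phase-dependent loss weighting could look. The linear decay of the alignment weight and the step counts are illustrative assumptions, not the released configuration; `diff_loss` and `align_loss` stand for a standard denoising objective and a representation-alignment term such as the one sketched under the abstract.

```python
def erw_total_loss(step, diff_loss, align_loss,
                   warmup_steps=10_000, decay_steps=90_000, lam0=1.0):
    """Combine the two training objectives across ERW's phases (illustrative only).

    Phase 1 (step < warmup_steps): optimize only the alignment objective to warm up
    the early layers. Phase 2: standard diffusion loss plus an alignment term whose
    weight decays toward zero (linear decay is an assumption, not the paper's choice).
    """
    if step < warmup_steps:
        return align_loss
    lam = lam0 * max(0.0, 1.0 - (step - warmup_steps) / decay_steps)
    return diff_loss + lam * align_loss
```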

🔥 News

  • (🔥 New) [2025/4/15] 🔥 ERW code & weights are released! 🎉 Training and inference code, along with weights on Hugging Face, are all available.

Acknowledgement

This code is mainly built upon the REPA, LightningDiT, DiT, SiT, edm2, and RCG repositories.

BibTeX

@misc{liu2025efficientgenerativemodeltraining,
      title={Efficient Generative Model Training via Embedded Representation Warmup},
      author={Deyuan Liu and Peng Sun and Xufeng Li and Tao Lin},
      year={2025},
      eprint={2504.10188},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2504.10188},
}
