Efficient Generative Model Training via Embedded Representation Warmup
Abstract
Diffusion models excel at generating high-dimensional data but fall short of self-supervised methods in training efficiency and representation quality. We identify a key bottleneck: the underutilization of high-quality, semantically rich representations during training notably slows down convergence. Our systematic analysis reveals a critical representation processing region -- primarily in the early layers -- where semantic and structural pattern learning takes place before generation can occur. To address this, we propose Embedded Representation Warmup (ERW), a plug-and-play framework in which the first stage uses the ERW module as a warmup that initializes the early layers of the diffusion model with high-quality, pretrained representations. This warmup reduces the burden of learning representations from scratch, thereby accelerating convergence and boosting performance. Our theoretical analysis demonstrates that ERW's efficacy depends on its precise integration into specific neural network layers -- termed the representation processing region -- where the model primarily processes and transforms feature representations for later generation. We further establish that ERW not only accelerates training convergence but also enhances representation quality: empirically, our method achieves a 40× speedup in training compared to REPA, the current state-of-the-art method. Code is available at https://github.com/LINs-lab/ERW.
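A minimal sketch of the Stage-1 warmup objective, assuming a PyTorch-style setup; `early_layers`, `proj`, and `encoder` are illustrative placeholders rather than the released interface:

```python
# Hedged sketch: fit the early diffusion layers to the features of a frozen
# pretrained encoder (e.g., DINOv2) before full generative training begins.
import torch
import torch.nn.functional as F

def warmup_alignment_loss(early_layers, proj, encoder, x):
    with torch.no_grad():
        target = encoder(x)            # frozen semantic features, shape (B, N, D_enc)
    feats = proj(early_layers(x))      # early-layer features projected to D_enc
    # Negative cosine similarity, averaged over tokens and the batch.
    return -F.cosine_similarity(feats, target, dim=-1).mean()
```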
Community
Efficient Generative Model Training via Embedded Representation Warmup
1 Westlake University 2 Zhejiang University 3 Nanjing University
* These authors contributed equally. † Corresponding author.
[arXiv] [Project Page]
Summary:
Diffusion models have made impressive progress in generating high-fidelity images. However, training them from scratch requires learning both robust semantic representations and the generative process simultaneously. Our work introduces Embedded Representation Warmup (ERW) – a plug-and-play two-phase training framework that:
- Phase 1 – Warmup: Initializes the early layers of the diffusion model with high-quality, pretrained visual representations (e.g., from DINOv2 or other self-supervised encoders).
- Phase 2 – Full Training: Continues with standard diffusion training while gradually reducing the alignment loss, so the model can focus on refining generation (a minimal sketch of this schedule follows below).
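A hedged sketch of how the two phases could be combined into a single loss schedule; the function names and the `warmup_steps` / `decay_steps` hyperparameters are assumptions for illustration, not the repository's actual API:

```python
def erw_loss(step, loss_gen, loss_align, warmup_steps, decay_steps):
    """Combine the generation and alignment losses across the two ERW phases."""
    if step < warmup_steps:
        # Phase 1 (warmup): only fit the early layers to the pretrained
        # representations; full generative training has not started yet.
        return loss_align
    # Phase 2 (full training): standard diffusion loss plus an alignment term
    # whose weight decays linearly to zero over `decay_steps`.
    lam = max(0.0, 1.0 - (step - warmup_steps) / decay_steps)
    return loss_gen + lam * loss_align
```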
🔥 News
- (🔥 New) [2025/4/15] 🔥 ERW code & weights are released! 🎉 Training & inference code and weights on Hugging Face are now available.
Acknowledgement
This code is mainly built upon the REPA, LightningDiT, DiT, SiT, edm2, and RCG repositories.
BibTeX
@misc{liu2025efficientgenerativemodeltraining,
  title={Efficient Generative Model Training via Embedded Representation Warmup},
  author={Deyuan Liu and Peng Sun and Xufeng Li and Tao Lin},
  year={2025},
  eprint={2504.10188},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2504.10188},
}
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- SARA: Structural and Adversarial Representation Alignment for Training-efficient Diffusion Models (2025)
- USP: Unified Self-Supervised Pretraining for Image Generation and Understanding (2025)
- Deeply Supervised Flow-Based Generative Models (2025)
- Underlying Semantic Diffusion for Effective and Efficient In-Context Learning (2025)
- FlowTok: Flowing Seamlessly Across Text and Image Tokens (2025)
- AttenST: A Training-Free Attention-Driven Style Transfer Framework with Pre-Trained Diffusion Models (2025)
- MergeVQ: A Unified Framework for Visual Generation and Representation with Disentangled Token Merging and Quantization (2025)