

REPA-E: Unlocking VAE for End-to-End Tuning with Latent Diffusion Transformers
Xingjian Leng¹* · Jaskirat Singh¹* · Yunzhong Hou¹ · Zhenchang Xing² · Saining Xie³ · Liang Zheng¹
¹Australian National University · ²Data61-CSIRO · ³New York University
*Project Leads
Project Page · Models · Paper
We address a fundamental question: can latent diffusion models and their VAE tokenizer be trained end-to-end? While jointly training both components with the standard diffusion loss is observed to be ineffective, often degrading final performance, we show that this limitation can be overcome using a simple representation-alignment (REPA) loss. Our proposed method, REPA-E, enables stable and effective joint training of both the VAE and the diffusion model.
REPA-E significantly accelerates training, achieving over a 17× speedup compared to REPA and 45× over the vanilla training recipe. Interestingly, end-to-end tuning also improves the VAE itself: the resulting E2E-VAE provides better latent structure and serves as a drop-in replacement for existing VAEs (e.g., SD-VAE), improving convergence and generation quality across diverse LDM architectures. Our method achieves state-of-the-art FID scores on ImageNet 256×256: 1.26 with CFG and 1.83 without CFG.
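To make the recipe concrete, here is a minimal sketch of one end-to-end training step in the spirit of REPA-E. It is an illustration under stated assumptions, not the repository's implementation: the module names (`vae`, `dit`, `vision_encoder`, `align_head`), the `return_features=True` signature, and the SiT-style linear-interpolant noising are hypothetical stand-ins. The one substantive detail it encodes, stopping the diffusion-loss gradient at the latents so that only the representation-alignment term (plus the usual VAE regularizers) updates the VAE, follows the paper's description and should be checked against the official GitHub repo.

```python
import torch
import torch.nn.functional as F

# Hypothetical modules; illustrative names, not the official REPA-E API:
#   vae:            autoencoder exposing .encode(x) -> latents
#   dit:            latent diffusion transformer; assumed to return intermediate
#                   features when called with return_features=True
#   vision_encoder: frozen pretrained encoder (e.g., DINOv2) giving REPA targets
#   align_head:     small MLP projecting DiT features to the target dimension

def repa_e_step(vae, dit, vision_encoder, align_head, x, optimizer):
    """One sketched end-to-end training step in the spirit of REPA-E."""
    B = x.shape[0]
    z = vae.encode(x)                      # latents, grad-connected to the VAE
    noise = torch.randn_like(z)
    t = torch.rand(B, device=x.device)     # SiT-style linear interpolant (assumed)
    tb = t.view(B, 1, 1, 1)

    # Diffusion branch: stop-gradient on the latents, so the denoising loss
    # updates only the diffusion transformer. Naive joint training, where this
    # gradient reaches the VAE, is what the paper reports as degrading quality.
    z_t = (1 - tb) * noise + tb * z.detach()
    pred = dit(z_t, t)
    diffusion_loss = F.mse_loss(pred, z.detach() - noise)  # velocity target

    # REPA branch: grad-connected latents, so the alignment loss shapes both
    # the DiT features and the VAE latent space. A second forward pass is used
    # here purely to make the stop-gradient explicit.
    _, feats = dit((1 - tb) * noise + tb * z, t, return_features=True)
    with torch.no_grad():
        target = vision_encoder(x)         # (B, N, D) frozen patch features
    repa_loss = -F.cosine_similarity(align_head(feats), target, dim=-1).mean()

    # VAE regularizers (KL, reconstruction/GAN terms) omitted for brevity.
    loss = diffusion_loss + repa_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```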
Usage and Training
Please refer to our GitHub repo for detailed notes on end-to-end training and inference using REPA-E.
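In the meantime, the sketch below shows what using the E2E-VAE as a drop-in replacement could look like in practice. It assumes the released checkpoints (e.g., `REPA-E/e2e-sdvae`, listed under Models below) load as a diffusers `AutoencoderKL`; that format compatibility is an assumption to verify against the model cards.

```python
import torch
from diffusers import AutoencoderKL

# Assumption: the E2E-VAE checkpoint is stored in a diffusers-compatible
# AutoencoderKL format; verify against the model card before relying on this.
vae = AutoencoderKL.from_pretrained("REPA-E/e2e-sdvae").eval()

with torch.no_grad():
    x = torch.randn(1, 3, 256, 256)         # stand-in image batch scaled to [-1, 1]
    z = vae.encode(x).latent_dist.sample()  # latents for any downstream LDM
    x_rec = vae.decode(z).sample            # reconstruction back to pixel space
```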
Citation
@article{leng2025repae,
title={REPA-E: Unlocking VAE for End-to-End Tuning with Latent Diffusion Transformers},
author={Xingjian Leng and Jaskirat Singh and Yunzhong Hou and Zhenchang Xing and Saining Xie and Liang Zheng},
year={2025},
journal={arXiv preprint arXiv:2504.10483},
}
Models

REPA-E/sit-repae-vavae
REPA-E/sit-repae-invae
REPA-E/sit-repae-sdvae
REPA-E/e2e-vavae
REPA-E/e2e-invae
REPA-E/e2e-sdvae
REPA-E/sdvae
REPA-E/invae