Geometry-Editable and Appearance-Preserving Object Composition
Abstract
The Disentangled Geometry-editable and Appearance-preserving Diffusion (DGAD) model effectively integrates target objects into background scenes by using semantic embeddings for geometry and cross-attention for appearance alignment.
General object composition (GOC) aims to seamlessly integrate a target object into a background scene with the desired geometric properties, while simultaneously preserving its fine-grained appearance details. Recent approaches derive semantic embeddings and integrate them into advanced diffusion models to enable geometry-editable generation. However, these highly compact embeddings encode only high-level semantic cues and inevitably discard fine-grained appearance details. We introduce a Disentangled Geometry-editable and Appearance-preserving Diffusion (DGAD) model that first leverages semantic embeddings to implicitly capture the desired geometric transformations, and then employs a cross-attention retrieval mechanism to align fine-grained appearance features with the geometry-edited representation, facilitating both precise geometry editing and faithful appearance preservation in object composition. Specifically, DGAD uses CLIP/DINO-derived encoders to extract semantic embeddings and reference networks to extract appearance-preserving representations, which are then seamlessly integrated into the encoding and decoding pipelines in a disentangled manner. We first integrate the semantic embeddings into pre-trained diffusion models, whose strong spatial reasoning capabilities allow the object geometry to be captured implicitly, thereby facilitating flexible object manipulation and ensuring effective editability. We then design a dense cross-attention mechanism that leverages the implicitly learned object geometry to retrieve appearance features and spatially align them with their corresponding regions, ensuring faithful appearance consistency. Extensive experiments on public benchmarks demonstrate the effectiveness of the proposed DGAD framework.
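The dense cross-attention retrieval step described above can be pictured as follows. This is a minimal sketch, assuming single-head attention, token shapes of (batch, tokens, channels), and residual injection of the retrieved appearance features; the module name, dimensions, and toy inputs are hypothetical illustrations, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseCrossAttentionRetrieval(nn.Module):
    """Queries come from the geometry-edited latent; keys/values come
    from the reference network's appearance-preserving features."""
    def __init__(self, dim: int):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.scale = dim ** -0.5

    def forward(self, geometry_latent: torch.Tensor,
                appearance_feats: torch.Tensor) -> torch.Tensor:
        # geometry_latent: (B, N, C) tokens of the geometry-edited representation
        # appearance_feats: (B, M, C) tokens from the reference network
        q = self.to_q(geometry_latent)
        k = self.to_k(appearance_feats)
        v = self.to_v(appearance_feats)
        attn = F.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        # Each spatial location of the edited object retrieves the
        # reference appearance features that best match its geometry.
        retrieved = attn @ v
        # Residual injection keeps the edited geometry while adding
        # fine-grained appearance detail.
        return geometry_latent + retrieved

# Toy usage: 64 latent tokens attend over 256 reference tokens.
x = torch.randn(1, 64, 320)      # geometry-edited latent tokens
ref = torch.randn(1, 256, 320)   # reference appearance tokens
out = DenseCrossAttentionRetrieval(320)(x, ref)
print(out.shape)  # torch.Size([1, 64, 320])
```

The key design point this sketch illustrates is the disentanglement: geometry is fixed by the query stream, and appearance is supplied only through the retrieved values, so editing one does not corrupt the other.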
Community
This is an automated message from the Librarian Bot: the following papers, recommended by the Semantic Scholar API, are similar to this paper.
- Structure-Preserving Zero-Shot Image Editing via Stage-Wise Latent Injection in Diffusion Models (2025)
- MDE-Edit: Masked Dual-Editing for Multi-Object Image Editing via Diffusion Models (2025)
- CompleteMe: Reference-based Human Image Completion (2025)
- RelationAdapter: Learning and Transferring Visual Relation with Diffusion Transformers (2025)
- IMAGHarmony: Controllable Image Editing with Consistent Object Quantity and Layout (2025)
- HF-VTON: High-Fidelity Virtual Try-On via Consistent Geometric and Semantic Alignment (2025)
- Learning Joint ID-Textual Representation for ID-Preserving Image Synthesis (2025)