World Modeling Makes a Better Planner: Dual Preference Optimization for Embodied Task Planning
Abstract
Recent advances in large vision-language models (LVLMs) have shown promise for embodied task planning, yet existing models struggle with fundamental challenges such as respecting dependency constraints and planning efficiently. Prior approaches either optimize action selection alone or invoke world models only at inference time, overlooking the benefit of learning to model the world as a way to strengthen planning itself. We propose Dual Preference Optimization (D^2PO), a learning framework that jointly optimizes state prediction and action selection through preference learning, enabling LVLMs to internalize environment dynamics for better planning. To collect trajectories and stepwise preference data automatically, without human annotation, we introduce a tree search mechanism that explores extensively via trial-and-error. Extensive experiments on VoTa-Bench demonstrate that our D^2PO-based method significantly outperforms existing methods and GPT-4o when applied to Qwen2-VL (7B), LLaVA-1.6 (7B), and LLaMA-3.2 (11B), achieving higher task success rates with more efficient execution paths.
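For readers who want a concrete picture, here is a minimal sketch of what a dual preference objective of this kind could look like. It assumes a standard DPO-style loss applied to two preference pairs per step (a chosen vs. rejected action, and a chosen vs. rejected predicted next state); the tensor names, `beta`, and the weighting `lam` are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a dual preference objective: a DPO-style term over action
# selection plus a second term over state prediction (world modeling).
# The pair structure and hyperparameters here are assumptions for
# illustration, not the paper's exact loss.
import torch
import torch.nn.functional as F

def dpo_term(logp_w, logp_l, ref_w, ref_l, beta=0.1):
    """Standard DPO loss for one (chosen, rejected) preference pair."""
    logits = beta * ((logp_w - ref_w) - (logp_l - ref_l))
    return -F.logsigmoid(logits)

def dual_preference_loss(batch, beta=0.1, lam=1.0):
    """Combine action-selection and state-prediction preference terms.

    `batch` is assumed to carry policy and reference-model log-probs for
    the preferred (w) vs. dispreferred (l) action and predicted next
    state; `lam` weights the world-modeling term.
    """
    action_loss = dpo_term(batch["act_logp_w"], batch["act_logp_l"],
                           batch["act_ref_w"], batch["act_ref_l"], beta)
    state_loss = dpo_term(batch["state_logp_w"], batch["state_logp_l"],
                          batch["state_ref_w"], batch["state_ref_l"], beta)
    return (action_loss + lam * state_loss).mean()
```

The key design point the abstract emphasizes is the second term: the model is rewarded not only for preferring better actions but also for preferring correct predictions of what those actions do to the environment.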
Community
📣 World Modeling Makes a Better Planner: Dual Preference Optimization for Embodied Task Planning
🤔 Current LVLMs struggle with grounding in embodied environments: how can we make AI agents understand the physical world the way humans do?
🔑 Key insight: When agents perform tasks, they need both "WHAT to do" and a mental model of "WHAT WILL HAPPEN after each action"! This internal #WorldModel is fundamental to human planning capabilities 🧠 #CognitiveAI
🌲 We also introduce a tree search mechanism to automatically collect trajectories through trial-and-error, eliminating human annotation while gathering diverse interaction experiences! This greatly improves data efficiency
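As a rough illustration of the trial-and-error collection idea, the sketch below branches the environment at each step and pairs the best- and worst-scoring sibling actions as stepwise preference data. The environment API (`env.clone`, `env.step`), the `policy.sample_actions` helper, and the reward-based scoring are all placeholder assumptions, not the paper's actual expansion or scoring rules.

```python
# Hedged sketch of collecting stepwise preference pairs via tree search.
# All object interfaces below are hypothetical placeholders.
def collect_step_preferences(env, policy, depth=5, branch=3):
    """At each step, try several candidate actions in branched copies of
    the environment and record the best vs. worst as a preference pair."""
    prefs = []
    state = env.reset()
    for _ in range(depth):
        candidates = []
        for action in policy.sample_actions(state, k=branch):
            child = env.clone()                  # branch the simulator state
            next_state, reward, done = child.step(action)
            candidates.append((reward, action, next_state, done))
        candidates.sort(key=lambda c: c[0], reverse=True)
        best, worst = candidates[0], candidates[-1]
        if best[0] > worst[0]:                   # keep only informative pairs
            prefs.append({"state": state,
                          "chosen": best[1], "rejected": worst[1],
                          "chosen_next": best[2], "rejected_next": worst[2]})
        if best[3]:                              # task finished on best branch
            break
        state = best[2]                          # continue along the best action
    return prefs
```

Because each pair also records the next states, the same exploration pass can supply preference data for both the action-selection and state-prediction objectives.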
Great work. When will the preference data and code be open-sourced?
This is an automated message from the Librarian Bot. The following papers, similar to this one, were recommended by the Semantic Scholar API:
- MPO: Boosting LLM Agents with Meta Plan Optimization (2025)
- Structured Preference Optimization for Vision-Language Long-Horizon Task Planning (2025)
- VEM: Environment-Free Exploration for Training GUI Agent with Value Environment Model (2025)
- Scaling Autonomous Agents via Automatic Reward Modeling And Planning (2025)
- QLASS: Boosting Language Agent Inference via Q-Guided Stepwise Search (2025)
- EMMOE: A Comprehensive Benchmark for Embodied Mobile Manipulation in Open Environments (2025)
- ATLaS: Agent Tuning via Learning Critical Steps (2025)