Beyond Solving Math Quiz: Evaluating the Ability of Large Reasoning Models to Ask for Information
Abstract
Systematic evaluation of Large Reasoning Models on incomplete problems reveals their inability to proactively seek information, highlighting issues like overthinking and hallucination, and the challenges of supervised fine-tuning for developing genuine intelligence.
Large Reasoning Models (LRMs) have demonstrated remarkable problem-solving abilities in mathematics, as evaluated by existing benchmarks exclusively on well-defined problems. However, this evaluation setup constitutes a critical gap: a genuinely intelligent agent should not only solve problems (as a math quiz solver) but also be able to ask for information when a problem lacks sufficient information, enabling proactivity in responding to users' requests. To bridge this gap, we propose a new dataset consisting of two types of incomplete problems with diverse contexts. Based on this dataset, our systematic evaluation of LRMs reveals their inability to proactively ask for information. In addition, we uncover behaviors related to overthinking and hallucination in LRMs, and highlight the potential and challenges of supervised fine-tuning for learning this ability. We hope to provide new insights into developing LRMs with genuine intelligence, rather than mere problem solving.
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Teaching Language Models To Gather Information Proactively (2025)
- ReliableMath: Benchmark of Reliable Mathematical Reasoning on Large Language Models (2025)
- Does Math Reasoning Improve General LLM Capabilities? Understanding Transferability of LLM Reasoning (2025)
- The Challenge of Teaching Reasoning to LLMs Without RL or Distillation (2025)
- Does Learning Mathematical Problem-Solving Generalize to Broader Reasoning? (2025)
- Learning Deliberately, Acting Intuitively: Unlocking Test-Time Reasoning in Multimodal LLMs (2025)
- Libra: Assessing and Improving Reward Model by Learning to Think (2025)