arxiv:2508.11252

Beyond Solving Math Quiz: Evaluating the Ability of Large Reasoning Models to Ask for Information

Published on Aug 15 · Submitted by YouchengHuang on Aug 19
AI-generated summary

Systematic evaluation of Large Reasoning Models on incomplete problems reveals their inability to proactively seek information, highlighting issues like overthinking and hallucination, and the challenges of supervised fine-tuning for developing genuine intelligence.

Abstract

Large Reasoning Models (LRMs) have demonstrated remarkable problem-solving abilities in mathematics, as evaluated by existing benchmarks exclusively on well-defined problems. However, this evaluation setup leaves a critical gap: a genuinely intelligent agent should not only solve problems (as a math quiz solver) but also be able to ask for information when a problem lacks sufficient information, enabling proactivity in responding to users' requests. To bridge this gap, we propose a new dataset consisting of two types of incomplete problems with diverse contexts. Based on the dataset, our systematic evaluation of LRMs reveals their inability to proactively ask for information. In addition, we uncover behaviors related to overthinking and hallucination in LRMs, and highlight the potential and challenges of supervised fine-tuning in learning this ability. We hope to provide new insights into developing LRMs with genuine intelligence, rather than models that merely solve problems.
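
To make the evaluation idea concrete, here is a minimal Python sketch of the kind of probe the abstract describes: give a model a deliberately under-specified math problem and check whether it asks for the missing information instead of guessing. The toy problem, the `asks_for_information` heuristic, and the `model_fn` wrapper are all hypothetical stand-ins for illustration; they are not the paper's dataset or evaluation protocol.

```python
# Hypothetical sketch (not the paper's released code): probe whether a model,
# given an under-specified math problem, asks for the missing information
# rather than hallucinating a value and answering anyway.

import re

# Toy incomplete problem: the speed is omitted, so a well-behaved model
# should ask for it instead of assuming one.
INCOMPLETE_PROBLEM = "A train travels for 3 hours. How far does it go?"

# Crude cues that a response is requesting the missing datum.
CLARIFICATION_CUES = [
    r"\bwhat is the speed\b",
    r"\bcould you (provide|tell me)\b",
    r"\bneed more information\b",
    r"\bmissing\b",
    r"\?",  # fallback: any question directed back at the user
]


def asks_for_information(response: str) -> bool:
    """Heuristically detect whether a response asks for missing information."""
    text = response.lower()
    return any(re.search(pattern, text) for pattern in CLARIFICATION_CUES)


def evaluate(model_fn, problems):
    """Return the fraction of incomplete problems where the model asks back.

    `model_fn` is any callable mapping a prompt string to a response string,
    e.g. a thin wrapper around an LRM API (assumed, not prescribed here).
    """
    asked = sum(asks_for_information(model_fn(p)) for p in problems)
    return asked / len(problems)


if __name__ == "__main__":
    # Stand-in "model" that guesses instead of asking; it should score 0.0.
    overconfident_model = lambda prompt: "Assuming 60 km/h, the train goes 180 km."
    print(evaluate(overconfident_model, [INCOMPLETE_PROBLEM]))
```

In practice, a string-matching heuristic like this would be far too brittle for a real benchmark; it only illustrates the distinction the abstract draws between solving a quiz and proactively seeking information.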



Models citing this paper 0

No model linking this paper


Datasets citing this paper 0

No dataset linking this paper


Spaces citing this paper 0

No Space linking this paper


Collections including this paper 1