Metacognitive Behavioral Tuning of Large Language Models for Multi-Hop Question Answering

arXiv:2602.22508v2 Announce Type: replace

Abstract: Large Language Models (LLMs) often produce incorrect answers on multi-hop question answering even when the reasoning trace already contains a correct intermediate conclusion. We attribute this gap to weak self-regulation rather than insufficient reasoning capacity: without explicit regulation, valid intermediate conclusions are overridden by continued exploration or go unrecognized as logically sufficient. We propose Metacognitive Behavioral Tuning (MBT), a post-training framework that injects a five-phase metacognitive structure into reasoning traces. The five phases are understanding and filtering, planning, execution and monitoring, self-correction, and verification. MBT has two formulations: MBT-S synthesizes new metacognitive traces from scratch, while MBT-R rewrites the student's own traces into metacognitive form. Across HotpotQA, MuSiQue, and 2WikiMultiHopQA, MBT attains the highest Accuracy-Efficiency Score (AES) across model scales. MBT lifts task accuracy while keeping traces short and stable, with mean response length on MuSiQue an order of magnitude shorter than baseline methods and degeneration counts reduced by a similar margin. A matched-control study further confirms that the gain stems from the five-phase structural prior itself. To qualitatively assess the regulatory behavior of reasoning traces, we introduce two new metrics: the Reach-Redundancy Profile (RRP) and the length-aware Metacognitive Quality Index (MQI). RRP captures when the answer is reached and how much of the trace is redundant, and MQI quantifies how richly the five phases appear. Under both metrics, MBT achieves the earliest answer arrival, the lowest redundancy, and the richest phase-level behavior across model scales.
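The abstract describes RRP only qualitatively (when the answer is reached, and how much of the trace comes after it). As a minimal sketch of that idea, the toy function below tags each trace step with one of the five phase names from the abstract and reports the first step containing the final answer plus the fraction of redundant trailing steps. The phase labels, trace representation, and reach/redundancy computation are all illustrative assumptions, not the authors' published metric.

```python
# Hypothetical illustration of a Reach-Redundancy-style measure.
# Phase names are taken from the abstract; everything else is assumed.

PHASES = [
    "understanding_filtering",
    "planning",
    "execution_monitoring",
    "self_correction",
    "verification",
]

def reach_redundancy(trace, answer):
    """Return (reach, redundancy): the index of the first step whose
    text contains the final answer, and the fraction of steps after it.
    If the answer never appears, reach defaults to the last step."""
    reach = next(
        (i for i, (_phase, text) in enumerate(trace) if answer in text),
        len(trace) - 1,
    )
    redundancy = (len(trace) - 1 - reach) / len(trace)
    return reach, redundancy

# Toy two-hop trace: the answer first appears at step 1, so the
# remaining two steps count as redundant continuation.
trace = [
    ("planning", "decompose the question into two hops"),
    ("execution_monitoring", "hop 1 resolves the entity to Paris"),
    ("execution_monitoring", "hop 2 confirms Paris"),
    ("verification", "final answer: Paris"),
]
print(reach_redundancy(trace, "Paris"))  # -> (1, 0.5)
```

A lower redundancy under this kind of profile corresponds to the abstract's claim that MBT reaches the answer early and spends little of the trace after that point.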
