Take Out Your Calculators: Estimating the Real Difficulty of Question Items with LLM Student Simulations

arXiv:2601.09953v2 Announce Type: replace

Abstract: Standardized math assessments require expensive human pilot studies to establish the difficulty of test items. We investigate the predictive value of open-source large language models (LLMs) for evaluating the difficulty of multiple-choice math questions for real-world students. We show that, while LLMs are poor direct judges of problem difficulty, simulation-based approaches with LLMs yield promising results under the right conditions. Under the proposed approach, we simulate a "classroom" of 4th, 8th, or 12th-grade students by prompting the LLM to role-play students of varying proficiency levels. We use the outcomes of these simulations to fit Item Response Theory (IRT) models, comparing the learned item difficulty parameters to the items' real-world difficulties, as determined by item-level statistics furnished by the National Assessment of Educational Progress (NAEP). We observe correlations with item-level correctness rates as high as 0.75, 0.76, and 0.82 for grades 4, 8, and 12, respectively. In our simulations, we experiment on math MCQs with different "classroom sizes," showing tradeoffs between computational cost and accuracy. We find that role-playing students with diverse names improves predictions compared to using generic student IDs, and that stratifying names across gender and race improves predictions further. Our results show that LLMs with relatively weaker mathematical abilities (Gemma) actually yield better real-world difficulty predictions than mathematically stronger models (Llama and Qwen), further underscoring the suitability of these models for the task.
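The core of the pipeline described above is fitting an IRT model to the binary correctness outcomes of the simulated classroom and then correlating the learned item difficulties with real-world correctness rates. The sketch below illustrates that step under stated assumptions: the simulated responses are already available as a students-by-items 0/1 matrix, the real-world statistics are NAEP-style per-item proportions correct, and a Rasch (1PL) parameterization is used as a simple stand-in for whatever IRT variant the paper actually fits. The random data and variable names are placeholders, not the authors' code or results.

```python
# Minimal sketch (assumptions noted above): fit a Rasch (1PL) IRT model to a
# binary response matrix from simulated "classroom" runs, then correlate the
# learned item difficulties with real-world item correctness rates.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_students, n_items = 60, 20  # hypothetical classroom size and item count

# Stand-ins: `responses` would come from LLM student role-plays,
# `real_p_values` from NAEP item-level statistics.
responses = rng.integers(0, 2, size=(n_students, n_items))
real_p_values = rng.uniform(0.2, 0.9, size=n_items)

def neg_log_likelihood(params):
    """Joint ML objective for the Rasch model: P(correct) = sigmoid(theta_s - b_i)."""
    theta = params[:n_students]            # student abilities
    b = params[n_students:]                # item difficulties
    logits = theta[:, None] - b[None, :]
    log_p = -np.logaddexp(0.0, -logits)    # log sigmoid(logits)
    log_q = -np.logaddexp(0.0, logits)     # log (1 - sigmoid(logits))
    return -(responses * log_p + (1 - responses) * log_q).sum()

init = np.zeros(n_students + n_items)
fit = minimize(neg_log_likelihood, init, method="L-BFGS-B")
item_difficulty = fit.x[n_students:]

# Higher learned difficulty should correspond to a lower real-world correctness
# rate, so a strong negative correlation indicates useful difficulty predictions.
r, p = pearsonr(item_difficulty, real_p_values)
print(f"Pearson r between learned difficulty and real p-values: {r:.2f} (p={p:.3f})")
```

On random stand-in data the correlation will be near zero; with actual simulated classroom outcomes, the sign and magnitude of this correlation is the quantity the abstract reports (up to 0.75 to 0.82 across grades).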
