SWE-ContextBench: A Benchmark for Context Learning in Coding

arXiv:2602.08316v2 Announce Type: replace-cross

Abstract: Large language models are increasingly used as programming agents for repository-level software engineering tasks. While recent benchmarks evaluate correctness in realistic codebases, they largely treat tasks as independent and do not assess whether agents can reuse prior experience or context across related problems. As a result, the ability of agents to accumulate, retrieve, and apply prior experience, as well as the efficiency gains from such reuse, remains difficult to measure. We introduce SWE-ContextBench, a benchmark designed to explicitly evaluate context reuse in programming agents. Built on SWE-Bench Lite, SWE-Bench Multilingual, and SWE-Bench Verified, SWE-ContextBench consists of 1,100 base tasks with 376 related tasks derived from real dependency and reference relationships among GitHub issues and pull requests. SWE-ContextBench groups base tasks and related tasks with shared context across 51 unique repositories and 9 programming languages. The benchmark evaluates agents along three complementary dimensions: prediction accuracy, time efficiency, and cost efficiency. Using SWE-ContextBench, we study multiple context-reuse settings, including oracle-guided and autonomous retrieval, as well as full execution trajectories and compact summaries. Our results show that correctly selected summarized context improves resolution accuracy and substantially reduces runtime and token cost, particularly on harder tasks. In contrast, unfiltered or incorrectly selected context provides limited or negative benefit. These findings highlight the importance of context representation and retrieval quality, and position SWE-ContextBench as a principled benchmark for studying context reuse in programming agents.
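The abstract does not give the scoring formulas, but the three evaluation dimensions it names can be sketched as simple per-task aggregates. The names below (`TaskResult`, `evaluate`) are illustrative assumptions, not the benchmark's actual API:

```python
from dataclasses import dataclass

@dataclass
class TaskResult:
    resolved: bool       # did the agent's patch resolve the issue?
    wall_time_s: float   # agent runtime on the task, in seconds
    tokens: int          # total prompt + completion tokens spent

def evaluate(results: list[TaskResult]) -> dict[str, float]:
    """Aggregate the three dimensions (accuracy, time, cost) over a task set."""
    n = len(results)
    return {
        "accuracy":   sum(r.resolved for r in results) / n,
        "avg_time_s": sum(r.wall_time_s for r in results) / n,
        "avg_tokens": sum(r.tokens for r in results) / n,
    }
```

Under this reading, a context-reuse setting "wins" when it raises `accuracy` while lowering `avg_time_s` and `avg_tokens` relative to running each related task from scratch.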
