Test-Time Compute for Dense Retrieval: Agentic Program Generation with Frozen Embedding Models

arXiv:2605.11374v2 Announce Type: replace Abstract: Test-time compute is widely believed to benefit only large reasoning models. We show it also helps small embedding models. Since modern embedding models are distilled from LLM backbones, a frozen encoder should benefit from extra inference compute without retraining. Using an agentic program-search loop, we explore 259 candidate inference programs over a frozen embedding API across ninety generations. The entire Pareto frontier collapses onto a single algebra: a softmax-weighted centroid of the local top-K documents, interpolated with the query. This default introduces no trainable parameters yet delivers statistically significant nDCG@10 gains across seven embedding-model families spanning a tenfold parameter range, with held-out full-BEIR validation confirming the lift on every model tested.
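The discovered algebra can be sketched in a few lines: retrieve the top-K documents for a query, form a softmax-weighted centroid of their embeddings, and interpolate that centroid with the original query vector before re-scoring. The abstract does not specify the temperature, interpolation weight, or K, so `tau`, `alpha`, and `k` below are illustrative hyperparameters, not the paper's values; `refine_query` is a hypothetical helper name.

```python
import numpy as np

def refine_query(q, doc_embs, k=10, tau=0.1, alpha=0.5):
    """Softmax-weighted top-K centroid interpolated with the query.

    q        : (d,) unit-normalized query embedding
    doc_embs : (n, d) unit-normalized corpus embeddings
    k, tau, alpha : illustrative hyperparameters (not from the paper)
    """
    sims = doc_embs @ q                      # cosine similarity to every doc
    topk = np.argsort(sims)[-k:]             # indices of the local top-K docs
    w = np.exp(sims[topk] / tau)             # softmax weights over top-K scores
    w /= w.sum()
    centroid = w @ doc_embs[topk]            # softmax-weighted centroid
    refined = alpha * q + (1 - alpha) * centroid  # interpolate with the query
    return refined / np.linalg.norm(refined)     # back to the unit sphere
```

The refined vector is then used for a second retrieval pass over the same frozen index, which is where the extra inference compute is spent; no model weights change.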
