Cost-sensitive retraining via posterior learning debt

arXiv:2604.06438v2 (replace-cross)

Abstract: Deployed prediction systems are often retrained on fixed calendars, even when model staleness and retraining burden vary over time. This short communication formulates retraining for Bayesian prediction systems as a cost-sensitive predictive-regret decision. The central monitoring state is posterior learning debt, defined as the Kullback--Leibler divergence from a reference shadow posterior to the deployed frozen posterior. In the decision layer, a retraining cost is compared with the expected one-period predictive regret of waiting. A continuous-severity version retrains when calibrated expected regret exceeds the retraining cost, while the familiar two-state excess-loss rule is a special case. The empirical study is an exact-state proof-of-concept in a synthetic conjugate simulation with warm-started deployed and shadow normal-inverse-gamma posteriors, separate update, monitoring, and evaluation batches, lagged deployment actions, expanded baseline grids, and score-unit sensitivity. Under the primary 75th-percentile score-unit scaling, an age-adjusted debt-threshold policy improves on tuned calendar retraining in all 72 non-stable scenario cells and on tuned CUSUM in 58 of 72 cells, with mean relative objectives 0.677 and 0.975, respectively. Debt-utility and hybrid-utility policies also improve strongly over tuned calendar retraining, but they do not dominate tuned CUSUM. Median and mean score-unit sensitivities show the same main calendar result, while the CUSUM comparison remains policy-dependent. The contribution is a transparent decision layer for deployed Bayesian prediction systems, not a universal replacement for drift detection.
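To make the decision layer concrete, the sketch below illustrates the two ingredients the abstract names: posterior learning debt as a KL divergence from the shadow posterior to the deployed frozen posterior, and the continuous-severity rule that retrains when calibrated expected regret exceeds the retraining cost. It is a minimal illustration, not the paper's implementation: it uses univariate Gaussian posteriors in place of the paper's normal-inverse-gamma posteriors, and the calibration constant `regret_per_nat` mapping debt to expected one-period regret is a hypothetical stand-in for the paper's calibration procedure.

```python
import numpy as np


def kl_normal(mu_shadow, var_shadow, mu_deployed, var_deployed):
    """KL(shadow || deployed) for univariate Gaussian posteriors.

    Illustrative stand-in for the posterior learning debt; the paper
    uses normal-inverse-gamma posteriors, for which a closed-form KL
    would replace this expression.
    """
    return 0.5 * (
        np.log(var_deployed / var_shadow)
        + (var_shadow + (mu_shadow - mu_deployed) ** 2) / var_deployed
        - 1.0
    )


def retrain_decision(debt, retrain_cost, regret_per_nat):
    """Continuous-severity rule: retrain when the calibrated expected
    one-period predictive regret of waiting exceeds the retraining cost.

    `regret_per_nat` is a hypothetical calibration constant mapping debt
    (in nats) to expected regret in score units.
    """
    expected_regret = regret_per_nat * debt
    return expected_regret > retrain_cost


# Example: the shadow posterior has drifted away from the frozen deployed one.
debt = kl_normal(mu_shadow=0.8, var_shadow=0.04, mu_deployed=0.0, var_deployed=0.05)
print(debt, retrain_decision(debt, retrain_cost=1.0, regret_per_nat=0.5))
```

Under this reading, the familiar two-state excess-loss rule corresponds to thresholding the same debt statistic at a single severity level rather than scaling it continuously into expected regret.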
