ZKBoost: Zero-Knowledge Verifiable Training for XGBoost
arXiv:2602.04113v3 Announce Type: replace-cross
Abstract: Gradient boosted decision trees, particularly XGBoost, are among the most effective methods for tabular data. As deployment in sensitive settings increases, cryptographic guarantees of model integrity become essential. We present ZKBoost, the first zero-knowledge proof of training (zkPoT) protocol for XGBoost, enabling model owners to prove correct training on a committed dataset without revealing the data or the model parameters. Naively re-executing XGBoost training in ZK would incur prohibitive costs, primarily because the partitioning of training samples must be performed obliviously: the tree splits are secret, so the prover cannot reveal which branch each sample takes. Moreover, previous work on ZK proofs of training and inference suffered from subtle security issues, such as leakage of the tree topology and soundness gaps that allowed cheating model providers to deviate from the correct execution of training and inference. We make two key contributions to address these challenges: (1) a generic zkPoT template for XGBoost that can be instantiated with any general-purpose ZKP backend, significantly reducing prover costs compared to naive re-execution of the training process; and (2) a VOLE-based instantiation that overcomes the security issues of previous ZK proofs of training at minimal cost. To maximize efficiency, we develop a fixed-point version of XGBoost that is particularly well suited to efficient ZKP instantiation, and we show that it matches standard XGBoost accuracy to within 1% on real-world datasets.
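
To make the fixed-point idea concrete, here is a minimal sketch (not the paper's implementation) of XGBoost's split-gain formula evaluated entirely in scaled-integer arithmetic, the kind of computation that maps naturally onto ZKP circuits over finite fields. The scale factor F, the helper names, and the example values are illustrative assumptions.

# Illustrative sketch only; the authors' fixed-point scheme may differ.
F = 16  # fractional bits: a real value x is encoded as round(x * 2**F)

def to_fixed(x: float) -> int:
    """Encode a real value as a scaled integer with F fractional bits."""
    return round(x * (1 << F))

def split_gain(GL: int, HL: int, GR: int, HR: int, lam: int, gamma: int) -> int:
    """XGBoost split gain 0.5*[GL^2/(HL+lam) + GR^2/(HR+lam)
    - (GL+GR)^2/(HL+HR+lam)] - gamma, evaluated in fixed point."""
    def term(G: int, H: int) -> int:
        # G*G carries 2F fractional bits; dividing by (H + lam),
        # which carries F bits, leaves F bits -- no re-scaling needed.
        return (G * G) // (H + lam)
    gain = term(GL, HL) + term(GR, HR) - term(GL + GR, HL + HR)
    return gain // 2 - gamma

# Gradient/Hessian sums accumulated on the two sides of a candidate split.
GL, HL = to_fixed(3.2), to_fixed(5.0)
GR, HR = to_fixed(-1.1), to_fixed(4.5)
print(split_gain(GL, HL, GR, HR, to_fixed(1.0), to_fixed(0.0)))

Keeping every intermediate value an integer avoids emulating floating point inside the proof system; the rounding introduced by truncated division is a plausible source of the small accuracy gap relative to standard XGBoost.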