PACZero: PAC-Private Fine-Tuning of Language Models via Sign Quantization
arXiv:2605.06505v1 Announce Type: cross
Abstract: We introduce PACZero, a family of PAC-private zeroth-order mechanisms for fine-tuning large language models that delivers usable utility even at $I(S^*; Y_{1:T})=0$. This privacy regime bounds the membershi…
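The abstract names a zeroth-order mechanism with sign quantization but does not spell out the update rule. As a minimal sketch only, here is a generic SPSA-style two-point zeroth-order step whose released scalar is quantized to its sign; `zo_sign_step`, its parameters, and the Rademacher probe are illustrative assumptions, not the paper's actual PACZero mechanism.

```python
import numpy as np

def zo_sign_step(params, loss_fn, lr=1e-2, eps=1e-3, rng=None):
    """Hypothetical sketch of a sign-quantized zeroth-order update.

    Two-point (SPSA-style) finite-difference estimate along a random
    Rademacher direction; only the SIGN of the projected derivative is
    used, so the per-step data-dependent release is a single bit.
    This is an assumed illustration, not the paper's exact mechanism.
    """
    rng = rng or np.random.default_rng(0)
    z = rng.choice([-1.0, 1.0], size=params.shape)  # random probe direction
    # Directional derivative estimate from two loss evaluations.
    g = (loss_fn(params + eps * z) - loss_fn(params - eps * z)) / (2 * eps)
    # Sign quantization: discard magnitude, keep only the sign.
    return params - lr * np.sign(g) * z

# Toy usage: minimize a quadratic with the sketched update.
p = np.ones(5)
quadratic = lambda x: float(x @ x)
rng = np.random.default_rng(42)
for _ in range(50):
    p = zo_sign_step(p, quadratic, rng=rng)
```

On this toy quadratic the loss decreases steadily even though each step reveals only one bit of gradient information per direction, which is the intuition behind pairing zeroth-order optimization with aggressive quantization for privacy.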