cs.LG

MixLLM: LLM Quantization with Global Mixed-precision between Output-features and Highly-efficient System Design

arXiv:2412.14590v2 Announce Type: replace
Abstract: Quantization has become one of the most effective methodologies for compressing LLMs to a smaller size. However, existing quantization solutions still suffer from either non-negligible accura…
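The abstract is truncated, but the title describes mixed-precision quantization applied globally across output features. As an illustrative sketch only (not MixLLM's actual algorithm), the idea of keeping a salient fraction of output features at higher bit-width while quantizing the rest more aggressively can look like this; the salience metric, bit-widths, and `high_frac` split are all assumptions:

```python
import numpy as np

def quantize_symmetric(x, bits):
    # Symmetric uniform quantization of a 1-D weight vector,
    # returning the dequantized (reconstructed) values.
    qmax = 2 ** (bits - 1) - 1
    max_abs = np.max(np.abs(x))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale

def mixed_precision_by_output_feature(W, salience, high_frac=0.1,
                                      high_bits=8, low_bits=4):
    # W: (out_features, in_features) weight matrix.
    # Keep the top `high_frac` most salient output features at `high_bits`;
    # quantize the remaining rows at `low_bits`. Purely illustrative.
    n_high = max(1, int(high_frac * W.shape[0]))
    high_idx = set(np.argsort(salience)[-n_high:].tolist())
    W_q = np.empty_like(W, dtype=np.float32)
    for i in range(W.shape[0]):
        bits = high_bits if i in high_idx else low_bits
        W_q[i] = quantize_symmetric(W[i], bits)
    return W_q
```

A global (rather than per-layer) ranking of output-feature salience would let the bit budget concentrate on the features that matter most across the whole model, which is the intuition the title suggests.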