ubergarm/Kimi-K2.6-GGUF Q4_X now available
Big thanks to jukofyork and AesSedai for the tips today on patching and quantizing the "full size" Kimi-K2.6 "Q4_X". It runs on both ik_llama.cpp and mainline llama.cpp if you have over ~584GB of combined RAM+VRAM. I'll follow up with imatr…