Hi - I'm new to local LLMs and was reading about turboquant and rotoquant. I have a locally compiled llama.cpp that is not RQ- or TQ-ready. My aim is to run the most accurate Qwen3.6 model I can on my 5060 Ti and 64 GB of RAM. If I understand correctly, the new quant methods will help a lot, but it seems it's all very experimental at the moment...
Is there a llama.cpp build that is up to date enough to use them? I also found this https://huggingface.co/YTan2000/Qwen3.6-35B-A3B-TQ3_4S but I'm not sure how to get it to work...