While reading through papers and news today I came across this post/blog claiming a major architectural breakthrough: a 12M-token context window, better than Opus, Gemini, and other models, at less than 5% of the cost, and token processing 52x faster than FlashAttention. Yep, you read that number right: fifty-two times. At that point I instantly called BS and was ready to move on, tbh. There is zero code, paper, API, or anything else to test it or reproduce it.
So I was thinking, maybe there's a slight chance I'm a complete idiot and this is somehow the next "attention is all you need" moment. What do you guys think? I'm calling BS, tbh.