LocalLLaMA

When are we getting consumer inference chips?

Dumb question, but I genuinely don't get it. Billions of dollars poured into AI startups over the last few years, and nobody has shipped a consumer chip with a model built in? Like a $200 stick that runs Llama 3 at reading speed, 30W, plug into your desktop, do…
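
For what it's worth, the napkin math on the "$200 stick" idea isn't crazy: decode is mostly memory-bandwidth-bound, so the real question is how much DRAM bandwidth "reading speed" needs. A rough sketch (all numbers below are assumptions, not measured figures):

```python
# Back-of-envelope feasibility check, not a benchmark.
# Assumptions: Llama 3 8B, ~4-bit quantization, and the usual
# approximation that each decoded token streams the full weight
# set from memory once.

PARAMS_B = 8.0          # assumed model size: Llama 3 8B
BYTES_PER_PARAM = 0.5   # assumed ~4-bit (Q4-ish) quantization
READING_SPEED_TPS = 8   # assumed "reading speed": ~5-10 tokens/sec

weights_gb = PARAMS_B * BYTES_PER_PARAM            # ~4 GB of weights
bandwidth_gbps = weights_gb * READING_SPEED_TPS    # GB/s needed at decode

print(f"Weights: ~{weights_gb:.1f} GB")
print(f"Bandwidth for {READING_SPEED_TPS} tok/s: ~{bandwidth_gbps:.0f} GB/s")
# -> ~32 GB/s, i.e. roughly what a 64-bit LPDDR5 bus on a
#    low-power SoC can deliver, so the spec isn't physically absurd.
```

Under those assumptions you land around 32 GB/s, which a modest mobile-class memory bus can supply, so the bottleneck looks more like economics (fitting ~4 GB of fast memory plus a die into a $200 BOM) than physics.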