anyone else notice labs are getting more secretive about their best models?

something shifted recently and i can't stop thinking about it. the trend used to be: new model drops, blog post goes up, everyone gets access on day one. now it feels like the most capable stuff is quietly going behind walls with "restricted access" or gated research programs, while the public-facing releases are... fine, but clearly not the frontier.

google dropped gemma 4 open-weight and it's genuinely good — MoE architecture, strong reasoning, apache license. meta's doing multimodal reasoning stuff that's impressive. but then you look at what anthropic and openai have cooking and it's like, you can tell there's a tier above what you're using, you just can't touch it.

i get why from a safety standpoint. some of this is clearly designed around defensive security applications where you don't want the capability publicly exposed. but it also creates this weird situation where the benchmarks being reported don't reflect what's actually available to most developers.

curious if others are feeling this gap widen. like, are you building on the assumption that what you have access to now is roughly representative of what exists? or are you factoring in that there's probably a ceiling you haven't seen yet?

also kind of wondering if the open-weight push from labs like zhipu and google is partly a counterplay to this: a way to keep the ecosystem from consolidating around one or two gated gatekeepers.

submitted by /u/HarrisonAIx