I think most people are using AI completely wrong.
Right now everyone is using AI to generate infinite garbage:
infinite blogs
infinite tweets
infinite SEO spam
So this weekend I tried building something different.
Instead of using AI as a content generator, I used it as a research moderation system.
I built an automated pipeline for my Institute for AI Economics website that:
scans real research sources every week
pulls papers/articles from arXiv, Stanford HAI, OECD, BIS, etc.
compares themes across sources
ranks strategic relevance
generates disagreements between experts
extracts core mental models
generates deep understanding questions
auto-publishes the briefing archive
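The pipeline steps above could be sketched roughly like this. Everything here is a hypothetical stand-in: the item structure, the stopword list, and the heuristic theme tagger (the real system presumably calls an LLM where these stubs use word counts), but it shows the shape of "pull from many sources, compare themes, rank by relevance":

```python
# Hypothetical sketch of the weekly briefing pipeline (not the real code).
from dataclasses import dataclass, field

@dataclass
class Item:
    source: str            # e.g. "arXiv", "Stanford HAI", "OECD", "BIS"
    title: str
    abstract: str
    themes: set = field(default_factory=set)

STOPWORDS = {"the", "a", "of", "and", "in", "on", "for", "to", "ai"}

def extract_themes(item: Item, top_k: int = 3) -> Item:
    # Stand-in for an LLM theme tagger: most frequent non-stopword tokens.
    words = [w.strip(".,:;()").lower() for w in item.abstract.split()]
    counts = {}
    for w in words:
        if w and w not in STOPWORDS:
            counts[w] = counts.get(w, 0) + 1
    top = sorted(counts.items(), key=lambda kv: -kv[1])[:top_k]
    item.themes = {w for w, _ in top}
    return item

def cross_source_themes(items):
    # A theme counts as "cross-source" when items from 2+ sources share it.
    seen = {}
    for it in items:
        for t in it.themes:
            seen.setdefault(t, set()).add(it.source)
    return {t for t, sources in seen.items() if len(sources) >= 2}

def rank_by_relevance(items, shared):
    # Crude relevance proxy: how many cross-source themes an item touches.
    return sorted(items, key=lambda it: -len(it.themes & shared))
```

The point of the structure is that ranking falls out of the cross-source comparison, rather than from any single model's opinion.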
I’m starting to think the future role of humans is not “content creator.”
It’s content moderator / synthesizer / judge.
AI can now generate infinite perspectives at near-zero cost.
So the scarce thing becomes:
taste
judgment
synthesis
Basically:
AI generates.
Humans moderate.
And maybe that’s how we fight AI slop.
Not by generating more content, but by building systems that:
compare outputs
challenge outputs
rank outputs
force disagreement
synthesize competing viewpoints
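The "force disagreement" step above could look something like this minimal sketch. The stance labels and the tuple format are assumptions for illustration; a real system would use an LLM to classify each source's position before pairing them up for a human judge:

```python
# Hedged sketch: pair up stances from different sources and surface
# direct conflicts on the same claim (hypothetical stance labels).
from itertools import combinations

def find_disagreements(stances):
    # stances: list of (source, claim, position), position "pro" or "con".
    conflicts = []
    for (s1, c1, p1), (s2, c2, p2) in combinations(stances, 2):
        if c1 == c2 and p1 != p2:
            conflicts.append((c1, s1, s2))
    return conflicts
```

The human's job then starts where this ends: deciding which side of each surfaced conflict holds up.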
That feels way more valuable than asking ChatGPT to write another “10 productivity tips” article.
Curious if others think this is where things are actually headed.
Does AI push humans toward becoming editors/moderators/curators instead of creators?