I’ve noticed recently that when I google something, Gemini will come up with a fairly easy-to-digest little blurb on whatever topic I’m googling. It seemed useful at first, until you google anything divisive. Then it will pull up a similarly easy-to-read result that on the surface sounds like fact, but when you dive into the sources you end up seeing random Reddit comments and websites.
I’m fairly media literate, so I was able to figure this out pretty quickly, but I’m worried about the implications for the broader population. It seems like smooth answers hide not-so-great sources.
It prioritizes source availability over source quality. People might see it and think it’s quality info when in reality it’s sourcing fringe studies and partisan NGOs. Or in worse cases, content specifically engineered to alter results.
It’s not just Gemini, but that’s the most recent example I’ve seen. At first it seemed like benign AI slop, but now, with some of the sources I’m seeing, it looks more like a coordinated astroturfing campaign. Is there anything that can be done to fix this?