[LLM|car]-centric [websites|cities]

There was a recent Hacker News conversation I was part of that I think is relevant here, as a present danger from current-generation LLMs:

Just so long as we don't get something that is to LLMs as car-centric urban design is to cars.

Someone suggests putting all the stuff the average person needs within 15 minutes of the average person's home, and soon after we get a conspiracy theory about 15-minute cities being Soviet control gates you'll need permission to get out of.

LLMs are already capable of inventing their own conspiracy theories, and are already effective persuaders, so if we do get stuck, we're not getting un-stuck.

https://news.ycombinator.com/item?id=47838046

To add a citation for "are already effective persuaders", a meta-analysis of the persuasive power of large language models shows they're about human-level: https://www.nature.com/articles/s41598-025-30783-y

We are already seeing various people, including marketers, promote website design optimised for LLMs over humans; instead of "SEO", it is "Agent Engine Optimisation" and similar neologisms: https://en.wikipedia.org/wiki/Generative_engine_optimization

We already see LLMs engaging in self-defence: https://arxiv.org/html/2509.14260v1

The good news is that people seem to broadly hate GenAI output, at least when they notice that it is GenAI [1]. The downside is that noticing is getting increasingly difficult.

Conspiracy theories about 15-minute cities are easy to find: https://en.wikipedia.org/wiki/15-minute_city#Conspiracy_theories

This is the easily defensible part. The harder-to-defend claim is that an LLM-centric design is bad for us carbon minds. That argument certainly involves a degree of supposition and extrapolation to get beyond trivial annoyances, like how SEO filled recipe websites with large quantities of irrelevant (and possibly fictional) anecdotes before you could reach the recipe itself, or the rather higher risk of an incompetent agentic AI tool deleting all the emails of Meta's alignment director [2].

Like all the other risks from AI, from principal-agent problems to being out-planned to being successfully lied to, this one has a mundane precedent: we've seen low-grade bureaucratic nightmares since the invention of bureaucracy (see Pournelle's iron law of bureaucracy [3]). I'm expecting more of the same with AI that convinces us to keep AI-centric design even against our own interests.

Fully-automated dystopian more of the same.

Right now, this is low-grade harm; but a persuasive AI locking in the use of itself is at best like any useless aristocrat locking in their place within society: I can only see it growing until it breaks something economically. Of course, the counter-point there is "what do you think the economy is when that happens?", because this could be any point in the future: not just today's questionable replacement for junior desk jobs with no UBI, but anywhere and anything, because forecasting is famously hard.

  1. Page 9: https://web.archive.org/web/20260309224829/https://pos.org/wp-content/uploads/2026/03/260072-NBC-March-2026-Poll-03-08-2026-Release.pdf

  2. https://www.businessinsider.com/meta-ai-alignment-director-openclaw-email-deletion-2026-2?op=1

  3. https://en.wikipedia.org/wiki/Jerry_Pournelle#Pournelle's_iron_law_of_bureaucracy
