In the Iran war, it looks like AI helped with operations, not strategy

Overheard, from a career diplomat whose country is not one of the key participants in the US-Iran conflict, paraphrased (probably imperfectly) from memory:

This has been a war filled with mistakes. The US underestimated Iran’s resilience, overestimated the chances of regime change, and failed to anticipate Iran’s countermoves.

Yet presumably the US made use of AI in its strategy. It looks like AI helped with operations, but was not good on strategy.

Why should this be? I see at least three reasons off the top of my head, all of which should have been obvious before the bombs were dropped.

First, strategy, far more than, say, writing requisition memos, requires a broad, deep understanding of the world, and generative AI has never had robust models of the world.

Second, strategy (especially in a situation like this) demands an ability to project beyond past data into novel situations. Generative AI has never been very good at that.

Third, there is the well-known tendency of generative AI to be sycophantic: to tell users that their ideas are the greatest ever, even when they are not. It is not altogether difficult to imagine certain members of the senior leadership at the Department of War succumbing to that.

AI is fine for writing memos; nobody should rely on it to plan a war or to guess its likely outcomes.
