AI Wargaming and the Risk of Fictional Policy
- Security and Democracy Forum

- Jul 7
In 2015, the novel Ghost Fleet sent ripples through the Pentagon. Though marketed as fiction, it served as something else entirely: a kind of open-source wargame, vividly illustrating what a near-peer conflict with China and Russia might look like. It got enough right to shape serious discussions on Capitol Hill and inside the Defense Department. In hindsight, the book's impact was not accidental. It filled a gap between traditional planning and imaginative forecasting. In some ways, it helped avert the costliest kind of education: learning only after war begins.

Today, a new form of forecasting is emerging, one that promises to be faster, cheaper, and more comprehensive than any team of authors or analysts: AI-assisted wargaming. With tools capable of modeling vast datasets and simulating thousands of possible outcomes, AI promises to revolutionize defense planning. But with that promise comes a serious risk: that fictional policy, now generated by machines, could become even more persuasive, more confident, and more misleading.
Unlike human authors, large language models and other AI systems tuned with reinforcement learning are trained to please. They produce responses that reflect the biases, assumptions, and prompting of their users. If a commander wants a model to “show” that a cyber strike deters escalation, odds are the AI will deliver just that. It may not challenge faulty assumptions. It may amplify confirmation bias. And it will do so with simulated certainty, delivered in clean charts and confident prose.
This is not a future risk. It is already happening. AI systems are being used to generate red-teaming scenarios, explore logistics chains, and model escalation ladders. Many of these systems are trained on data that is incomplete, unvetted, or based on past patterns that may not hold in future conflicts, especially those involving near-peer competitors or hybrid warfare.
The U.S. military has long used wargaming to test ideas without shedding blood. But traditional games had one built-in advantage: human friction. Participants could question the assumptions, challenge each other’s moves, and debate the logic of the scenario. AI-driven systems offer none of that unless explicitly programmed to do so, and even then, they often reflect a narrow interpretation of “rationality.”
To be clear, AI-assisted tools have value. They can help planners surface blind spots, process enormous logistical inputs, and test scenarios at speed. But they must be used with extreme care. The danger is not just that they’ll be wrong; it’s that they’ll feel right. They’ll feel authoritative. And they’ll give cover to decision-makers looking for a shortcut to validation rather than a path to truth.
What we need now are guardrails. Every AI-assisted wargame should be reviewed through adversarial testing, with red teams explicitly designed to poke holes in the system’s assumptions. Prompts and parameters must be transparent. Outputs should be treated as hypotheses, not predictions. And most importantly, senior leaders must remain skeptical of simulations that seem too clean, too neat, too easy.
Ghost Fleet worked because it blended imagination with insight, narrative with nuance. AI can be a tool in that same tradition, but only if we remember that machines can replicate our thinking, not replace it. The future of strategy demands more creativity, not less. More humility, not more certainty. And more fiction that helps us think, not automation that helps us forget how.



