At 0546, a duty phone buzzed on a watch floor. An artificial intelligence-enabled voice tool relayed a priority alert from a crowded maritime corridor: “Hostile act. Patrol craft under fire from civilian-flagged ship. Probability of adversary provocation: 92%.” Twenty minutes later, a short clip appeared online showing a flash on deck, shouted commands, then darkness. An auto-generated caption established ground truth before facts could: “Commercial ship fired first. Hostile actors using civilian cover.” Uncertainty was not resolved. It was bypassed.
Consequences arrived faster than clarity. Insurers recalculated risk, ships diverted, and officials demanded answers before markets opened. A senior commander pressed for a response to restore deterrence. The head of government asked a simpler question: “Are we sure?” Intelligence briefers could offer only the one solid datapoint they believed they had – a percentage. An adviser tried to slow the tempo, but by 0645 the 92% had already hardened into the story.
This is fiction – and an analytic device.
Different works offer insight through different forms, but they share a common virtue: making systems visible through people under pressure. Tom Clancy’s 1994 book Debt of Honour imagines a commercial airliner used as a weapon against a joint session of Congress. After the 11 September 2001 attacks, the resemblance felt eerie. But the value was not prediction so much as a reminder that institutions often fail first by dismissing low-probability scenarios as implausible. Max Brooks’s World War Z, told as an oral history, portrays catastrophe through competing bureaucracies, uneven responses and improvisation under stress. Shakespeare’s King Lear distils legitimacy and judgement into a political collapse driven by incentives that reward performance over truth. Zero Day Attack, a recent anthology series in Taiwan, sketches coercion and invasion risk in ways that force viewers to confront fear, signalling, and social fracture, not just military action. This is not limited to novels or television: the US National Intelligence Council’s Global Trends 2025 used fictionalised scenarios told through devices such as letters, a presidential diary entry, and journalism-style reporting.
Across these works, fiction does not replace analysis. It sharpens it by exposing the assumptions, sequencing and human reactions that official writing often leaves off the page.
Fiction does something policy writing often sidesteps. It puts emotion back into the machinery of decision-making. In real crises, leaders do not merely calculate. They worry about appearing weak, losing the initiative, or being blamed for hesitation if events worsen. Advisers and bureaucracies designed to support them carry reputations of their own, which can distort what they elevate, suppress or frame as urgent. Under those conditions, an emotional narrative can pull otherwise rational actors toward escalation before they have fully tested what they think they know.
Consider the opening scene of this article. Fiction compresses time so readers can see mechanics that real crises often hide behind paperwork, processes and noise. It lets events unfold fast enough for the logic of escalation to become visible, inviting readers to see themselves in the chain of decisions. In that sense, fiction functions like a tabletop exercise with emotional stakes. Imperfect data collides with reputations, incentives and institutional tempo.
Used well, fiction earns its keep by dissecting the anatomy of a failure. The point is not the cinematic outcome but the system and personalities that produced it. In this vignette, the failure begins with automation bias and ends with institutional commitment across the national security enterprise. Fed skewed, incomplete or mismatched data, the model latches onto signatures that fit a persuasive narrative and converts them into artificial confidence. Senior leaders, under pressure, treat the output less as a hypothesis to test than as a fact to organise around. Staff then turn it into an anchor, shaving caveats to meet the demand for speed and coherence. Dissent starts to sound like delay. Once commitment spreads across offices, markets and commands, changing course no longer feels prudent. It feels weak.
Fiction is also a rehearsal for the questions real crises force into the open at the worst possible moment. How quickly does economic fear outrun reassurance? What happens when a model’s confidence score becomes a substitute for judgement? When leaders ask what is known, who can still separate assumption from fact? Who has the authority, or the courage, to slow events before momentum becomes policy?
Years later, the record is clear. No one had fired. The patrol craft’s sensors mistook a mechanical shock for a strike. The “flash” was a flare. In the moment, no one slowed the cascade long enough to demand raw data. The audit trail existed, but it was unreadable at crisis speed. The 92% confidence score traced back to a data mismatch that paired unrelated information. The system was not malicious. It was brittle. By then, the damage had come less from deception than from misplaced confidence.
That is the warning. The danger is not automation alone but the moment faulty assumptions and machine-generated confidence stand in for human judgement. The answer is not less technology but human ownership: knowing what these systems can do, what they cannot, how they fail and when to challenge them. In those conditions, the most dangerous phrase is not “the system is wrong”. It is “because the system says so”. A system can inform a decision but should not relieve humans of responsibility.
The views expressed in this article are those of the author and do not reflect the official position of the US Department of Defense or the US Government.
