When Reality Bends Back
AI-Generated Disinformation for Profit and War
“The first casualty in any war is the truth. On I-Day, thousands of communities across the United States fell into a darkness disguised as light. On I-Day, noon was mistaken for midnight, dawn was accused of being dusk. The altered realities that so many bent to their will started bending back.” -The Missiles on Maple Street are Fake News, EX SUPRA
This week, OpenAI launched Sora 2, their latest video generation model. It’s a simple tool: write a prompt, get custom, AI-generated video content. Critically, OpenAI paired the release of the new model with the launch of a new social media app called “Sora,” which one might describe as TikTok, but for generative AI. Absolutely no one should be surprised that this immediately resulted in the production of video content crafted for digital outrage. Society now has an app explicitly designed to invent the strawman your drunk uncle wants to rant about at Thanksgiving dinner. Or, for those of you who’ve been online too long, we now have a Rule 34 machine.
It’s impossible to avoid generative AI plug-ins, assistants, and viral content these days. When I opened my PDF copy of Ex Supra to pull the opening quote, Adobe recommended a generative AI summary because it was a “long document.” When I open any social media app, every other video is some sort of AI-generated content. There’s an irony in how much corporate AI marketing focuses on saving you time, and yet, because of AI-generated content, I now have to spend extra time deciding whether the boutique clothing store, news story, or viral video has been AI generated or manipulated. No longer is fake content the exception to our feeds; it’s the default. Humanity has taken up residence inside the uncanny valley.
When I made the hardcover edition of Ex Supra, I used then-novel AI-generated images for the chapters and an AI-distorted version of the cover art I’d originally made by hand. I thought it was an interesting way to frame a book about the dangers of AI disinformation: distorting perspectives of the very stories I was telling. In 2025, LLMs are used every day in every sector, and I don’t think we’re ever going to turn back, even if your coworkers hate you for your AI workslop. (It’s me, I’m coworkers.) But it’s become clear that I was right, unfortunately, about how much society would love crafting its own realities. Why live in the real world, where facts push back against your feelings, when you can have the instant gratification of custom, AI-generated narratives? The algorithms of the past 10-15 years that guided us into our own safe spaces of hate, fake news, and dopamine boosts have led to this moment. The machines that reinforced the worst in us are now crafting realities that validate the lies the algorithm told you yesterday.
The thing about crafting new realities is that they have a tendency to take on lives of their own. Any writer knows their readers can pull a million meanings from a novel. Any artist or photographer knows a single picture speaks a thousand words and can inspire just as many emotions. Humanity has always had to deal with disinformation and bullshit, but there’s a distinct difference between the singular propaganda poster or editorial and the flood of modern digital disinformation. Against that flood, legacy media and critical analysis may as well be cavalry charging across no man’s land into a hail of bullets. There’s no easy fix, either. Enhanced digital privacy rights and data restrictions can slow the algorithm down, but they won’t kill it.
If you want a beloved celebrity attacked by an angry mob of [insert affiliation], just press a few buttons and toss it to the masses like a grenade in an ammo shack. If you want the people to be mad today, craft the reality that’ll make their blood boil. Show them your handmade reality over lunch. Get them pissed, make them argue, drive them to create their own content to prove their point or to grab some bullshit off the shelf (BOTS) from a social media site to shove in their debaters’ faces. All the while, a profit is made from third-party data sales and AI-generated slop advertisements.
Surely the bad guys won’t abuse this, right? Well…in the last few years, researchers at RAND have verified what I predicted in Ex Supra: that the CCP, among others, is actively investigating ways to use AI-generated content to precisely target users in disinformation campaigns in peacetime and war. So no, this isn’t just some hysteria about the evolution of technology or “kids today.” Weaponized, targeted disinformation only becomes more lethal as the disinformation becomes more convincing and as we grow ever more dependent as a society on chatbots and AI to handle the first steps of critical thinking for us. If you think the opening moves of the next war, or the next influence campaign by an authoritarian regime, won’t involve flooding the zone with AI-generated bullshit, you haven’t been paying attention.
By becoming addicted to crafting our own realities rather than learning to share one, we are turning ourselves into weapons, primed to be triggered by the slightest shift in the algorithmic cocoon. How confident are you that the algorithm isn’t being guided to cause you harm? How confident are you that the realities you craft today aren’t tomorrow’s weapons turned against you? Do you really think you’ll be safe when reality bends back?
PS: I’ve got a new podcast: Second Breakfast w/ Jordan Schneider, Justin McIntosh, and Eric Robinson. It’s all about defense tech, policy, and warfare.
If you would like to read more about the future of US-China conflict, weaponized disinformation, or what happens next, check out my novel, EX SUPRA. It’s all about the world after the fall of Taiwan, an isolationist and hyper-partisan America, and World War III. It was nominated for a Prometheus Award for best science fiction novel and there’s a sequel in the works! Don’t forget to share and subscribe!



