This week, President Biden and General Secretary Xi are due to meet in California for bilateral talks alongside APEC. Among the headlines leading up to this meeting are reports of an AI working group between Beijing and Washington. I have my doubts, based upon *gestures at every bilateral and multilateral agreement the CCP ignored in the last ten years*, that anything substantial will come from these dialogues. But I thought I might as well put my graduate degree focus to good use and talk a bit about weaponized artificial intelligence (WAI) and US-China military competition to help frame the debate.
Earlier this year, during the last AI news blitz, I wrote that I worry far more about the creators of the machine than the machine itself. Thinking machines are like children: products of their creators and reflections of the environment in which they were trained. Reading what leaders, and even most people my age, think about AI is a bit like watching Boomers try to define Zoomer slang out of context. So when we talk about AI on the battlefield or in the policy space, we can't talk about it without the context of specific systems, their owners and enablers, and broader operations and strategy. You actually have to try to understand the underlying technology. You can't just check a box that says "warbot bad" because you watched Terminator thirty years ago.
Full disclosure: It is the longstanding opinion of this author that lobotomizing the United States military right at the dawn of a revolution in weaponized artificial intelligence and killer robots would be one of the greatest strategic miscalculations in military history. For a democracy, warbots are our best force multiplier just as attrition returns to the center stage of warfare. We can talk about the left and right limits of any technology, but any talk of an outright ban (even if equally enforced) would put the United States at a significant disadvantage. Systems like LRASM, NGAD, and others that sit on the border of what is popularly thought of as full autonomy are the future of our force, and they're really our only way to match the industrial, geographic, and personnel advantages of the PLA in a conflict. They don't replace the man or woman on the ground outright, but they reduce the attrition of our most valued asset, our people, and add cheap firepower to expensive systems. With all that being said, let's talk about some Do's and Don'ts for WAI policy discussions and negotiations.
Do: Take the time to learn and explicitly clarify each party's understanding of WAI terms like "human in the loop/on the loop/off the loop" and what each would technically look like. When we think arms control, we think nukes. It's relatively easy to count and define warheads and launchers. It's a lot harder to look at a tank from a satellite or an official visit and know whether there is a human somewhere in the kill chain. In a way, this is reminiscent of the challenges faced by the parties to the Washington Naval Treaty…which turned out so well.
Don’t: Mirror US strategic posture and killer robot fears onto the PLA and CCP. Beijing may in fact be worried about an ASI too (because it would challenge the Party) but it may be far more concerned with the near-term balance of forces in the Pacific. After all, PLA modernization is centered around the intelligentization of its operations. They’re likely to view any attempts at arms control as an attempt to undercut China while the US still holds an advantage.
Do: Take a careful inventory of what we already have in our arsenal, and how that shapes our operations relative to our rivals. Don't just assume either that we already have the types of weapons under discussion or that they're so far off that banning them won't impact national security; the devil's in the details (and the SAPs).
Don't: Rely on activist groups for your sources of knowledge and perspective. Activist groups, of all stripes, very rarely see the world beyond their niche, and very rarely know when to slow their roll. On this topic especially, the conversation runs more on emotion than on nuanced logic.
Do: Emphasize the importance of restricting artificially intelligent systems in nuclear command, control, and communications. This is a narrow enough control that a verification regime, if we can get past some paranoia, could actually be implemented and would actually benefit all of humanity. In fact, it's a control mechanism that we could likely extend to a multilateral treaty beyond just the US and China. Of course, it's not just about Skynet pulling the trigger; it's about regulating who informs the commander-in-chief during a crisis. The dataset behind an AI advising a commander during a nuclear crisis would be a prime target for deliberate corruption, as if it weren't already hard enough to get the right information to the right people in time. The risk of accidental nuclear war is enough for me to gladly endorse WAI restrictions here, so long as we do it right and don't leave loopholes.
Don't: Argue that war machines somehow make war humane, or that restricting WAI keeps war "human." Don't romanticize this. War is hell, and history is littered with well-intentioned inventors trying to make war more humane through deadly inventions. Nor, however, does handing a robot a machine gun somehow bring on the apocalypse. As a society, we've relied for decades on AI narratives driven by hysteria and science fiction, with little real-world data to push back on them. That is going to change very soon, and I think you'll see that while warbots can be force multipliers, they're not going to be a silver bullet for combat, especially as the technology proliferates around the world. In fact, the most destructive employment of AI on the battlefield would be holding ourselves to an arms control treaty that our rivals ignored. The PRC in particular has a long track record of signing agreements and then blowing them off as soon as it gets what it wants. When the missiles start flying, I don't want to be the only guy without a warbot fighting by his side.
If you enjoyed this article, check out my novel, EX SUPRA. Recently nominated for a Prometheus Award for best science fiction novel, it's the story of the war after the next war. From the first combat jump on Mars to the climate change-ravaged jungles of Southeast Asia, EX SUPRA blends the bleeding edge of technology with the bloody reality of combat. In EX SUPRA, the super soldiers are only as strong as their own wills, reality is malleable, and hope only arrives with hellfire. Follow John Petrov, a refugee turned CIA paramilitary officer, Captain Jennifer Shaw, a Green Beret consumed by bloodlust, and many more as they face off against Chinese warbots, Russian assassins, and their own demons in the war for the future of humanity.