Fear not the god from the machine, fear the man or woman who wants their machine to be seen as a god.
There is no hubris that compares to the inventor who builds a machine and proclaims it a deity. The recent reaction, particularly in DC and Silicon Valley, to ChatGPT and similar machine learning systems confirms what I have long thought about the development of AI: that we wouldn’t recognize a self-aware machine if it stared us in the face. Moreover, not only will we fail to recognize it; many will take advantage of that ignorance for profit and power, like the peddlers of magical elixirs in the 19th century.
ChatGPT’s ability to recreate the most basic think tank talking points and synthesize (often poorly) information from a database is just the next evolution in narrow artificial intelligence. Narrow AI is the kind of AI we have today and have had for some time: systems designed for very specific (narrow) tasks that cannot perform beyond their programmed limits. It’s not an artificial general intelligence (AGI), an AI that can think at the capacity and complexity of a human; it’s another merger of chatbots and search engines, a lineage that has been steadily evolving since the 90s. And yet, if you only read the headlines, you’d think we gave Skynet the nuclear codes, because many couldn’t believe that an algorithm could do the most tedious and low-brainpower aspects of their jobs for them. OpenAI and a wave of CEOs and tech philosophers are calling for the regulation of AI development (in ways that almost certainly give them government-endorsed business advantages). Suddenly the titans of Silicon Valley want to put the brakes on an industry that has lacked any sort of ethical code for the last 40 years, because ChatGPT can replicate the terrible aspects of the internet they themselves built and engage with enthusiastically. You shouldn’t fear the machine; you should fear the people who built it, and the lack of public education on artificial intelligence that incites panic at the first “Hello world!” You should worry about who wants to set the rules for AI, whether our lawmakers even understand it, and whether you, the average citizen, will have access to the power unlocked by machine learning, lest we suffer permanent information asymmetry. AI doesn’t need to be locked up, it needs to be democratized.
In the national security world, the first response anytime someone brings up AI on the battlefield is usually Terminator. “Give the system the nuclear codes because we don’t trust people” or “give the AI command of our drone fleet” are the default nightmare scenarios. And don’t get me wrong, I am certain there are folks out there who see nothing wrong with that. Nothing corrupts like a blank check. But my greater concern is the mass of national security leaders, and really the general public, not understanding the wide world of AI in between Skynet and Cortana (pre-rampancy). When we talk about the deployment of narrow AI, we’re talking about aides for analysts that can manage the mountains of data we produce every day; the missile that can make the correct call better than a pilot in a denied environment, when it’s too costly to send in a manned system; and cyber defense architecture that can detect and react to enemy AI infiltration of, and assaults on, our infrastructure and communications, from social media to the red phone. I’ve touched on this a bit in EX SUPRA, but the worst thing that can happen in AI development is for the United States to succumb to anti-competitive and luddite policies concerning AI in warfare and decision-making. In the EX SUPRA timeline, the US passes anti-weaponized AI (WAI) legislation after police deploy AI-enabled patrols that commit atrocities against the people they’re meant to protect. The result is a lobotomizing of every armed, smart system in the US, including the DoD’s. The technology itself is neither good nor bad; it is the employment of technology that determines its morality. Machines like ChatGPT already demonstrate this by replicating the data streams they are fed. We see ChatGPT become manipulative because that’s what its training data taught its algorithms. If you feed it puppies and unicorns, it’s gonna give you puppies and unicorns.
If you feed the machine 4Chan, it’s gonna give you 4Chan. For the purposes of national security, you should worry more about an AI-assisted leader informed by a polluted data stream than about an AI pressing the red button.
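The garbage-in, garbage-out point can be illustrated with a toy bigram language model, a deliberately simple stand-in for the vastly larger statistical machinery behind something like ChatGPT. The model below has no understanding and no intent; it can only ever emit words that appeared in whatever text it was trained on:

```python
import random
from collections import defaultdict

def train_bigram(corpus):
    """Build a bigram table: each word maps to the list of words observed after it."""
    words = corpus.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, n=8, seed=0):
    """Sample a short sequence by repeatedly picking a recorded successor word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        successors = table.get(out[-1])
        if not successors:
            break  # dead end: the model has never seen this word followed by anything
        out.append(rng.choice(successors))
    return " ".join(out)

# Feed it puppies and unicorns...
friendly = "puppies love unicorns and unicorns love puppies and puppies play"
model = train_bigram(friendly)
sample = generate(model, "puppies")
print(sample)

# ...and it can only give you puppies and unicorns: every generated word
# necessarily comes from the training text.
assert set(sample.split()) <= set(friendly.split())
```

Swap the training corpus for a toxic one and the same code produces toxic output; nothing about the algorithm changed, only the data stream it was fed.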
Of course, it won’t be quite that simple in the near future. Much as in chemistry, physics, and the other sciences, achieving the great heights of research and discovery that power the future of humanity often means opening the door to the ugliest atrocities. Since 1945, we’ve managed to survive and thrive in a world in which the fear of nuclear Armageddon and atoms for peace coexist. I expect the development of artificial intelligence to follow a similar path. Someday, someone may create an evil artificial superintelligence (ASI), and someone else may create a benevolent one. We may create a machine capable of curing all disease and one that can develop new, even more lethal diseases. Nature finds balance, and I suspect thinking machines will become just another character in the human story. In war we say that the character of warfare changes, but not the nature. Think of war and AI like a sports match: the rules generally remain the same, but the style of play, the techniques, and the field equipment all change constantly.
That all being said, we’ve had a number of close calls since 1945. But we’ve also lifted billions out of poverty, created clean energy, sent people to the moon, built the internet, and built weapons *already* capable of executing missions on their own. That’s the story of humanity: overcoming the worst adversity and destruction to move into the future. AI is just the next chapter, not the endgame.
Naturally, there are plenty of concerns about AI, like any technology, and how it may impact the economy. Futurists are quick to choose one of two extremes: AI will either create a paradise economy of life fulfillment or force us all into a cyberpunk dystopia filled with despair and meaninglessness. Occam’s razor, and history, suggest the truth lies in the middle. Every new solution brings new challenges, which in turn require greater innovation and better policy. Undoubtedly, some will lose their jobs to AI. Undoubtedly, that will lead to a negative political reaction (if you haven’t read Burn-In by Peter Singer and August Cole, you really should). Every industrial revolution suffers this cycle. But new opportunities will undoubtedly emerge, as they always have, as new generations adapt to the world they’re born into. This isn’t to say the changes in the economy are “acceptable losses,” but that this isn’t the first time we’ve encountered this conundrum, and it’s on the government to appropriately balance the needs of the workforce against economic growth. While I don’t think the policy community is ready to address it in any serious fashion, an AI bill of rights for workers should be considered sooner rather than later. Once again, just as with how we conduct ourselves on the battlefield and in law enforcement, it’s on us to act and govern in a manner befitting a mature democracy.
In summary, don’t let the panic consume you, don’t let people take advantage of you by invoking fear and ignorance as strengths, and don’t let humanity be held back today by the anxieties we’ve held in perpetuity about tomorrow. Lobotomizing and unilaterally disarming our future is a great way to ensure our children won’t have one. Or at least, they won’t have a future that any of us will want to live in.
If you enjoyed this article, check out my novel, EX SUPRA. Recently nominated for a Prometheus Award for best science fiction novel, it’s the story about the war after the next war. From the first combat jump on Mars to the climate change-ravaged jungles of Southeast Asia, EX SUPRA blends the bleeding edge of technology and the bloody reality of combat. In EX SUPRA, the super soldiers are only as strong as their own wills, reality is malleable, and hope only arrives with hellfire. Follow John Petrov, a refugee turned CIA paramilitary officer, Captain Jennifer Shaw, a Green Beret consumed by bloodlust, and many more, as they face off against Chinese warbots, Russian assassins, and their own demons in the war for the future of humanity.
And if you have any suggestions for topics for future articles, please send them my way on Twitter @Iron_Man_Actual. And don’t forget to subscribe to Breaking Beijing!