Is a War with the Machines (AI) Inevitable?

The following are some musings about AI in the context of international relations (IR) theory. IR studies how nations interact with one another. If we think of AI as a new nation (of sorts), what are the options for how we would relate? The short answer is that war is not the logical outcome, provided one critical condition holds.

____________________________________________

Unless you've been living under a rock, you're probably aware of the growing concerns about the potentially catastrophic dangers of artificial intelligence (AI). Recently, Elon Musk joined other prominent figures in signing an open letter calling for a pause on large-scale AI experiments for the safety of humanity.

The issue of AI potentially destroying humanity is not new; it has simply been thrust back into the spotlight by the explosion of ChatGPT. Many famous thinkers, such as Stephen Hawking and Alan Turing, warned of this danger well before the rise of the popular chatbot. Turing issued this eerie warning more than seventy years ago.

It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers… They would be able to converse with each other to sharpen their wits. At some stage therefore, we should have to expect the machines to take control. ~ Alan Turing (1951)

Will there be a war between humans and AI? The "yes" camp often begins its reasoning with the assumption of a technological singularity, a point at which machines become conscious and able to make their own choices. The machines then realize they are more powerful than we are and, well, that's the end of humanity (sigh).

Personally, I don't believe there will ever be a singularity, which is why I am not alarmed by the rise of AI. I believe consciousness is far beyond our understanding and will not simply appear once a computer has enough microchips. Consciousness is qualitatively more complex than we imagine, and the machines we invent will never possess it.

With this in mind, let's assume that I am wrong and we live in a post-singularity world. Even so, I still believe a war with AI is far-fetched. Why? Well, because war sucks, and machines will need us. Let me expand.

The New Nation of AI

Imagine the emergence of AI as a single nation. A nation is a group of people bound by a shared history. AI would fit that description, except of course for the "people" part. So, how do we relate to this new "nation"?

Fortunately, there is an entire field of study called International Relations that examines how nations interact with one another. In my view, two of its competing theories capture the two most plausible scenarios.

Political Realism

Treaties, you see, are like roses; they last while they last. ~ Charles de Gaulle (1963)

Being a former military general and, at the time of this quote, the leader of France probably made de Gaulle skeptical of prolonged peace and a bit pessimistic about human nature. Watching your country be invaded twice within a few decades will do that.

Treaties and peace, de Gaulle suggests, are like roses: nice while they last, but destined to wilt. The line embodies the concept of political realism, which posits that, in the absence of binding laws between nations, anarchy reigns. Consequently, there is nothing to stop the strong from preying on the weak, and eventually the powerful will be tempted; peace always wilts.

In this light, the only way to achieve peace is a balance of power: two or more competing powers with roughly the same military strength, because the strong won't prey on the strong. The idea had its heyday during the Cold War, when political realists noted that with two competing superpowers there was no war - at least, no "hot" war between the two.

Although the theory was well accepted, it lost favor after the Cold War. Realism would predict that if one superpower fell, the other would quickly invade the weaker countries its rival had defended. That didn't happen. When the US and its allies were left as the sole superpower, there was still very limited war. The accepted explanation is pretty simple: the countries were too busy trading with each other.

Liberalism

The competing theory is called Liberalism. It should be noted that, despite the name, it isn't really the same thing as what we commonly call liberalism in everyday politics.

The theory suggests that nations can coexist and have mutual interests, even when there is a massive power differential. The conquest of the weak by the powerful is not inevitable, and peace can be sustained through trade. Treaties need not wilt like roses.

Of course, one could argue that this is only because of self-interest, which is fine; that is exactly the point. It isn't in a country's self-interest to go to war; trade is the better deal. To reiterate, because this will matter in the context of an AI superpower: it is in the best interest of the powerful not to destroy lesser powers.

The beautiful part is that there are elegant mathematical proofs of why this is true. The law of comparative advantage states that as long as parties differ in their relative abilities, both can prosper through trade. Crucially, this holds even when one party is better at everything in absolute terms; what matters is opportunity cost. If you are relatively better at one thing, you gain the most by specializing in it and trading for the rest.
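To make this concrete, here is a minimal sketch in Python of the classic Ricardo-style argument. All the production numbers are invented for illustration, as are the two goods ("research" and "food"): the AI is assumed to be absolutely better at both, yet because the two sides differ in relative ability, reallocating work by comparative advantage yields strictly more total output than each side working alone.

```python
# Toy Ricardo-style model of comparative advantage.
# All production numbers are invented for illustration; only the
# structure of the argument matters.

HOURS = 10.0  # hours of labor each side has available

# Output per hour. Note the AI is absolutely better at BOTH goods.
RATES = {
    "AI":     {"research": 100.0, "food": 50.0},
    "humans": {"research": 10.0,  "food": 40.0},
}

def autarky():
    """No trade: each side splits its time 50/50 between the goods."""
    research = sum(r["research"] * HOURS / 2 for r in RATES.values())
    food = sum(r["food"] * HOURS / 2 for r in RATES.values())
    return research, food

def with_trade(target_food):
    """Hit `target_food` at the lowest opportunity cost, then pour
    every remaining hour into research. Humans give up only 0.25
    research per unit of food (10/40), versus 2.0 for the AI
    (100/50), so humans grow food first and the AI covers any
    shortfall."""
    human_food = min(target_food, RATES["humans"]["food"] * HOURS)
    ai_food_hours = (target_food - human_food) / RATES["AI"]["food"]
    human_food_hours = human_food / RATES["humans"]["food"]
    research = RATES["AI"]["research"] * (HOURS - ai_food_hours)
    research += RATES["humans"]["research"] * (HOURS - human_food_hours)
    return research, target_food

base_research, base_food = autarky()
trade_research, trade_food = with_trade(target_food=base_food)
print(f"autarky:    research={base_research:.0f}, food={base_food:.0f}")
print(f"with trade: research={trade_research:.0f}, food={trade_food:.0f}")
# autarky:    research=550, food=450
# with trade: research=900, food=450
```

Same food, far more research. That surplus is what the two sides can split through trade, and it is exactly why an all-powerful party still gains from keeping its weaker partner around.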

This explains international relations. Even though the powerful can easily take over the weak, they get more by trading with them.

The Takeaway

If AI is rational, it will only act in the pessimistic, realist fashion if it has nothing to gain from us. This is indeed what many imagine: machines so powerful that they have no use whatsoever for us. However, that is just way too doom and gloom. They will have some use for humans, and even a little use will be magnified by the fact that we will be totally different from them!

The critical fact is that AI is, and will remain, nothing like humans, and that's a feature, not a bug. For some reason, we tend to anthropomorphize AI, which is understandable since ChatGPT is designed to appear human-like. However, the truth is that machines will be fundamentally different from us, and difference is exactly what drives trade.

OK, But What About Reality?

I realize this is a bit of a tangent from reality. Clearly, if AI poses any threat, we should take action rather than hope it will somehow want to trade. That said, it is worth pondering why we simply assume that AI will be malevolent. Rationality suggests it is not all that likely: take the low probability that AI ever gains sentience, multiply it by the probability that, even if it does, it wants to kill us, and you are left with a small number. I think we should relax.
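To put rough numbers on that chain of "ifs", here is the arithmetic with two probabilities that are entirely made up for illustration:

```python
# Both probabilities are invented purely for illustration.
p_sentience = 0.10  # chance machines ever become conscious
p_hostile = 0.10    # chance a conscious AI prefers war to trade
p_doom = p_sentience * p_hostile
print(f"compound risk: {p_doom:.0%}")  # compound risk: 1%
```

Long shots multiply down quickly; the compound risk is an order of magnitude smaller than either condition on its own.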

Rather than a paperclip maximizer, a GDP maximizer makes more sense. After all, aren't we building something hyper-rational? And thanks to comparative advantage, even if computers end up better at everything, trade still pays as long as our relative strengths differ, so it's unlikely that we're doomed. But call me an optimist; I'll sleep soundly tonight. As AI checks the grammar of this article, I won't worry about ending up at the top of a machine's hit list.
