Kissinger on AI: How do you see the international order in the age of AI?
July 5th, 2023

From Google's AlphaGo defeating human Go players to ChatGPT generating buzz across the tech world, every advance in AI technology has touched a nerve. There is no doubt that AI is profoundly changing our society, economy, politics, and even foreign policy, and traditional theories often fail to explain its impact. In "The Age of Artificial Intelligence and the Future of Humanity," the renowned diplomat Henry Kissinger, former Google CEO Eric Schmidt, and Daniel Huttenlocher, dean of the Schwarzman College of Computing at MIT, trace the history of artificial intelligence from their different vantage points and discuss at length the effects its development may have on individuals, businesses, governments, societies, and nations. These leading thinkers argue that as AI grows ever more capable, how to position the role of human beings will be a question we must grapple with for a long time to come. The following is excerpted from "The Age of Artificial Intelligence and the Future of Humanity" with the publisher's permission; deletions and subheadings were added by the excerptor.

Original Authors | [US] Henry Kissinger / [US] Eric Schmidt / [US] Daniel Huttenlocher

What will general artificial intelligence bring?

Do humans and AI approach the same reality from different perspectives, complementing each other's strengths? Or do we perceive two different but partially overlapping realities: one that humans can articulate through reason, and another that AI can account for through algorithms? If the answer is the latter, then AI can perceive things that we do not yet perceive and cannot perceive - not only because we lack the time to reason our way to them, but because they exist in a domain our minds cannot conceptualize. The human quest to "fully understand the world" will change, and people will realize that to gain certain knowledge, we may need to entrust AI to acquire it for us and report back. Whichever the answer, as AI pursues ever more comprehensive and broader goals, it will increasingly appear to humans as a "being" that experiences and understands the world - a being that combines the qualities of a tool, a pet, and a mind.

This puzzle will only deepen as researchers approach, or achieve, general AI. As we discussed in Chapter 3, general AI will not be limited to learning and performing specific tasks; rather, by definition, it will be able to learn and perform a very wide range of tasks, just as humans do. Developing general AI will require enormous computing power, which may mean that only a few well-funded organizations can create it. As with current AI, although general AI could readily be deployed in a decentralized manner, its applications may need to be limited given its capabilities. Restrictions could be imposed by allowing only approved organizations to operate it. The questions then become: Who controls general AI? Who authorizes its use? Is democracy still possible in a world where a few "genius" machines are operated by a few organizations? What would cooperation between humans and AI look like in such a scenario?

If general AI does emerge into the world, it will be a major intellectual, scientific, and strategic achievement. But even if it does not, AI could still bring about a revolution in human affairs. AI's demonstrated drive and ability to respond to the unexpected (or unanticipated) and to provide solutions set it apart from previous technologies. If left unregulated, AI could deviate from our expectations, and thus from our intentions. The decision of whether to limit it, work with it, or submit to it will not be made by humans alone. In some cases, it will be determined by the AI itself; in others, by a variety of contributing factors. Humans may find themselves caught in a "race to the bottom."

As AI automates processes, lets humans explore vast amounts of data, and organizes and reconfigures the physical and social spheres, those who move first may gain an advantage. Competitive pressures may force a race to deploy general AI without sufficient time to assess the risks, or with the risks simply ignored. An ethics of AI is therefore essential. Each individual decision - to limit, cooperate, or comply - may or may not have dramatic consequences on its own, but when those decisions converge, the impact is multiplied.

These decisions cannot be made in isolation. If humanity wants to shape the future, it needs to agree on the common principles that will guide each choice. Collective action is admittedly difficult, and sometimes impossible, to achieve, but individual action without the guidance of a common ethic will only lead humanity as a whole into greater turmoil and upheaval. Those who design, train, and work with AI will be able to pursue goals of hitherto unattainable scale and complexity: new scientific breakthroughs, new economic efficiencies, new forms of security, and new dimensions of social surveillance. Meanwhile, as AI and its uses expand, those who do not gain such power may feel that they are being watched, studied, and acted upon by forces they do not understand and did not design or choose. The operation of such forces is opaque, and in many societies such opacity would not be tolerated from traditional human actors or institutions. Designers and deployers of AI should be prepared to address these concerns, starting by explaining to non-technical people what the AI is doing, what it "knows," and how it does it.

The dynamic and emergent nature of AI creates ambiguity in at least two ways. First, AI may work as we expect it to yet produce results that we cannot foresee. These outcomes may lead humans into situations that even their creators did not anticipate, just as politicians in 1914 failed to recognize that the old logic of military mobilization, combined with new technologies, would drag Europe into war. Deploying and applying AI without careful consideration could likewise have serious consequences.

These consequences can be small, such as a decision by a self-driving car that endangers lives, or extremely significant, such as a serious military conflict. Second, in some applications, AI can be unpredictable, acting in entirely unexpected ways. AlphaZero, for example, developed a style of play that humans had never envisioned in chess's long history, simply by following the instruction to win. While humans may take care to dictate AI's goals, as we give it more freedom, the paths it takes to achieve them may surprise and even frighten us. Thus, both the goals and the mandates of AI need to be carefully designed, especially in domains where its decisions could be fatal. We should neither treat AI as an agent that can be left to operate on its own without care, nor allow it to take irrevocable actions without supervision, monitoring, or direct control.

Artificial intelligence is created by humans, and therefore should be governed by humans. But one of the challenges of AI in our time is that the people with the skills and resources needed to create it do not necessarily have the philosophical perspective to understand its broader implications. Many creators of AI are concerned primarily with the applications they are trying to build and the problems they are trying to solve: they may not stop to consider whether their solution will produce a historic revolution or how their technology will affect different populations. The age of artificial intelligence needs its own Descartes and Kant to explain what we have created and what it means for humanity.

We need to organize reasoned discussions and consultations involving governments, universities, and private-sector innovators, with the goal of establishing limits on practical action like those that govern individual and organizational action today. Artificial intelligence shares some attributes with currently regulated products, services, technologies, and entities, but it differs from them in important ways and lacks a fully defined conceptual and legal framework of its own. For example, the evolving, boundary-pushing nature of AI poses a regulatory challenge: what it is and how it operates may vary from domain to domain, evolve over time, and not always present itself in predictable ways. The governance of people is guided by a code of ethics. Artificial intelligence requires an ethical code of its own, one that reflects not only the nature of the technology but also the challenges it poses.

Often, established principles do not apply here. In the age of faith, when a defendant faced trial by combat, the court might determine the charge, but God decided who prevailed. In the Age of Reason, humans determined culpability according to the precepts of reason, convicting and imposing punishment on the basis of concepts such as causation and mens rea. But artificial intelligence does not operate by human reason, nor does it have human motivation, intent, or the capacity for self-reflection. The introduction of AI will thus further complicate the principles of justice we apply to humans.

When an autonomous system acts on its own perceptions and decisions, does its creator bear responsibility? Or are the actions of an AI not to be conflated with those of its creator, at least not as a matter of imputed guilt? If AI is used to monitor for signs of criminal behavior, or to help determine whether someone is guilty, must the AI be able to "explain" how it reached its conclusions before human officials can credit them? Moreover, at what point in the technology's development, and in what context, should AI become the subject of international consultation? This is another important matter of debate. If attempted too early, the technology's development may be hindered, or parties may be tempted to conceal its capabilities; if delayed too long, the consequences could be devastating, especially in the military context. The challenge is compounded by the difficulty of devising effective verification mechanisms for a technology that is nebulous, obscure, and easily disseminated. The official negotiators will necessarily be governments, but there is also a need for a platform that includes the voices of technologists, ethicists, the companies that create and operate AI, and others outside the field.
