Top 10 Most Compelling Arguments Against Superintelligent AI

This is perhaps the most important blog post I have written in a decade. Recently Dr. Roman Yampolskiy, one of the earliest AI safety researchers and a computer science professor at the University of Louisville, who says he himself helped coin the term “AI Safety”, came on the DOAC show and gave some of the most compelling arguments against Superintelligent AI that I have come across since I began my research in AI Safety.

These arguments try to give a clearer picture of, and answers to, some of the most pressing questions, such as:

Should we build a Superintelligent AI?

Is it possible to ever control it?

Can laws help us stop the risks of AGI and Superintelligence?

Is releasing AGI even ethical?

So, here are the top 10 arguments he makes against Superintelligent AI, which seek to answer the questions above:

1. Is it possible to control Superintelligent AI?

Dr. Roman Yampolskiy says: “First 5 years at least I was working on solving this problem (of AI safety). I was convinced we could make this happen, we could make Safe AI ... But the more I looked at it, the more I realized every single component of that equation is not something we can actually do. And the more you zoom in - it's like a fractal - you go in and you find 10 more problems… and then 100 more problems. And all of them are not just difficult, they are impossible to solve.”

“Creating perfect safety for Superintelligence, perpetual safety as it keeps improving, modifying, interacting with people ... you're never gonna get there. It's impossible. There is a big difference between difficult problems in computer science (NP-complete problems) and impossible problems (NP-hard problems).”

“And I think indefinite control of superintelligence is such a problem… Once we establish something is impossible, fewer people will waste their time claiming they can do it and looking for money.”

"If we know that it's impossible to make it right to make it safe then this direct path of just build it as soon as you can becomes suicide mission. Hopefully fewer people will pursue that."

Dr. Yampolskiy challenges anyone who claims that Superintelligent AI is indefinitely controllable to prove it to him mathematically, or to publish a peer-reviewed paper with a proof of that claim.
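A small clarification on the "difficult vs. impossible" distinction in the quote above: strictly speaking, NP-complete and NP-hard problems are expensive but still solvable in principle; the genuinely impossible problems in computer science are the undecidable ones, such as the halting problem. Below is a minimal, illustrative Python sketch (my own addition, not from the interview; the function names are made up) of the classic halting-problem contradiction, which is what "provably impossible" looks like:

```python
# Illustrative sketch only (mine, not from the interview): the classic
# halting-problem argument, i.e. what "impossible" means in computer
# science, as opposed to merely expensive problems like NP-complete ones.

def halts(program, argument) -> bool:
    """Hypothetical perfect analyzer: True iff program(argument) halts.
    Turing proved no such function can exist for all inputs."""
    raise NotImplementedError("provably cannot be built in general")

def contrarian(program):
    """Does the opposite of whatever halts() predicts about program(program)."""
    if halts(program, program):
        while True:        # halts() said it halts, so loop forever
            pass
    return "done"          # halts() said it loops, so halt immediately

# If a working halts() existed, contrarian(contrarian) would contradict its
# own prediction -- hence no general-purpose halts() can exist.
```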

2. What about the current state of AI Safety? Isn’t it working?

He says: “There are little fixes that we put in place, and quickly people find ways to work around them (instead of seminal work where we solve a problem and don't have to worry about it anymore). They (people) jailbreak whatever safety mechanisms we have ... While progress in AI capabilities is exponential, or maybe even hyper-exponential, progress in AI Safety is linear or constant. The gap is increasing."

Given how the internal AI safety teams at frontier AI research labs are progressing, they often resort to patching these AI safety problems or trying to suppress them. And I agree with him that these very problems could resurface if a hacker finds some workaround or bypasses the safety guardrails via jailbreaking or many-shot techniques.
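To make the shape of that gap concrete, here is a tiny, purely illustrative Python sketch (the growth rates are my own assumptions chosen only to illustrate "exponential vs. linear", not measurements from anywhere): even with steady linear progress on safety, exponential capability growth pulls away quickly.

```python
# Toy illustration of the widening capability-vs-safety gap Yampolskiy
# describes. The specific growth rates below are assumptions for
# illustration only, not real measurements.

def capability(year: int) -> float:
    return 2.0 ** year      # assumed: capabilities double every year

def safety(year: int) -> float:
    return 10.0 * year      # assumed: safety improves by a fixed amount per year

for year in range(0, 11, 2):
    gap = capability(year) - safety(year)
    print(f"year {year:2d}: capability={capability(year):8.0f}  "
          f"safety={safety(year):4.0f}  gap={gap:8.0f}")
```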

3. Shouldn’t we be able to shut down AGI or Superintelligent AI if needed?

Dr. Yampolskiy adds: “It's so silly. Like, can you turn off a virus? You have a computer virus, you don't like it. Turn it off! How about Bitcoin? Turn off the Bitcoin network. Go ahead, I'll wait ... This is silly! These are distributed systems, you cannot turn them off. And on top of that, they are smarter than you. They can make multiple copies of themselves ... they can then turn you off before you turn them off.”

It is, of course, nearly impossible to shut down Bitcoin or any decentralized network. It could be catastrophic if an AI decided to spread and copy itself across machines on such networks, and this is getting easier by the day: AI agents could hold Bitcoin or any other cryptocurrency and use it to buy compute power on decentralized networks like Filecoin or any compatible network. Shutting down a decentralized network like Bitcoin would mean shutting down the internet itself. Even that would not guarantee that Bitcoin stops, because people could use other means to connect nodes to each other (Bluetooth? LoRaWAN?) and keep mining Bitcoin until most of the nodes come back into contact with each other and reach consensus on the longest chain of the blockchain.
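To see why "just turn it off" fails for anything replicated, here is a minimal Python sketch (my own illustration; the node names and chain contents are made up, and real Bitcoin consensus with proof-of-work and the longest-chain rule is far more involved): as long as one copy of the state survives, any new node can re-sync from it.

```python
# Minimal sketch of why partial shutdowns do not kill a replicated network:
# any surviving copy can re-seed everyone else. Purely illustrative.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    chain: list = field(default_factory=list)   # toy stand-in for a blockchain

    def sync(self, peer: "Node") -> None:
        # Longest-chain rule, drastically simplified: adopt the longer chain.
        if len(peer.chain) > len(self.chain):
            self.chain = list(peer.chain)

# Five nodes all hold the same three-block chain.
nodes = [Node(f"node{i}", ["block1", "block2", "block3"]) for i in range(5)]

# "Shut down" four of the five nodes; one copy of the state survives.
survivor = nodes[0]

# New nodes that later join and sync with the survivor recover everything.
newcomers = [Node(f"new{i}") for i in range(3)]
for n in newcomers:
    n.sync(survivor)

print([len(n.chain) for n in newcomers])   # [3, 3, 3] -- the chain lives on
```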

4. If the US does not build Superintelligent AI, authoritarian countries like China will build it and we will be doomed. So it is better that we build it first. Right?

“Whoever has more advanced AI has a more advanced military - no question about that ... but the moment you switch to Superintelligence - uncontrolled Superintelligence - it doesn't matter who builds it, us or them. And if they understand this argument, they also would not build it.”

An uncontrolled Superintelligence would, by definition, be out of anyone's control, so it would really not matter whether we build it or they build it; it would inevitably be bad for both actors. So the argument that "the other country might pursue it, and that's why we must build it first because we are (more) democratic" is still foolish and irrational.

5. Would there still be jobs post AGI and Superintelligence?

Dr. Yampolskiy says that 99% of jobs will be replaced by AI, something that even Nobel prize winner Geoffrey Hinton has claimed. He goes a step further and says that even plumbing jobs will get automated, unlike Hinton, who has said that plumbing will remain a job for humans in the future!

Not just that, he adds: “Ray Kurzweil predicts that 2045 is the year of the singularity. That's the year when progress becomes so fast… (that) this AI doing science and engineering work makes improvements so quickly we cannot keep up anymore. That's the definition of the singularity - the point beyond which we cannot see, understand and predict what is happening in the world (and) the technologies being developed.”

6. Can Superintelligence really kill us?

He says: “It's the superintelligence that ... can come up with completely novel ways of doing it ... Your dog cannot understand all the ways you can take it out. It can maybe think you'll bite it to death or something. But that's all. Whereas you have an infinite supply of resources. So if I asked your dog exactly how you are going to take it out, it would not give you a meaningful answer. It can talk about biting. And this is what we know. We know viruses. We have experienced viruses. We can talk about them. But what an AI system capable of doing novel physics research can come up with is beyond me."

Geoffrey Hinton says the same thing: it is pointless to debate which method a Superintelligent AI would use to kill all of humanity. All we need to know is that it can kill us, and that alone is reason enough to stop building AGI and general Superintelligence. Eliezer Yudkowsky even wrote a book on this, titled “If Anyone Builds It, Everyone Dies”.

In addition to this, Dr. Roman Yampolskiy says the same thing I predicted in my version of AI 2027: someone will use AI to unleash some diabolical virus or bioweapon that gets to us well before superintelligence would. You can read it here; in my version of the scenario this happens in late 2028, and it also received comments and attention from ex-OpenAI researcher and AI 2027 co-author Daniel Kokotajlo.

7. How can we convince AI scientists and companies to stop building General Superintelligence?

He says: “If… (you) truly understand the argument ... that you will be dead, no amount of money will be useful to you. Then incentives switch. The AI scientists would not want to be dead. I think they would be better off not building superintelligence, (instead) concentrating on narrow AI tools for solving (a) specific problem. (For example) my company cures breast cancer. That's all. We make billions of dollars, everyone is happy, everyone benefits - it's a win."

"If people realize that doing this thing is really bad for them personally, they will not do it do our. So our job is to convince everyone with any power in this space - basically creating this technology, creating for this company… they are doing something bad, for them (selves)…. Forget about 8 billion people you are experimenting on without permission… You will not be happy with the outcome. If we get everyone to understand that its a default… that’s not just me saying it Geoffrey Hinton, Nobel prize winner , founder of a whole machine learning space. He says the game thing - Bengio-dozens of others, top scholars - we heard about statement dangers of AI statement signed by thousands of scholars, computer scientists. This is what we think right now and we need to make it a universal (consensus). No one should disagree with it this. And then we may actually make good decisions about what technology to build. It does not guarantee long-term safety of humanity but it means we're not trying to get there as soon as possible to the worst possible outcome.”

8. What if we make strict laws and impose fines for building risky AI systems and Superintelligence?

"I don't think making it illegal is sufficient. There are different jurisdictions. There is you know loopholes and what are you going to do if somebody does it? You're going to fine them for destroying humanity? Like very steep fines for it? What is you gonna do? It's not enforceable. If they do create it, now the Superintelligence is in charge, so the judicial system is not impactful. And all the punishments we have are designed for punishing humans- prisons, capital punishment, doesn't apply to AI.”

This is quite a sound and compelling argument. Companies like Meta have been seen breaking and bypassing laws while becoming bigger than ever… They have been fined before, and given the legal structure of corporations and LLCs, only the companies can be fined and not the people behind them (limited liability). So they continue with the harmful Silicon Valley philosophy of “Move fast and break things.”

Gary Marcus, who testified in the Senate alongside (or opposite, to be precise) OpenAI’s Sam Altman, offers some novel and practical ideas in his book “Taming Silicon Valley”.

9. But I love AI! How can I not pursue advancing AI?

“I'm a scientist, I'm an engineer, I love AI. I love technology. I use it all the time. Build useful tools. Stop building agents. Build narrow superintelligence, not a general one. I am not saying you shouldn't make billions of dollars. I love billions of dollars. But don't kill everyone, yourself included," says Dr. Yampolskiy.

Narrow superintelligence means an AI system that is far more capable than humans, but only within a single field - for example AlphaFold, which is already better than humans at predicting protein structures.

Still, one must understand that even narrow AIs could be potentially catastrophic and harmful due to dual-use risk: an AI that is used to design medicines and understand protein structures so well could, for example, be fine-tuned and used instead to make viruses and bioweapons.

10. Do AI companies have the right to build Superintelligence for us?

“Someone told me that if there was a 1% chance that if I got in a car I might not be alive, I would not get in the car. If you told me there was a 1% chance that if I drank whatever liquid is in this cup right now I might die, I would not drink the liquid! Even if there was a billion dollars ... I won't drink it,” Steven Bartlett says, but Dr. Yampolskiy interrupts him, saying:

"- It's worse than that. Not just you die, Everyone dies! ... Now, would we let you drink it at any odds? That's for us to decide (not you). You don't get to make that choice for us. To get Consent from human subjects of you need them to comprehend what they are consenting to. If these systems are unexplainable, unpredictable, (so) how can they consent? They don't know what they were consenting to. So it's impossible to get consent by definition. So, this experiment can never be run ethically. By definition they are doing unethical experimentation on human subjects."

So, what can we do, if anything?

"Let's make sure we stay in control. Let's make sure we only build things which are beneficial to us. Let's make Sure people who are making these decisions are remotely qualified to do it. (Let’s make sure) They are good not just at science, engineering and business but also have moral and ethical standards. And ... if you're doing something which impacts other people, you should ask their permission before you do that”

“It's not over until it's over. We can still choose not to build general Superintelligence."

Please. Let us not build general Superintelligence. And let us be super careful with narrow Superintelligence, if we build it, because of its dual-use risks. For the sake of yourself, your kids, and other people’s kids. For the sake of not just humanity, but morality itself.

If you are interested in learning and getting started with AI safety, please check out the resources here.

“Limit AI, Liberate Humanity” - (that’s the motto of my startup initiative anyway :) )

Note: Link to the LessWrong article with comments:
