Counterarguments
This is a compilation of common objections to concerns about AI dangers and to the push for an AI Pause.
AI is and will be really beneficial to the world
It could be, and we don't disagree with that. But it could also be dangerous, and that includes existential risks.
Human extinction? That’s just AI companies hyping up their tech
But it’s not just AI companies saying it’s an existential threat.
- Hundreds of AI scientists signed this statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
- 86% of AI scientists believe that we could lose control over AI.
- The three most-cited AI researchers (Prof. Yoshua Bengio, Prof. Geoffrey Hinton, and Ilya Sutskever) all warn about existential risk from AI.
Read more about x-risk.
Lose control? AI is just a piece of software, it’s designed by humans
Modern AI is not designed, it's trained. It is quite literally a digital brain, consisting of millions of neurons. A human designs and programs the learning algorithm, but nobody understands the AI that grows after that. We can't predict what abilities they will learn, which is why these are called "emergent capabilities". It took 12 months before scientists discovered that GPT-4 can autonomously hack websites. AI models are already highly unpredictable: even billion-dollar companies can't prevent their models from going crazy or from explaining how to make bioweapons.
Well, if it starts doing crazy things, we can just turn it off
Maybe in most cases, but a really smart AI could spread to other machines. It’s just bytes, so it’s not bound to one location.
But then it needs to be able to hack
GPT-4 can already autonomously hack websites, exploit 87% of tested vulnerabilities, and beat 88% of competitive hackers. How smart do you think GPT-6 will be?
Read more about the cybersecurity risks.
An AI can’t interact with the physical world
Quite a lot of things are connected to the internet: cars, planes, drones, and now even humanoid robots. All of these can be hacked.
And it's not just robots and machines that can be hacked. A finance worker was tricked into transferring $25 million by an AI-generated conference call. An AI can use other AIs to generate deepfakes. And GPT-4 is already almost twice as good at persuading people as people are.
Read more about how good the best AI models are.
Why would an AI hate humans and want to kill us?
It doesn't have to be evil or hate humans to be dangerous to humans. We don't hate chimpanzees, but we still destroy their forests: we want palm oil, so we take their forest, and because we're smarter, chimps can't stop us. An AI might want more compute power to be better at achieving some other goal, so it destroys our environment to build a better computer. This is called instrumental convergence; this video explains it very nicely.
The AIs that I know don’t have a will of their own - they just do what they’re asked
Even if it has no goals of its own and just follows orders, someone is eventually going to do something dangerous with it. There was even a bot called ChaosGPT, which was explicitly tasked with doing as much harm to humanity as possible. It autonomously searched Google for weapons of mass destruction, but it didn't get much further than that. Right now, the only thing protecting us is that AI isn't very smart yet.
It will take at least decades before an AI is smart enough to be dangerous to humans.
On Metaculus, the community prediction for (weak) AGI was 2057 just three years ago, and now it’s 2026.
In 2022, AI researchers thought it would take 17 years before AI could write a New York Times bestseller. A year later, a Chinese professor won a writing contest with an AI-written book.
We don’t know how long we have, but let’s err on the side of caution.
Read more about urgency.
If you ban it here, China will just build it
We're not asking to ban it just here. We need an international pause through a treaty, just like the treaties we have banning CFCs and blinding laser weapons.
Read more about our proposal.
It’s impossible to slow down technology.
We can regulate AI by regulating chips. Training AI models requires very specialized hardware, which is manufactured by only one company, TSMC. That company, in turn, uses machines that are made by yet another single company, ASML. This makes the supply chain for AI chips very fragile, and therefore possible to regulate.
Read more about feasibility.
A Pause would be bad, because…
Some ways in which a pause could be bad, and how we could prevent those scenarios, are explained on this page. If the article doesn't cover your worries, you can tell us about them here.
Nobody wants a Pause
70% of people already believe that governments should pause AI development. Popular support is there; the next step is to let our politicians know that this is urgent.
I can’t make a difference
Yes, you can! There are many ways to help, and we need all the help we can get.