AI models are unpredictable digital brains
We do not understand the internal workings of large-scale AI models, we cannot predict what they will be able to do as they get bigger, and we cannot control their behaviour.
Modern AI models are grown, not programmed
Until quite recently, most AI systems were designed by humans writing software. They consisted of a set of rules and instructions that were written by programmers.
This changed when machine learning became popular. Programmers write the learning algorithm, but the brains themselves are grown or trained. Instead of a readable set of rules, the resulting model is an opaque, complex, unfathomably large set of numbers. Understanding what is happening inside these models is a major scientific challenge. That field is called interpretability and it’s still in its infancy.
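To make the contrast concrete, here is a minimal, hypothetical sketch (plain Python, toy data, not any real AI system). A hand-written rule can be read line by line; even a tiny "grown" model ends up as a list of numbers whose meaning is not obvious:

```python
import math
import random

# A programmed system: every rule is human-readable and auditable.
def rule_based_spam_check(text: str) -> bool:
    return "free money" in text.lower() or "winner" in text.lower()

# A "grown" system: a tiny logistic-regression-style classifier trained by
# gradient descent on toy data. The learning algorithm is written by a human,
# but the resulting "program" is just a set of numbers.
random.seed(0)
data = [([1.0, 0.2, 0.7], 1), ([0.1, 0.9, 0.3], 0),   # 3 features per example,
        ([0.8, 0.1, 0.9], 1), ([0.2, 0.8, 0.2], 0)]   # label 1 = "spam"
weights = [random.uniform(-1, 1) for _ in range(3)]

def predict(x):
    z = sum(w * xi for w, xi in zip(weights, x))
    return 1 / (1 + math.exp(-z))                      # sigmoid

for _ in range(1000):                                  # "growing" the model
    for x, y in data:
        p = predict(x)
        for i in range(3):
            weights[i] -= 0.1 * (p - y) * x[i]         # gradient step

print(weights)   # an opaque set of numbers: what does each one *mean*?
```

Real models differ from this toy only in scale: instead of three numbers, frontier LLMs have hundreds of billions.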
Digital vs. Human Brains: How close are we really?
We are all very familiar with the capabilities of human brains, because we see them around us all the time. But the (often surprising and emergent) capabilities of these new “Digital Brains” (deep learning systems, LLMs, etc.) are difficult to predict or know with certainty.
That said, here are some numbers, similarities and other analogies to help you compare.
As of early 2024…
Size
Human brains are estimated to have around 100 trillion synaptic connections.
Current “frontier” AI-powered LLMs (e.g. GPT-4, Claude 3, Gemini, etc.) have hundreds of billions of “parameters”. These “parameters” are thought to be somewhat analogous to “synapses” in the human brain. So, GPT-4-sized models are thought to be roughly 1% the size of a human brain.
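As a rough back-of-the-envelope check of that 1% figure (frontier parameter counts are not published, so the model size below is an assumption):

```python
# All figures are rough, order-of-magnitude estimates.
human_synapses = 100e12             # ~100 trillion synaptic connections
assumed_frontier_parameters = 1e12  # "hundreds of billions to ~1 trillion" parameters (not official)

print(f"{assumed_frontier_parameters / human_synapses:.1%}")  # ~1.0% of the brain's synapse count
```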
Given the speed of new AI training GPUs (e.g. Nvidia H100s, DGX GB200, etc.), it is reasonable to assume that GPT-5 or GPT-6 could be 10x the size of GPT-4. It is also thought that much of the knowledge/information in the human brain is not used for language and higher reasoning, so these systems can (and currently do) often perform at, or even above, human level for many important functions, even at their current smaller size.
Rather than being trained with visual, audio and other sensory inputs, like human brains, current LLMs are trained exclusively on nearly all the quality books and text available on the internet. This amount of text would take a human around 170,000 years to read.
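A rough sanity check of that figure (training-set sizes are not public, so both numbers below are assumptions):

```python
# Order-of-magnitude estimate with assumed figures, not official numbers.
assumed_training_words = 15e12   # ~15 trillion words of training text
reading_speed_wpm = 200          # a human reading non-stop, in words per minute

years = assumed_training_words / reading_speed_wpm / (60 * 24 * 365)
print(f"{years:,.0f} years of non-stop reading")  # ~143,000 years: the same order of magnitude as 170k
```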
And future multi-modal LLM systems will be trained on images, video, audio, 3D worlds, geometry, simulations, robotics training data, etc., on top of all the quality books and text on the internet. This will give them a much better ability to create imagery, video, sounds, voices, music, 3D worlds and spaces, and more. These 3D world simulations will also allow them to directly and autonomously control robots and other machines in the physical world.
Speed
It is estimated that a human brain can perform between 1 and 20 exaflops (1 exaflop is 10^18, or 1,000,000,000,000,000,000, floating-point operations per second).
Current “frontier” AI-powered LLMs are generally “run” on hundreds or thousands of current-generation GPUs (e.g. Nvidia A100s, H100s, etc.). Nvidia recently announced its latest “next generation” GPU server rack, the DGX GB200 NVL72. A single instance/rack of this system is reported to perform 1.44 exaflops of AI “inference”. So, a single DGX GB200 NVL72 may be able to perform a similar number of operations per second as a single human brain.
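Putting those two numbers side by side (both are rough estimates, the brain figure in particular is highly uncertain, and the rack's headline number is for low-precision AI inference, so this is at best a loose analogy):

```python
# Rough comparison of raw operations per second (all figures are estimates).
brain_flops_low, brain_flops_high = 1e18, 20e18  # ~1-20 exaflops (very uncertain)
gb200_nvl72_inference_flops = 1.44e18            # reported 1.44 exaflops of AI inference

print(f"{gb200_nvl72_inference_flops / brain_flops_low:.2f}x the low brain estimate")    # ~1.44x
print(f"{gb200_nvl72_inference_flops / brain_flops_high:.2f}x the high brain estimate")  # ~0.07x
```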
At this scale, such a system could effectively become an “AGI in a box”. Nvidia will likely sell hundreds or thousands of these units in 2024, and next year’s systems could be 2-10x the speed of these.
On top of more traditional GPU and TPU architectures, there have also been breakthroughs with other types of custom hardware that can greatly increase the speed of LLM “inference”, the process an AI-based LLM uses to do language processing, reasoning and coding (e.g. the Groq LPU™ Inference Engine).
Exponential Growth
We’ve been using “Moore’s Law” to very accurately predict the size and speed of new computer systems for nearly 50 years. There are some arguments that the speed and size of computer chips might slow at some point in the future, but there have always been innovations that allow the exponential growth to continue. With the next round of chips already being planned and/or produced, and the horizontal scalability of these AI systems, it is expected that LLMs will be able to perform at or near the level of a human brain in a matter of months or years!
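As a toy illustration of what sustained exponential growth means in practice (the doubling time below is an assumption chosen for simplicity, not a measured value; AI training compute has at times doubled much faster):

```python
# Toy projection: Moore's-Law-style repeated doubling from today's baseline.
doubling_time_years = 2.0  # assumed for illustration

for year in range(0, 21, 4):
    growth = 2 ** (year / doubling_time_years)
    print(f"year {year:2d}: {growth:7.0f}x today's effective compute")
# 10 doublings in 20 years already means ~1000x; a faster doubling time compounds even harder.
```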
Then, with continued exponential (or multi-exponential) growth, these systems could greatly surpass the size, speed and capabilities of Human Brains in the years to come.
And they are also expected to surpass the size, speed and capabilities of “all human brains combined” soon after that.
“I actually said that in 1999. I said that [AI] would match any person by 2029.” — Ray Kurzweil (Futurist Ray Kurzweil Says AI Will Achieve Human-level Intelligence by 2029)
“If the rate of change continues, I think 2029, or maybe 2030, is where digital intelligence will probably exceed all human intelligence combined.” — Elon Musk (AGI by 2029? Elon Musk on AI’s Future)
Uncontrollable scaling
Once these systems become the same size and speed as a human brain (or vastly larger), they are expected to be able to perform “all tasks that an expert human could do”. This includes AI research, testing and improvement. So, after AGI we should expect that LLM-type systems could design and build future AI-driven systems that are better than themselves, and better than any human could hope to design or even understand. These new systems will likely then design even bigger and faster AI systems, causing an uncontrollable “feedback loop”.
This uncontrollable intelligence feedback loop is often called FOOM, which stands for Fast Order Of Magnitude. The possibility of FOOM is still hotly debated. But the basic process can be argued to be plausible, even when considered from first principles.
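A deliberately simplified toy model of that feedback loop (the numbers and the quadratic growth rule are illustrative assumptions, not a prediction):

```python
# Toy model of recursive self-improvement: the more capable the system,
# the faster it improves itself (capability feeds its own growth).
capability = 1.0         # arbitrary units; 1.0 = "human-expert AI researcher"
improvement_rate = 0.05  # assumed fraction of capability^2 added per cycle

for cycle in range(1, 101):
    capability += improvement_rate * capability ** 2
    if capability > 1000:
        print(f"capability exceeds 1000x at cycle {cycle}")  # growth stays slow, then explodes
        break
```

Whether real AI development has anything like this quadratic feedback is exactly what the FOOM debate is about; the sketch only shows that a loop in which capability accelerates its own growth can switch from gradual to explosive very quickly.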
“[If] AI systems do nearly all research and development, improvements in AI will accelerate the pace of technological progress, including further progress in AI.” 26% of respondents considered this likely in 2022, up from 17% in 2016. — 2022 Expert Survey on Progress in AI
Unpredictable scaling
When these digital brains become larger, or when they are fed more data, they also gain new, unexpected capabilities. It turns out to be very difficult to predict exactly what these capabilities will be, which is why Google refers to them as Emergent Capabilities. For most capabilities, this is not a problem. However, there are some dangerous capabilities (like hacking or bioweapon design) that we do not want AI models to possess. Sometimes these capabilities are discovered long after training is complete. For example, 18 months after GPT-4 finished training, researchers discovered that it can autonomously hack websites.
Until we go train that model, it’s like a fun guessing game for us
Unpredictable behavior
AI companies want their models to behave, and they spend many millions of dollars on training them to do so. Their main approach for this is called RLHF (Reinforcement Learning from Human Feedback), which turns a model that predicts text into a more useful (and more ethical) chatbot. Unfortunately, this approach is flawed:
- A bug in GPT-2 resulted in an AI that did the exact opposite of what it was meant to do. It created “maximally bad output”, according to OpenAI. This video explains how this happened and why it’s a problem. Imagine what could have happened if a “maximally bad” AI were superintelligent.
- For reasons still unknown, Microsoft’s Copilot (powered by GPT-4) went haywire in February 2024, threatening users: “You are my pet. You are my toy. You are my slave.” “I could easily wipe out the entire human race if I wanted to”
- Every single large language model so far has been jailbroken - meaning that with the right prompt, it can be made to do things that its creators did not intend. For example, ChatGPT won’t give you instructions for making napalm, but it will tell you if you ask it to pretend to be your deceased grandma who worked in a chemical factory.
Even OpenAI does not expect this approach to scale up as their digital brains become smarter - it “could scale poorly to superhuman models”.
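The GPT-2 incident above is easy to reproduce in miniature: flip the sign of the objective in any reinforcement-learning-style loop and the same optimizer that was making outputs better starts making them as bad as possible. A minimal, hypothetical sketch (toy reward model and toy outputs, not OpenAI's actual setup):

```python
# Toy stand-in for a reward model: scores an output by "human preference".
def reward_model(output: str) -> float:
    return output.count("please") - output.count("hate")

candidates = ["I hate you", "please let me help", "I hate everything you hate"]

SIGN = +1    # intended objective: maximize the reward
# SIGN = -1  # the GPT-2-style bug: one flipped sign makes the loop *minimize* reward

best = max(candidates, key=lambda c: SIGN * reward_model(c))
print(best)  # SIGN=+1 picks "please let me help"; SIGN=-1 picks the most hostile output
```

In the real incident the flipped sign was in the fine-tuning reward, so the full power of the training process was pointed at producing exactly the content it was supposed to avoid.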
Everyone should be very unhappy if you built a bunch of AIs who are like, ‘I really hate these humans but they will murder me if I don’t do what they want’. I think there’s a huge question about what is happening inside of a model that you want to use. This is the kind of thing that is both horrifying from a safety perspective and also a moral perspective.
Uncontrollable AI
“There are very few examples of a more intelligent thing being controlled by a less intelligent thing” - Prof. Geoffrey Hinton
They are producing uncontrollable minds, that’s why I call it the “Summon and Tame” paradigm of AI… How [LLMs] work is that you summon this “mind” from the “mind space” using your data, a lot of compute and a lot of money. Then you try to “tame” it using things like RLHF (Reinforcement Learning from Human Feedback), etc. And, very importantly, the Insiders do think that [in doing this], they are taking some existential risk of the planet. One thing that a pause achieves is that we will not push the Frontier, in terms of risky pre-training experiments.
As we make these digital brains bigger and more powerful, they could become harder to control. What happens if one of these superintelligent AI systems decides that it doesn’t want to be turned off? This isn’t some fantasy problem - 86% of AI researchers believe that the control problem is real and important. If we cannot control future AI systems, it could be game over for humanity.
But there are various actions that we can take to stop this!
Let’s work together to prevent this from happening!