What is artificial general intelligence?
This article discusses the following:
What artificial general intelligence (AGI) is
Why it remains theoretical
Why it’s so challenging for researchers and developers to define
What we’re doing differently
No article about AGI would be complete without the obligatory journalistic disclaimer indicating that nobody really knows what AGI is or isn’t. The currently accepted definition seems to be: “a machine capable of performing any intelligence-based task a human could, given sufficient access.”
But that definition has absolutely no scientific value. Take, for example, the following definitions:
Ghost: An undead, spectral entity that haunts the living
Extraterrestrial organic intelligence: A sentient lifeform native to somewhere other than Earth
Free lunch: Energy with no strings attached
These literary definitions do not help us determine whether the subjects they describe exist, can exist, or will exist.
And, just like ghosts, E.T., and perpetual motion machines, artificial general intelligence (AGI) is purely theoretical. That doesn’t mean it’s not possible or that we don’t believe in it. It just means that there isn’t any scientific evidence to support the notion that a computer can exhibit the same level of intelligence as a human.
The reality is that laboratories such as the ones at Google DeepMind, OpenAI, and Anthropic aren’t in the business of developing AGI. They’re attempting to create “human-level” artificial intelligence by scaling the computer science techniques that led to the current generation of generative AI models.
If it works, whichever company gets credit will have a license to print money. And for good reason: the advent of truly “human-level” AI could launch a golden era for all of humanity. It could also represent an existential threat to our species. Either way, nothing would ever be the same again.
If it doesn’t work, and machines never reach human-level intelligence, then whatever we end up with will have to be good enough to justify all the money and resources these groups have sunk into creating AGI.
Thus, the only definition of AGI that actually matters is: the point at which the public can be convinced that machines are as intelligent as humans are.
We believe that defining AGI shouldn’t be a matter of opinion. When someone says that conversing with a particular chatbot “feels” like talking to a human, and thus they “believe” AGI is right around the corner, we’re skeptical.
When developers claim to have created models that can “reason” or “think,” but they can’t explain exactly what that means, they’re not announcing a technological breakthrough. They’re expressing an opinion or sharing a theory.
It’s important to have theories and opinions if you’re a scientist. But extraordinary claims require extraordinary evidence. And, so far, none of the current claims of “human-level” cognition among AI models have withstood scientific scrutiny.
As the Center for AGI Investigations, we intend to investigate claims surrounding the emergence of AGI with absolute scientific rigor. We’re not looking for so-called sparks of intelligence; we’re hoping to one day observe the underlying physics of cognition and reasoning.
Before we can get there, however, we have to pull out our industrial-sized Occam’s Razor and use it to develop a no-nonsense methodology for dealing with claims related to the emergence of AGI.
This isn’t a simple undertaking. Because AI models tend to operate in what’s called a “black box,” we can’t see what’s happening as models become more capable.
And, since we can’t observe the processes by which a typical AI model becomes an AGI model, we need at least to determine a demonstrable threshold for that state change.
Basically, today’s claims surrounding the emergence of AGI are analogous to predicting the state of Schrödinger’s cat without opening the box.
Worse, when it comes to AGI, we can’t even be sure if there was ever a “cat” to begin with. Despite so-called progress toward it, AGI remains theoretical. We may all be standing around staring at an empty box.
That’s why we strongly believe the first step to addressing this challenge is to teach Schrödinger’s cat how to meow.
That might sound silly, but our thinking is this: if we can give black box AI models the minimum level of real-world agency that humans have, then we can develop practical tests for intelligence.
To that end, we’re reviewing the current literature and designing experiments of our own to contribute.
We believe the next step is to develop methods by which we can empirically demonstrate when a given AI model is not capable of performing “most intelligence-based tasks that a human could.” From there, our research direction ultimately involves investigating the physical properties of intelligence.
We’ll need research partners at physics laboratories to do that.
We’re not here to benchmark models or create our own goal posts. We don’t care who “wins the race to AGI” so long as humanity doesn’t lose.
Our team is developing a practical paradigm to test for AGI that can be used to externally assess any model’s capabilities by members of the general public. All of our work will belong to the public.
Specifically, our current research explores methods to falsify claims related to the emergence of advanced AI. We’re building practical tests and developing empirical methods for determining the exact threshold at which an AI system can operationally be called an “AGI” system.
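To make the falsification logic concrete: the idea is that a capability claim survives only as long as no objectively checkable task refutes it. The following is purely an illustrative sketch, not our actual protocol; the `FalsificationTest` type and the example tasks are hypothetical stand-ins for real-world tests with objective pass/fail criteria.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class FalsificationTest:
    """A hypothetical task with an objectively checkable pass condition."""
    name: str
    run: Callable[[], bool]  # returns True if the model passed the task


def claim_survives(tests: list[FalsificationTest]) -> bool:
    """A claim of human-level capability survives only if no test falsifies it.

    A single failure is enough to answer the YES/NO question with NO.
    """
    return all(test.run() for test in tests)


# Placeholder outcomes standing in for real evaluations:
tests = [
    FalsificationTest("grade-school arithmetic", lambda: True),
    FalsificationTest("novel-task transfer", lambda: False),
]

print(claim_survives(tests))  # one failed test falsifies the claim
```

The asymmetry is the point of the design: passing any number of tests never proves the claim, but failing one disproves it, which is what makes the question empirically decidable.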
At the end of the day, there’s no combination of words a chatbot could string together that would convince us it’s as capable, intelligent, or useful as a human being. Talk is cheap. Investigation requires rigor.
Since we’re an applied science research group, our focus is on doing. And, because we’re an open group, everything we do belongs to the public.
Thus, our current project is to develop a science-based method to answer the “YES/NO” question of whether a given AI model has crossed the threshold into human-level intelligence.
Ultimately, our goal is to give any AI model that thinks it’s our intellectual equal the opportunity to prove it in the real world.
And isn’t that the best any of us can really ask for?
Art by Nicole Greene