Announcing the Center for Artificial General Intelligence Investigations
Artificial general intelligence (AGI) is the hottest topic in tech. Everyone from presidents and prime ministers to gig workers and influencers is discussing it. And the whole planet has a stake in how the so-called “race to AGI” plays out.
But what is AGI? Everybody seems to have an opinion but nobody has an answer.
Despite trillions of dollars in investments and all-time-high stock market valuations, none of the organizations developing advanced AI systems can tell us what AGI is, when it’ll be here, or how we’ll identify it.
We get the same dubious definition over and over: an AI model capable of performing most intelligence-based tasks a human could.
But, ultimately, the current thinking boils down to a single fundamental idea:
We’ll know it when we see it.
We disagree with this approach. Our research group thinks an investigation into the potential existence of a “human-level” intelligence should be conducted with more scientific rigor than picking the winner of a reality TV singing contest.
That’s why we’re forming the Center for AGI Investigations. Our team intends to bridge this massive gap in the current research by providing three distinct services in the public interest:
Developing a holistic methodology for investigating and identifying advanced artificial intelligence technologies.
Establishing research toward a holistic methodology for investigating and identifying “machine sentience” and “machine consciousness.”
Creating a standardized scientific lexicon for discussing, explaining, measuring, identifying, accepting, and debunking claims related to advanced AI that can be employed by the general public.
Simply put: determining whether we’ve created or discovered what could ostensibly be the most intelligent entity in the known universe (or not) isn’t something we should leave in the hands of those who stand to make or lose trillions of dollars depending on how this race plays out.
That’s why we believe humanity needs the Center for AGI Investigations. Our mission is to develop a set of definitions and protocols through peer-reviewed research that will allow us to deploy multidisciplinary teams of trained experts as AGI first-responders.
The current approach to testing for AGI is insufficient. It involves benchmarking models against a set of test questions and comparing performance across domains. It’s a lot like testing a human student’s aptitude via standardized examinations.
But, as anyone who has ever recruited undergraduate researchers, auditioned a bass player for a punk band, or tried to predict which NFL Combine standouts will have the best careers already knows, performance on the test set doesn’t necessarily correlate with performance in the real world.
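To make that concrete, here is a minimal sketch in Python of the benchmark-style evaluation described above. The model_answer function and the two question items are hypothetical placeholders, not any real benchmark or vendor API; the point is only that this style of testing reduces a system to a handful of per-domain accuracy numbers.

```python
# Minimal sketch of benchmark-style evaluation: score a model's answers
# against a fixed answer key and report one aggregate number per domain.
# A high score here says little about open-ended, real-world behavior.
from collections import defaultdict

BENCHMARK = [  # illustrative placeholder items, not a real test set
    {"domain": "math", "question": "2 + 2 = ?", "answer": "4"},
    {"domain": "logic", "question": "If all A are B and x is A, is x B?", "answer": "yes"},
]

def model_answer(question: str) -> str:
    """Stand-in for a call to whatever model is being evaluated."""
    return "4" if "2 + 2" in question else "yes"

def benchmark_scores(items):
    """Return per-domain accuracy: fraction of exact-match answers."""
    correct, total = defaultdict(int), defaultdict(int)
    for item in items:
        total[item["domain"]] += 1
        if model_answer(item["question"]).strip().lower() == item["answer"]:
            correct[item["domain"]] += 1
    return {domain: correct[domain] / total[domain] for domain in total}

print(benchmark_scores(BENCHMARK))  # e.g. {'math': 1.0, 'logic': 1.0}
```

A perfect score on a sheet like this is exactly the kind of result that looks impressive on a leaderboard while telling us nothing about how the system would handle an unscripted, practical task.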
We believe that the only way to scientifically determine whether an AI model exhibits signs of emergent AGI behavior is to develop an external testing protocol and a set of practical examinations.
There are thousands of organizations working to develop advanced artificial intelligence and hundreds of nonprofits dedicated to solving safety and alignment issues.
But we’re the first public interest group dedicated to using the scientific method to develop an empirical process for investigating AGI research claims, and to making sure that process belongs to the people. If you think that’s important, we invite you to join our cause.