AGI firms ‘cannot afford to admit they made a mistake’ — Stuart Russell
Have we reached the point of no return for AI investment?
There’s a term for the exact moment an aircraft has flown too far from its point of origin to return without refueling: Bingo Fuel.
If you’re at Bingo Fuel, your only options are to continue to your destination or land. There’s no turning back.
Recent comments from world-renowned British computer scientist Stuart Russell indicate that the AI developers and organizations dedicated to scaling large language models to “human-level” intelligence may have reached that point, financially speaking.
Speaking to LiveScience, Russell said:
“I think it's been apparent since soon after the release of GPT-4, the gains from scaling have been incremental and expensive. [AI companies] have invested too much already and cannot afford to admit they made a mistake [and] be out of the market for several years when they have to repay the investors who have put in hundreds of billions of dollars. So all they can do is double down.”
The Double Down
In the global race to develop a “human-level” artificial intelligence system, or “AGI” as it is often referred to colloquially, firms such as OpenAI and xAI are adamant that success is right around the corner.
Sam Altman, Elon Musk, and Bill Gates have all intimated or explicitly stated that human-level machines will arrive within the next few years, that massive disruption to the employment market will follow within the next 10, and that the trillions of dollars in infrastructure and hardware built to support this endeavor will ultimately have been worth it.
But those are just opinions. There is no scientific evidence supporting the notion that AGI or human-level AI agents can be developed with current techniques.
Right now, the real question is whether or not LLMs can scale to something approximating human intelligence. And, according to a survey conducted by the Association for the Advancement of Artificial Intelligence, about 3 out of 4 of the leading experts surveyed don’t think that’s going to happen.
And that brings us to the ultimate question: what’s the worst that could happen if the companies working to turn LLMs into AGIs realize they’ve failed but refuse to admit it?
Read more: Center for AGI Investigations: Defining Human-Level AI
Art by Nicole Greene