DeepMind withholds research, the New Yorker jumps aboard the AGI bandwagon, and OpenAI gets more money
And more money and more money and more money…
Remember when DeepMind was the belle of the computer science ball? Its UK-based sensibilities and focus on important research into academic pursuits such as chess and Go always made it seem somewhat posh and refined.
Then Google bought it. Now, DeepMind is purportedly making it harder for its researchers to publish work in order to maintain its so-called competitive advantage. More on that below.
Meanwhile, the New Yorker appears to have secured a front-row seat on the AGI bandwagon. Staff writer Joshua Rothman wonders if we’re not taking AI seriously enough after ChatGPT helped him understand the nefarious world of real estate.
From the article:
“There are still questions about how far the technology can go, and about whether, conceptually speaking, it’s really “thinking,” or being creative, or what have you. Still, in one’s mental model of the next decade or two, it’s important to see that there is no longer any scenario in which A.I. fades into irrelevance. The question is really about degrees of technological acceleration.”
The Anecdotal Experience
Rothman isn’t alone in expressing his professional opinion that AGI is evidently imminent after experiencing a personal epiphany. Just a few weeks ago, Kevin Roose joined the ranks of proud believers when he published an article explaining how his experiences “vibe coding” with a chatbot led him to realize AGI is almost here.
These stories, compelling as they are, always seem to have been written by someone who feels like the “chosen one.” They’re the kid E.T. reveals itself to, the one who gets pulled into The NeverEnding Story, the one who gets Gizmo the Mogwai as a birthday present.
They’re the chosen ones, until they find out they’re actually the “double rainbow” dude. Either way, the stars have aligned for big tech in how the media is treating the idea that chatbots are on the verge of becoming artificial general intelligences.
Whether they are or not, getting the general public to believe it represents a license to print money for the companies leading the charge, and tech journalists are carrying almost all of that water for them.
OpenAI, for example, has received glowing coverage over its recently updated image generator and so-called “reasoning” models. So much so, in fact, that it’s raising money on top of announcements that it's raising money.
After securing a $1 trillion investment promise from SoftBank for its Stargate infrastructure project last week, OpenAI announced yesterday that it was raising $40 billion more, specifically to “develop AGI.”
Per the announcement:
“Today we’re announcing new funding—$40 billion at a $300 billion post-money valuation, which enables us to push the frontiers of AI research even further, scale our compute infrastructure, and deliver increasingly powerful tools for the 500 million people who use ChatGPT every week.”
DeepMind goes even more corporate
We’re not here to criticize anyone’s methodology, but it should come as no surprise that we don’t approve when laboratories conducting research with ramifications for the entire human species refuse to share it. We’re an open research group working in the public interest.
Hiding research runs contrary to our values.
It’s especially sour when a lab won’t share its work because it wants to maintain some perceived competitive corporate advantage.
According to a report from the Financial Times, DeepMind has radically altered its research policies in ways purposely designed to hinder its researchers’ ability to publish and share their work.
Setting aside, for the moment, the ongoing rift between academia and corporate labs, there’s also the problem that all of these big tech firms are conducting research without any real peer review.
Essentially, DeepMind and firms like it treat their research like a commodity: withheld when they feel like it, and shared when appearing transparent or partnering with outsiders works to their benefit.
This kind of exploitation almost certainly contributes to the rise of terms such as “thinking,” “AGI,” and “human-level” to describe predictable chatbot behavior. When the system is so gamed that even preprint papers need to be “SEO friendly” in order to attract attention, reality could end up being whatever we’re told it is.
Meanwhile, figuring out where the “race to AGI” needle is actually pointing is an exercise in reading between the lines when big tech firms post blog updates discussing their research.
Read more: What, exactly, does an AGI investigator do? — Center for AGI Investigations
Art by Nicole Greene