‘Experts are far more positive about AI than the public’ — Pew Research
Maybe it’s time we found some new experts.
The average American doesn’t believe that AI will make their lives better any time soon, according to a recently published survey from Pew Research.
What’s this strange emotion? Is this what hope for the future feels like?
I’m afraid I’m proud of Americans
To say that the latest Pew Research report on AI demonstrates a gap between what the insiders are saying and what the public is feeling would be a massive understatement.
From the report:
“Public optimism is low regarding AI’s impact on work. While 73% of AI experts surveyed say AI will have a very or somewhat positive impact on how people do their jobs over the next 20 years, that share drops to 23% among U.S. adults.”
In our humble opinion, this divide separates the people who stand to benefit if AI products and services generate money from the people whose lives will only improve if using those tools and services actually helps them.
It ain’t rocket science, it’s a chatbot
While artificial intelligence and machine learning have revolutionized the world in much the same way that fire and electricity have (cliché, but true), there still isn’t a viable use case for consumer-facing generative AI beyond entertainment purposes.
Without the arrival of “AGI” capable of performing demonstrably useful tasks on behalf of users, the wow-factor surrounding chatbots is going to wear off. Especially if people like us keep explaining how the trick works.
And, while the ability to generate Hollywood-level films, Nashville-quality music, and nearly-human-level art at the push of a button might seem useful at first, it’s hard to imagine a market for such creations in a world where everyone has their own button to push.
Learn more: What is artificial general intelligence? — Center for AGI Investigations
Spooky press is better than no press
At the end of the day, there are countless organizations whose bottom lines rely on the emergence of advanced artificial intelligence. And, since there’s absolutely no scientific method by which to determine whether “human-level” AI or “AGI” has emerged, we’re all beholden to whatever the general consensus ends up being.
Because of this, it makes a lot more sense (and cents) for “experts” to paint AI as a problem with a long tail. Whether it’s going to damn us or save us, everyone with skin in the game needs the whole world to agree that AGI is coming soon.
As Kevin Roose at The New York Times wrote today, the AI Futures Project, led by former OpenAI researcher Daniel Kokotajlo, recently predicted that AI will surpass human intelligence before the end of 2027.
Essentially, they spent a year thinking about what could happen and then hired someone to turn their predictions into a narrative. As best we can tell, nowhere in this narrative did they give a scientific definition of “human intelligence” or define a threshold at which an AI model would surpass it.
Apparently it’s a lot easier to predict the future than to define the present.
Learn more: What, exactly, does an AGI investigator do? — Center for AGI Investigations
Art by Nicole Greene