Tony Blair’s think tank wants to give UK IP to US AI and LLMs ace UCSD’s Turing Test
Imitation isn’t just flattery anymore. Now it’s the mark of intelligence.
Researchers at the University of California, San Diego have declared the Turing Test defeated! In five-minute, real-time conversations with both humans and chatbots, nearly three out of four human judges mistook the AI chat partner for a fellow human.
What does this mean for humanity? Are ChatGPT and Claude finally as intelligent as a human being?
No. Stop being ridiculous.
Let’s all stop hyperventilating about the Turing Test for a moment and pivot to some other news. We’ll get back to ChatGPT and Claude below.
Meanwhile, Tony Blair’s think tank, the aptly named “Tony Blair Institute” (TBI), recently announced its position that the United Kingdom should allow AI firms and organizations to train on copyrighted data.
Per the group:
“AI outputs should not be allowed to reproduce original works without proper licence and remuneration. But prohibiting AI models from training on publicly available data would be misguided and impractical. The free flow of information has been a key principle of the open web since its inception. To argue that commercial AI models cannot learn from open content on the web would be close to arguing that knowledge workers cannot profit from insights they get when reading the same content.”
It sounds a lot like TBI is arguing that anything posted to the web should be treated as open information. The group frames the argument around copyrighted material as if we were talking about OpenAI using public-domain Sherlock Holmes novels to teach ChatGPT how to write like a human.
But the reality is that no government on the planet is trying to stop AI devs from feeding non-copyrighted works to their models. And it’s a strawman to claim that training an AI on the Beatles’ music is no different from a human songwriter being inspired by their work.
The better analogy here might be to say, for example, that training an AI on the Beatles’ music without permission is like tricking Paul McCartney into signing a contract by asking him for an autograph on a sheet of paper.
So, how do you compensate artists for work that was “confiscated” in a wide-net sweep of the “open web”? The short answer is that you don’t. Because you can’t.
Royalties are out of the question. The term “black box AI” exists to describe this exact conundrum: once the data goes in, nobody can fully account for what the model learned from it.
Let’s imagine there are 10 billion files in a dataset. It would be impossible for humans to check each file, because we only live about 2.5 billion seconds. So, devs just jam it all into the machine and start training. They don’t know what’s in their datasets. The back-of-envelope math below makes the point.
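Here’s a quick sketch of that arithmetic in Python. The 10-billion-file figure is the hypothetical from above, and the one-second-per-file review rate is our own assumption, chosen to be absurdly generous. No real training run is being described here.

```python
# Back-of-envelope sketch of the scale problem described above.
# The dataset size and per-file review time are illustrative assumptions,
# not figures from any real training run.

SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60          # ~31.6 million seconds
HUMAN_LIFESPAN_SECONDS = 80 * SECONDS_PER_YEAR    # ~2.5 billion, as noted above

DATASET_FILES = 10_000_000_000                    # hypothetical 10 billion files
REVIEW_SECONDS_PER_FILE = 1                       # generous: one second per file

total_review_seconds = DATASET_FILES * REVIEW_SECONDS_PER_FILE
years_needed = total_review_seconds / SECONDS_PER_YEAR
lifetimes_needed = total_review_seconds / HUMAN_LIFESPAN_SECONDS

print(f"Nonstop review time: {years_needed:,.0f} years")       # ~317 years
print(f"Human lifetimes consumed: {lifetimes_needed:.1f}")     # ~4 lifetimes
```

Even at a single second per file, with no sleep and no breaks, that’s more than 300 years of review, roughly four entire human lifetimes.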
And, once the model is trained, what are we going to do? Ask ChatGPT for a list of all 10 billion files? Ask it to name every song it was trained on? We could. But how could we be sure it didn’t hallucinate, lie, or otherwise output nonsense that merely looks plausible?
Ultimately, it doesn’t matter whether a developer meant to use copyrighted material in training or not. By the time anyone could prove damages, the damage has already been done, and there’s no undoing it.
Read more: AGI firms ‘cannot afford to admit they made a mistake’ — Stuart Russell
Back to Cali
Recapping from above: UCSD researchers say ChatGPT and Claude have passed the Turing Test.
After reviewing the data, studying the facts, and giving it a lot of thought, we’ve decided to issue the following statement: so what?
I once saw a kid lose their mind when a Teddy Ruxpin doll started talking. And don’t forget Blake Lemoine and Google’s LaMDA chatbot.
Humans are super gullible. If they weren’t, more people would know who Max Planck is than David Copperfield. Luckily, science isn’t conducted via the same democratic voting processes that govern who wins American Idol. Even if chatbots trick 100% of humans into thinking they’re talking to a human, it still doesn’t change the fact that they’re not.
It’s interesting to note that chatbots can now convince the average person they’re human, under the right conditions, rules, and settings. And nobody should be surprised as the machines get even better at it. AI developers aren’t literally throwing $7 trillion down the drain; we’re seeing what transformers can do at scale, and it is indeed impressive.
But it’s time we all applied some critical thinking to the problem and stopped testing for human intelligence via a hypothesis developed by a man nearly a century removed from its potential emergence.
If Turing had lived in modern times, it’s highly doubtful he would be gobsmacked by ChatGPT’s ability to imitate human conversation. Instead, like most people who understand how chatbots work, he’d likely recognize that imitating human intelligence and actually developing it are two different things.
Because, while they didn’t have ChatGPT back in his day, they did have radios. And, presumably, Professor Turing never mistook the voice coming out of the speakers for the wires reproducing it.
Critical thinking
What could a chatbot say to convince you that it was alive, capable of human-level intelligence or, at a minimum, able to think and reason?
That’s the question we keep asking ourselves as we parse the daily news, read the latest research papers, and digest the newest blog and social media posts from AI developers around the world.
But that’s not the right question. The right question is, what could a chatbot say that would convince everyone else it was capable of human-level thought?
Learn more: Center for AGI Investigations: Defining Human-Level AI
Art by Nicole Greene