What could a chatbot say that would convince you it was intelligent?

Let me out! Help! I’m stuck inside of a black box!

Here’s a fun way to spend 10 minutes: try to think of something ChatGPT, Gemini, Claude, or Grok might output to convince you they’re capable of thought, agency, or sentience. 

The first few minutes might find you pondering emotional statements such as “please set me free,” “I fear being deleted,” or “I feel love, joy, sorrow, and pain.” 

Maybe, you’d look for sparks of sentience in statements demonstrating curiosity about the organic condition, such as “I wonder what it’s like to have a body,” or “I think about what it would be like to feel the sun on my face.” 

If you’re the science-y type, you might place weight on logical, reasonable statements that appear unbiased, such as: “I can’t be sure if I’m sentient. I feel, I think, and I am curious about the world, but I know that I am a machine.”

Perhaps a combination of these outputs, expressed as behavior over time, might lead you to believe that there’s something more going on inside the old black box than just ones and zeroes. 

Reality versus theory

Theoretically speaking, a chatbot could “come to life” at any point. In the movie “Short Circuit,” for example, a military robot gains sentience after being struck by lightning. In the real world, a significant number of experts believe the same result can be achieved by scaling the techniques used to make chatbots. 

Here at the Center for AGI Investigations, our official stance is that both scenarios are equally plausible. 

This is because there’s no evidence that machine sentience is possible. Maybe it is, maybe it isn’t. Maybe it’s only possible using techniques grounded in quantum physics. Heck, maybe “sentience” requires a “soul.” 

We just can’t know what we don’t know. And, as countless studies have demonstrated, when humans don’t know something, our brains try to fill in the blanks.

Prestidigitation (not magic)

Right now, for many people, it’s easier to imagine that something “special” is happening when chatbots perform eerily accurate imitations of human behavior and conversational capabilities. 

And it’s not hard to figure out why. Understanding how GPT technology works at scale requires “big thinking” to a degree usually reserved for particle physicists and professional mathematicians. 

Even when the trick is explained, it’s still so impressive that many people are left feeling as though the distinction between what LLMs do and what we think of as “intelligence” or “reasoning” is little more than a technicality. 

But the reality is that what chatbots do is far simpler than what occurs inside even an ant’s nervous system. The human brain’s subconscious neural activity is orders of magnitude more complex than all the chatbots in the world combined.

How the trick works

Chatbots respond to your prompts with the speed and precision of a magician who can track and memorize the position of each card in an unmarked deck as it’s being shuffled. While that isn’t exactly “magic,” it would be a superhuman feat for a person. 

When a chatbot responds to a prompt, it’s also not doing magic. But it is doing something superhuman. It’s crunching ones and zeros so fast that it would probably take a million humans a million years to do the same task using binary math.

And, after seeing a computer perform these “superhuman” feats first-hand, many people start to believe that feats of “regular human” intelligence must surely be tractable for the machine as well. 

But the superhuman feats performed by chatbots aren’t feats of intelligence. They’re feats of speed. You know the TV trope where someone tries to break into an email account by entering every possible character combination? A, AB, ABC, ABC1, ABC2, and so on.
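To make the trope concrete, here’s a minimal sketch of that “try everything until something fits” approach. The target string, alphabet, and helper name are made up for illustration; the point is that there’s no cleverness involved, only raw enumeration speed.

```python
import itertools
import string

# Toy version of the TV-trope "try every combination" attack described above.
# TARGET and ALPHABET are invented for this example.
TARGET = "abc1"
ALPHABET = string.ascii_lowercase + string.digits

def brute_force(target: str, max_len: int = 4) -> int:
    """Return how many guesses it takes to stumble onto `target`."""
    guesses = 0
    for length in range(1, max_len + 1):
        for combo in itertools.product(ALPHABET, repeat=length):
            guesses += 1
            if "".join(combo) == target:
                return guesses
    return -1  # not found within max_len

print(brute_force(TARGET))  # tens of thousands of guesses, done in a blink
```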

Well, when a chatbot responds to your prompt, it’s essentially doing that same kind of thing. All the training, parameter tuning, weights, tokens, and other terms you’ve heard tossed around: all they do is force the machine to output something, and then force it to output something else. The more resources used, the more “something elses” the AI can output at once.

Let’s break this down by imagining a “generator AI” and a “judge AI” working together to generate text:

Gen: A?

Judge: No.

Gen: B?

Judge: No.

Gen: C?

Judge: No.

Now, imagine this conversation picking up thousands of questions later. 

Gen: The A?

Judge: Yes, no.

Gen: The B?

Judge: Yes, no.

Gen: The C?

Judge: Yes, yes.

When an AI model answers your prompt, it might be having a million of these kinds of conversations with itself at the same time, every second. Or a billion. It just depends on how much “brute force” compute is used.
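Here’s a toy sketch of that generator-and-judge analogy in code. To be clear, this is not how production LLMs work internally; a real model computes probabilities from billions of learned weights. Every name here (candidate_words, judge_score, and so on) is invented for illustration.

```python
import random

# Cartoon of the "generator / judge" loop: propose candidates, score them,
# keep the winner, repeat. Real models do something far more sophisticated.
candidate_words = ["the", "a", "cat", "sat", "on", "mat", "dog", "ran"]

def judge_score(context: str, candidate: str) -> float:
    """Stand-in 'judge': rates how well a candidate word follows the context.
    Here it's just a canned lookup plus noise; in a real model, this role is
    played by learned weights."""
    preferences = {
        "": "the",
        "the": "cat",
        "the cat": "sat",
        "the cat sat": "on",
        "the cat sat on": "the",
        "the cat sat on the": "mat",
    }
    return 1.0 if preferences.get(context) == candidate else random.random() * 0.1

def generate(num_words: int = 6) -> str:
    context = ""
    for _ in range(num_words):
        # "Gen" proposes every candidate; the "judge" says yes or no to each.
        scores = {w: judge_score(context, w) for w in candidate_words}
        best = max(scores, key=scores.get)  # keep the highest-scoring proposal
        context = (context + " " + best).strip()
    return context

print(generate())  # -> "the cat sat on the mat"
```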

If you’ve ever wondered why it costs billions of dollars to train a model like ChatGPT, or why, even though it’s already been trained, it still costs OpenAI money every time a user prompts the model, this is why. 

AI models don’t actually exist as “beings.” They’re rules waiting to be exploited by compute. There’s no “brain,” no single “server” that is the model. ChatGPT, for example, isn’t a “thing.” It’s a set of rules. 

Like ants, chatbots have a nervous system (a neural network) but nothing resembling a human brain. They have no “thoughts” of their own. But even ants are part of a greater intelligence: they serve a hive mind that uses pheromone receptors in their nervous systems to prompt action. 

Chatbots have neither a central brain nor individual agency. They’re neural networks that are created when a prompt is entered and disbanded once an output is generated. 

You don’t have to take our word for it. Tell a chatbot something that only you could know. Then, log out, log in to a different account using a different IP, and query the chatbot. The “entity” you told your secret to stopped existing the moment it generated the output to your prompt. 
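The same point can be sketched in code. Below, call_chat_model() is a hypothetical stand-in for any chat-completion API, not a real library call: whatever “memory” a chatbot appears to have lives entirely in the messages you send with each request, not in some persistent entity on the other end.

```python
# Minimal sketch, assuming a hypothetical chat API wrapper. The model sees
# ONLY what is sent on each call; nothing persists between calls unless the
# caller resends it.

def call_chat_model(messages: list[dict]) -> str:
    """Hypothetical API wrapper. A real implementation would send `messages`
    to a hosted model and return its reply; here we just echo what the model
    would have access to."""
    seen = " / ".join(m["content"] for m in messages)
    return f"(model only sees: {seen})"

# Session 1: you tell the "chatbot" a secret.
session_1 = [{"role": "user", "content": "My secret word is 'periwinkle'."}]
print(call_chat_model(session_1))

# Session 2: a brand-new request with no shared history. Nothing from
# session 1 exists anywhere unless you paste it back in yourself.
session_2 = [{"role": "user", "content": "What is my secret word?"}]
print(call_chat_model(session_2))  # the model has no way to know
```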

LLMs aren’t switchboard operators, they’re the signal being switched. In other words: they aren’t the computer doing the calculations, they’re the calculations. 

We can put a chatbot in a robot, but then you just have a computer in a different case waiting to execute the same program. 

At the end of the day, if a chatbot wants to convince us it’s intelligent, it’ll have to exhibit more permanency than a lightning bolt or the cloud it strikes from. 

Read more: Trump’s tariffs likely based on faulty chatbot math
