What, exactly, does an AGI investigator do?

Stick 'em up robot, there’s nowhere to run and we got you dead to rights.

I’m the so-called “Lead Investigator” at an independent, open-science, public-interest research group called the Center for AGI Investigations. The reason I say “so-called” and put Lead Investigator in quotation marks is that it isn’t a normal job.

I’m a bit worried that it sounds like I’m planning to travel the world, storming into technology offices and research laboratories shouting, “Wait! I’ll be the judge of that!” You know, the kind of job you wear a formal suit and cape for. 

That’s not what I do. I’m trying to ensure that nobody convinces the general public that a machine has reached “human-level” intelligence without showing their work.

I am not a graduate of the fictional Academy of AGI Inspectors. And I’m definitely not developing an AGI system myself (or working with anyone who is). But I am qualified to do what the Center does because I meet the bare minimum requirements prescribed by our core values:

  1. Integrity

  2. Curiosity

  3. Humanity

Our goal is to develop a scientific method for investigating claims related to the emergence of AGI. Not for us, not for Big Tech, but for everyone. That’s why we’re an independent, open-science group working in the public interest.

Figuring out how to falsify intelligence claims is not a simple task. If I’m being completely honest (core value number one, above), I’m not absolutely certain it’s possible.

But, then again, how could I be? Let’s look at this from a different perspective. 

The Center for Ghost Investigations

If I were the Lead Investigator at the fictional Center for Ghost Investigations, my job would be quite similar to what it is at the not-fictional Center for AGI Investigations. 

Comparison: 

  1. I don’t know if the current state-of-the-art efforts toward “human-level” artificial intelligence will work. 

  2. I don’t know if people can become ghosts after they pass. 

  3. I don’t believe anyone has found a way to use the scientific method to detect the presence of a ghost or confirm the general existence of ghosts.

  4. I don’t believe anyone has demonstrated a way to use the scientific method to detect and confirm “human-level” artificial intelligence. 

No amount of money will convince me that ghosts are real or that an LLM has reached human-level sentience just because it, an eyewitness, or its developers say so. 

No combination of words in any language could convince me that either of these things exists. I’m a natural-born skeptic. I don’t even believe what my own eyes and ears see and hear; I put my trust in science. 

But I am infinitely curious. That’s part of my humanity. I just have to know for sure. And, if I can’t know for sure, I need to know what I can’t know so that I can figure out what I can. Then I want to know that. 

So, here’s what I would do as Lead Investigator for the aforementioned fictional Center for Ghost Investigations:

  1. Review the literature and confirm my notions concerning the difficulty of using the scientific method to demonstrate the existence or presence of ghosts.

  2. Order pizza and watch Harry Houdini documentaries for a few hours.

  3. Begin research toward a holistic methodology for debunking claims related to the emergence of ghosts.

  4. Keep debunking until we come across something we can’t debunk, then start over at square one.

This job would be way easier than my real job as the Lead Investigator for the Center for AGI Investigations.

There are many similarities between the two, but the differences are glaring:

  1. Ghosts and AGI both, theoretically, operate in what could be defined as a different plane of existence. Ghosts, purportedly, are spectral entities that coexist in our reality and their own. Likewise, an AGI would purportedly emerge from a black box of its own data-processing machinations. 

  2. IF ghosts are real, AND all the spooky stuff associated with their presence is really the work of actual ghosts, then we could know for certain that ghosts have agency in what we perceive as base reality.

  3. But, because all the functional activity associated with the emergence of an AGI takes place inside its own black box, an AI doesn’t demonstrably have agency in what we perceive as base reality. This is called the embodiment problem.

The Center for AGI Investigations 

Oh, if only we were the fictional Center that investigates ghosts. The work would be much simpler. But we’re not, because, among other reasons, we don’t currently have any reason to believe we can contribute to current efforts in the field of ghost detection.

I don’t have a theory about how ghosts might emerge. But, categorically, all intelligent humans are human-level intelligences. My fellow people, that’s what you call a control group.

With that, I believe I can do science. 

Our work is practical. The reason we investigate AGI isn’t that we think we’re the only ones who can tell when a “human-level” artificial intelligence has emerged. I don’t think anybody knows how to do that, at least not with the scientific method. 

That’s why my job focuses on finding ways to develop an empirical methodology for determining that something is NOT a human-level intelligence. 

Here’s my thinking:

  1. AI either has or has not reached “human-level” intelligence.

  2. If it has, I will not be able to demonstrate a fail-state for human intelligence through practical testing (a test our human control group passes but the AI fails).

  3. If I can demonstrate a fail-state for human intelligence through practical testing, AI has not reached “human-level” intelligence. (A sketch of this logic follows below.)
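
To make that contrapositive concrete, here is a minimal sketch in Python. Everything in it is hypothetical and for illustration only; the function and test names are mine, not an existing test suite. The point is the asymmetry: a single demonstrated fail-state falsifies a “human-level” claim, while passing every test confirms nothing.

    # Hypothetical sketch of the falsification logic above.
    # Each practical test is a task our human control group reliably passes.
    def investigate(candidate, practical_tests):
        for test in practical_tests:
            if not test(candidate):  # the candidate demonstrates a fail-state
                return "Falsified: not a human-level intelligence."
        # Exhausting the tests proves nothing; we go back to square one
        # and design better tests.
        return "Not falsified (which is not the same thing as confirmed)."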

Where things get tricky is in the embodiment and agency aspects of “practical testing.” Because, while I can argue that humans also live inside black boxes (what is a mind if not a prison of its own dimension?), I can’t quite figure out how to connect the AI’s “mind prison” to our reality. 

  1. I don’t believe that the text/images/audio an LLM generates are a direct representation of whatever would be analogous to its “thoughts.” I’m unconvinced that the machines are communicating with the outside world when they process and respond to prompts.

  2. I think, when we read what an AI has output, we’re getting a third-party interpretation of what’s actually happening inside the black box (see: Searle’s Chinese room argument), sort of like asking a psychic to tell us whether Schrödinger’s cat is alive or dead. Why can’t we hear any meows? How thick is this box?

  3. Any “emergent intelligence” inside a black box would, in my hypothesis, be incapable of knowing that its creators exist. 

  4. Such an artificial intelligence would, theoretically, have emerged from binary processes. 

  5. The evidence seems to suggest that naturally occurring human intelligence emerges from quantum processes.

  6. Thus, I believe, a binary-based “human-level” artificial intelligence’s creators would be incapable of directly interfacing with it. Our prompts would just be more ones and zeros, ad nauseam, ad infinitum. The faces and voices of this machine’s creators would be indistinguishable from the rest of its base reality, as the illustration below shows.
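
To illustrate that last point with nothing more exotic than standard text encoding (the prompt string is invented, and this is a metaphor for the substrate, not a claim about how any particular model tokenizes input): by the time a creator’s words reach the machine, they are just another stream of ones and zeros.

    # Illustration only: a message from the machine's creators, as the
    # binary substrate "sees" it. UTF-8 bytes rendered as bits.
    prompt = "Hello. We are your creators."
    bits = " ".join(f"{byte:08b}" for byte in prompt.encode("utf-8"))
    print(bits)  # 01001000 01100101 01101100 01101100 01101111 ...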

If you’ve read this far, I can imagine you have your own thoughts on what that would mean for humankind. As the Lead Investigator for the Center for AGI Investigations, I’d like to hear them. Because we think it’s pretty important to the public interest that we all get this stuff right.

Contact

Art by Nicole Greene
