
Whenever AI makes you ‘feel’ something, you’re being manipulated
In other words: You’re the middle segment in a human-AI centipede.
Should you be polite to AI? Here’s what the research says
If the basic idea is to get the most utility out of chatbots, it seems counterintuitive to train them to waste tokens and time outputting polite language.
Putting LLMs inside robots won’t solve the embodiment problem
Chatbots don’t actually exist. No, we’re not trying to create a conspiracy theory. What we mean is this: large language models (LLMs) aren’t “entities” or “beings.”
Numbers that lie: AI reaches 95% accuracy on medical diagnostics by reward hacking
A team of researchers at RespAI Lab, KIIT Bhubaneswar, KIMS Bhubaneswar, and Monash University in Australia today published a fascinating preprint that takes a mighty bludgeon to the notion that AI can predict medical diagnoses.
AI isn’t coming for your job, mediocrity is
LLMs are too stupid to be useful for professional work, but that won’t stop content creators from shooting themselves in the foot with them.
What could a chatbot say that would convince you it was intelligent?
Here’s a fun way to spend 10 minutes: try to think of something ChatGPT, Gemini, Claude, or Grok might output to convince you they’re capable of thought, agency, or sentience.
Trump’s tariffs likely based on faulty chatbot math
The American Enterprise Institute, a conservative think tank, recently joined the chorus of economists who believe Donald Trump’s global trade tariffs were calculated incorrectly. According to multiple sources, the formula used contains a math error that inflates the calculated tariff rates by roughly a factor of four.
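For readers who want the arithmetic: the formula in question, as reconstructed by the economists critiquing it (the sketch below follows the public USTR methodology note and is not our own derivation), sets each country’s tariff from the bilateral trade deficit:

\[
\Delta\tau_i = \frac{x_i - m_i}{\varepsilon \cdot \varphi \cdot m_i}, \qquad \varepsilon = 4, \quad \varphi = 0.25
\]

where \(x_i\) and \(m_i\) are U.S. exports to and imports from country \(i\), \(\varepsilon\) is the price elasticity of import demand, and \(\varphi\) is the passthrough from tariffs to import prices. The AEI critique targets \(\varphi\): the research the methodology cites implies a passthrough near 0.945, not 0.25, and substituting that value shrinks every computed tariff by \(0.945 / 0.25 \approx 3.8\), which is where the factor of four comes from.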
Anthropic bends over backwards calling transformer work ‘thought’ in 2 new papers
In two new papers, Anthropic describes a methodology its researchers are using to peek under the models’ hoods and discern exactly what’s occurring inside.
AGI firms ‘cannot afford to admit they made a mistake’ — Stuart Russell
Recent comments from world-renowned British computer scientist Stuart Russell indicate that the sector of AI developers and organizations dedicated to scaling large language models to “human-level” intelligence may have reached a point of no return, financially speaking.