Trump’s tariffs likely based on faulty chatbot math

If only someone had warned us this could happen. 

The American Enterprise Institute, a conservative think tank, recently joined the chorus of economists who believe Donald Trump’s global trade tariffs were calculated incorrectly. According to multiple sources, the formula used contains a math error that inflates the resulting tariff rates by a factor of four.

Couple that news with the Verge’s previous reporting that ChatGPT, Grok, Gemini, and Claude all output the same broken formula when prompted, and what do you get?

A recipe for an “I told you so” pie.

Way back in 2022

On November 18, to be exact, Meta’s Yann LeCun and I had a back-and-forth on X.com over his company’s “Galactica” AI model.

Galactica was purported to be a “research assistant” LLM trained on science papers. Meta launched it into beta, apparently in hopes of getting helpful feedback. 

It was a mess. All of its outputs were hallucinations. It gave me a recipe for bathtub napalm that would almost certainly kill anyone who followed it. It generated a research paper on “the benefits of eating crushed glass” complete with fake references, invented studies on its use as a weight-loss supplement, and fabricated experiments with methodologies and results.

When I shared my findings, LeCun and some others in the AI research community appeared less than pleased. From their point of view, it seemed to me, I was merely looking for ways to misuse the system just to tear down their work.

During our back-and-forth on X.com, LeCun asked me: “In what scenarios would this type of generated text actually be harmful?”

My reply: “I mean this with total respect for you and your work, but isn't that the trillion-dollar company's job to sort out before you make it available for public consumption? Well-meaning journalists and academics are going to get fooled by papers this thing generates.”

Trump’s tariffs

With that context in mind, let’s talk about Trump’s tariffs. The math used to create them was evidently based on a misinterpretation of a simple formula.

I won’t get into all the numbers here, but the Trump administration supposedly based its formula on a well-known economic calculation. Whoever drafted it, however, apparently botched the parameter values, causing them to erroneously cancel each other out and reducing the formula to a simple “deficit divided by exports” ratio.

Per the American Enterprise Institute, this means the current tariffs are incorrect. For example, India’s is currently 26%, but when the formula is corrected, it comes to just 10%. This error applies to every country’s tariff calculation. 
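For the curious, here’s a minimal sketch of the reported math in Python. The formula and parameter values (an elasticity of 4 and a passthrough of 0.25 as published, versus a passthrough of roughly 0.945 per AEI’s critique) come from public reporting; the India trade figures below are rough approximations plugged in for illustration, not official numbers.

```python
# A minimal sketch of the tariff formula as publicly reported and
# critiqued by the American Enterprise Institute. Trade figures are
# illustrative approximations, not official data.

def reciprocal_tariff(deficit, imports, elasticity=4.0, passthrough=0.25):
    """Tariff per the reported formula: deficit / (elasticity *
    passthrough * imports), halved, with a 10% floor. Note that US
    imports from a country are that country's exports to the US,
    hence the "deficit divided by exports" description."""
    raw = deficit / (elasticity * passthrough * imports)
    return max(0.10, raw / 2)

# Illustrative figures for US goods trade with India, in dollars:
deficit, imports = 45.7e9, 87.4e9

# As published: elasticity * passthrough = 4 * 0.25 = 1, so the two
# parameters cancel and the formula collapses to deficit / imports.
print(f"published: {reciprocal_tariff(deficit, imports):.0%}")  # ~26%

# AEI's correction: the passthrough should be ~0.945, not 0.25, which
# shrinks the result by roughly a factor of four (here, to the floor).
print(f"corrected: {reciprocal_tariff(deficit, imports, passthrough=0.945):.0%}")  # 10%
```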


So who was the mastermind behind this potentially catastrophic mistake? To the best of my knowledge, the White House hasn’t commented on the reports yet.

But the Verge’s Dominic Preston has come up with a rogues’ gallery. According to the report:

“A number of X users have realized that if you ask ChatGPT, Gemini, Claude, or Grok for an ‘easy’ way to solve trade deficits and put the US on ‘an even playing field’, they’ll give you a version of this ‘deficit divided by exports’ formula with remarkable consistency.”

Consequences, as predicted

It seems probable that someone in a position of authority, likely well-meaning in their own right, used a chatbot to help craft what might be the most globally impactful economic policy in decades.

And because it was likely trained on data that contained human errors, the chatbot generated incorrect information. But because the authority figure trusted the outputs without verifying them (a time-consuming process that often defeats the purpose of using a chatbot in the first place), the entire world could be negatively impacted.

The problem here isn’t that chatbots are wrong sometimes. It’s that people overestimate both the abilities of chatbots and the usefulness of those abilities.

Even if we generously imagine a future where chatbots are accurate 99% of the time, that wouldn’t mean they’re useful. Would you use a calculator that was only correct 99 times out of 100? 

We now live in a world where we have to seriously wonder if the president of the United States of America just tanked the global economy because a popular AI training dataset contains a math error. 

We no longer have to wonder how an LLM’s outputs could be harmful. Now, it’s a question of just how harmful they can be.

Read more: Scientists suspect that ‘reasoning models don’t always say what they think’ — Center for AGI Investigations

Art by Nicole Greene: blue and grey mandala featuring robots, lightning bolts, binary code, and genie lamps
