AI isn’t coming for your job, mediocrity is
LLMs are too stupid to be useful for professional work, but that won’t stop content creators from shooting themselves in the foot with them.
I've spent hundreds (maybe thousands?) of hours investigating LLMs for utility. And I have yet to find a way to use them in my professional work.
That’s not to say they’re useless. I use LLMs to help me learn guitar and to aid in running my Football Manager 24 team. In both instances, it takes a lot of work on my end to get what I want, and the results aren’t always useful, but I don’t play music or football for a living.
As a professional writer and researcher, however, LLMs aren’t good enough for work. I can’t use them for research because literally anything an LLM can tell me is already common knowledge.
Any “research” the so-called research models do (such as when you prompt GPT-4o’s “deep research” function and it takes like 10 minutes to produce a report) has to be so rigorously double-checked that it defeats the purpose of using an LLM in the first place.
And, when it comes to generating written copy, LLMs can’t clear the lowest bar for journalistic rigor and standards. They can't conduct interviews, verify leads or sources, or come up with new angles. They also hallucinate, make shit up, and get facts wrong.
Still, if these were the only problems with using LLMs I'd use them at work every day. As a professional editor, I’m comfortable working around errors and mistakes.
But it's not the mistakes that prevent me from using LLMs for work. It's the stupidity. Most of the time the outputs generated by LLMs aren’t incorrect; they’re just useless.
LLMs are kindergarten-level writers with PhD-level vocabularies
When I write an article laying out the argument that classical compute may not be sufficient to develop human-level artificial intelligence, for example, I don’t need a trillion-dollar chatbot to summarize my article by saying it “raises extremely interesting questions about the complex nature of artificial intelligence development and promisingly highlights the incredible difficulties surrounding its nascent emergence.” That’s called “fluff,” and all it does is impress people who don’t understand what good writing is.
LLMs are only wrong sometimes, but they’re stupid all the time. The articles they generate are filled with cliché, hyperbole, and nonsense couched in nebulous facts. At best, you can use them to generate a Wikipedia-style rundown of an established principle.
At worst, people who either don’t understand or don’t care about the topic they cover use LLMs to appear more knowledgeable than they are. And that means a significant amount of content is being generated by people without expertise using machines that are demonstrably stupid.
The problem is that most people approach content creation as a means to an end, not the actual service/product itself. "We need more content" instead of "let's do the best work we can" is always a recipe for mediocrity.
That's why, dear readers, every time you scan the news cycle you see what seems like three hundred articles with almost the exact same headline. It’s easier to imitate and hope you go viral on someone else’s coattails than it is to innovate and let your work serve its purported purpose.
Mediocrity by design
Generating high-quality content is hard. It requires a dual expertise in both the subject and the format. Unfortunately, we live in a world where it’s more “cost-effective” to give a talented writer a chatbot to help with comprehension or to give a subject matter expert one to help with copy.
In both instances, you’re injecting mediocrity into your content by design.
AI models can't follow the present zeitgeist or understand why every research paper isn't gospel. They can’t write compelling copy because they can’t discern fluff and nonsense from important information and facts.
The question of whether AI can take your job is a simple one: can your job be done by copying someone else’s work and modifying it just enough to avoid claims of theft? If so, your job can be done by a chatbot.
Businesses that switch from humans to AI models for content creation might save some money today. But, wherever accuracy and usefulness matter even a little bit, those decisions will almost certainly have negative consequences down the line.
You can’t fire a chatbot for generating bullshit. It won’t represent you in court or take the blame for your decisions. Ultimately, there’s nothing “human-level” about LLMs or the content they output. They just generate mediocre content faster than a good writer can produce strong content.
Sadly, my experience in journalism tells me that’s exactly what many editors/managers are looking for.
Read more: Dire wolves are back from extinction? Nonsense and poppycock
Art by Nicole Greene