Prelude
Let's get the unpleasantness out of the way immediately. There is a word currently circulating in the tech ecosystem. Slop. It is used to describe the torrent of mediocre, low-effort content generated by Large Language Models. People look at a generic LinkedIn post or a hallucinated article and sneer. They call it slop.
I have a different perspective.
It's not slop, it's shit. And it will become irrelevant.
The distinction matters. "Slop" implies a byproduct of a machine. "Shit" implies a failure of standards. And we have been here before.
London. 1894. The city is drowning. Not in data. In manure.
This is the Great Horse Manure Crisis. By 1900, London had over 11,000 hansom cabs and several thousand horse-drawn buses, each requiring 12 horses per day. That's roughly 50,000 horses moving people through the city daily. Each horse produced 15 to 35 pounds of manure per day, plus approximately two pints of urine. New York's 100,000 horses generated 2.5 million pounds of manure every single day. The streets were caked in it. Flies bred in the rotting heaps, spreading typhoid fever. Dead horses were left to putrefy because they were easier to dismember once decomposed.
The Times predicted that within 50 years, every street in London would be buried under nine feet of manure. In 1898, the first international urban planning conference convened in New York to address the crisis. It was scheduled to run for ten days. The delegates abandoned it after three. They could see no solution.
Then the automobile arrived. By 1912, the horse was obsolete. The problem was not solved by shovelling faster. It was rendered irrelevant by a paradigm shift.
This pattern repeats throughout history. In the 1850s, American whaling ships dominated the world's oceans, over 700 vessels hunting sperm whales for the oil that lit the lamps of civilisation. By the time kerosene emerged from the first commercial oil well in 1859, the industry was already straining under depleted whale populations and rising costs. Within a decade, kerosene rendered whale oil economically irrelevant. The whalers who had built their lives around the hunt watched their entire industry become obsolete. Not because anyone banned whaling. Because something better arrived.
We are currently standing in the digital equivalent of 1894 London. We are looking at the piles of AI-generated text clogging up our search results and social feeds. We are holding our noses. But if you think the solution is to ban the horse (or the LLM), you are missing the automobile driving right past you.
The question isn't "Did a robot write this?"
The question is "Is it good?"
The Legitimate Grievance
The current discourse around Generative AI in content creation is dominated by two camps shouting past each other. But before we dismiss the critics, we need to acknowledge something important: some of them have a point.
Artists, writers, and creators have watched their work scraped from the internet and fed into training datasets without permission, compensation, or credit. This is not paranoia. It is documented fact. Stable Diffusion was trained on LAION-5B, a dataset that included copyrighted artwork, personal photographs, and medical images. Large language models have been trained on books, articles, and code repositories without the consent of their creators.
The anger is legitimate. If you spent years developing a distinctive artistic style, only to see an AI generate "in the style of [your name]" for anyone with a keyboard, your frustration is not irrational. The ethical dilemmas regarding AI training are real and unresolved.
But here is where I part company with the purists.
The genie will not go back in the bottle.
We can debate the ethics of how we got here. We can advocate for better licensing, compensation frameworks, and consent mechanisms. We should. But the technology exists. The models are trained. Demanding that we "uninvent" generative AI is like demanding that we uninvent the printing press because it put scribes out of work.
I see engineers and writers puffing out their chests, declaring that they will "never" use AI. They wear their inefficiency like a badge of honour. They view the struggle of the blank page as a religious rite.
(I have spent enough time debugging "human-written" code to know that human origin is no guarantee of quality.)
The critical question is not whether AI training was ethical. It is what we do now. And the answer is not to pretend the technology doesn't exist. The answer is to use it responsibly, advocate for fairer systems, and focus on what actually matters: the quality of the output.
This refusal is the "AI Aversion" phenomenon. It is a psychological barrier. It is not based on the quality of the output. It is based on knowledge of the source.
And it is cracking.
The Cracks
Here is where the argument falls apart.
The average human output is mediocre.
I say this as someone who hires engineers. I say this as someone who reads documentation. Most human-written content is functional at best and incoherent at worst. We romanticise human creativity, but we conveniently forget the mountains of human-generated dross that fill the internet.
The orthodoxy claims that users hate AI content. The data suggests otherwise.
Research into audience perception of AI content reveals a fascinating contradiction. Users claim they want human content. But when they are presented with high-quality AI-assisted content without being told the origin, they engage with it.
In fact, studies have shown that Generative AI tools can achieve similar levels of engagement to human-generated content. The machine is capable of producing work that resonates.
So if the user enjoys the content, learns from the content, and engages with the content... does the "soul" matter?
If I read a documentation page that perfectly explains how to implement a complex graph database query, I do not care if the author cried while writing it. I do not care if they had a "human experience." I care that it works.
The cracks in the anti-AI argument are widening because the utility is undeniable.
We are seeing a shift. The impact of AI on content quality is not a downward spiral. It is a bifurcation. The lazy use AI to generate "shit" (the manure). The smart use AI to elevate their work (the automobile).
The "AI Aversion" is real, but it is fragile. It relies on the user knowing the content is AI-generated. It is a bias, not a quality assessment. Even factual AI content is perceived as inaccurate simply because it is labelled as AI.
This is not a sustainable position. You cannot hate a result simply because you dislike the method. That is ideology. Not engineering.
The Deeper Truth
Let's talk about how builders actually use this stuff.
I am a software engineer. I build systems. When I look at content creation, I do not see a magical process of divine inspiration. I see a pipeline.
- Ideation (Input)
- Drafting (Processing)
- Refining (Optimisation)
- Publishing (Deployment)
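The pipeline above can be sketched as a simple composition of stages. To be clear about assumptions: none of these function names come from a real framework; they are placeholders for whatever tooling (an LLM API, a linter, a CMS) actually fills each slot.

```python
# A hypothetical sketch of the content pipeline as composable stages.
# Every function here is illustrative, not a real library call.

def ideate(topic: str) -> str:
    """Input: turn a topic into a working brief."""
    return f"Brief: explain {topic} for working engineers"

def draft(brief: str) -> str:
    """Processing: an LLM (or a human) produces a first draft."""
    return f"[draft based on '{brief}']"

def refine(text: str) -> str:
    """Optimisation: the human verifies facts and injects nuance."""
    return text.replace("[draft", "[reviewed draft")

def publish(text: str) -> str:
    """Deployment: ship the reviewed artefact."""
    return f"PUBLISHED: {text}"

# Run a topic through every stage in order.
artifact = "graph database queries"
for stage in (ideate, draft, refine, publish):
    artifact = stage(artifact)
print(artifact)
```

The point of the shape is that `refine` sits between the machine's draft and publication; remove that stage and you are shipping manure.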
The anti-AI crowd thinks GenAI replaces the human in the entire pipeline. They imagine a world where we type "write me a blog post" and hit publish.
That is the "shit" tier. That is the manure.
The deeper truth is that AI is a force multiplier for the architect.
I use AI to write. I use it to code. But I do not let it drive.
I treat the LLM as a junior engineer. A very fast, very well-read, slightly hallucination-prone junior engineer. I give it a spec. It generates a draft.
Then the work begins.
I tear it apart. I refactor the arguments. I inject the nuance. I verify the facts. I impose my specific, earned experience onto the structure it provided.
This is the Hybrid Strategy. And it is the only way forward.
When I work this way, I am not "cheating." I am operating at a higher level of abstraction. I am no longer bogged down in syntax errors or writer's block. I am focusing on the logic. I am focusing on the message.
The ownership does not come from typing the characters. It comes from the vision.
If I architect a microservices system, and I use a library to handle the HTTP requests, did I not build the system? If I use Copilot to generate the boilerplate for a React component, is the application not mine?
Content is no different.
The creators who embrace this truth are finding something surprising. They are not losing their "voice." They are finding it. They are shedding the drudgery of the blank page and spending their energy on the high-value tasks. They stop overspending on manual labour and invest that effort in strategy.
The definition of "quality" is shifting. It is no longer "did a human write this?" It is contextual fit and depth of understanding.
A human writing generic fluff is worse than an AI writing a targeted solution.
The "god agent" myth in software, the idea that one AI will do everything, is collapsing. We are moving to specialized tools. The same applies to content. We are moving from "AI writes everything" to "AI augments the expert."
Implications
So, what happens when the manure piles up?
We are entering a period of saturation. There is no denying it. The cost of generating text has dropped to near zero. We will see a flood of content.
This is where the Luddites panic. They see the volume and assume the value of all content drops to zero.
They are wrong.
When supply becomes infinite, curation becomes the only asset that matters.
Trust becomes the currency.
If I can generate 100 articles an hour, nobody cares about the articles. They care about which one is right.
This means the role of the creator changes. You are no longer just a writer. You are a Verifier. You are a Tastemaker. You are a source of Truth.
The biggest concerns regarding Generative AI (ethics, plagiarism, bias, accuracy) become your competitive advantage. If you can filter the manure and find the gold, you win.
For businesses, this means the adoption of AI is not optional. Gartner predicts that by 2026, more than 80% of enterprises will have used generative AI APIs or deployed GenAI-enabled applications. The companies that use it to generate "slop" will fail. The companies that use it to empower their experts to move faster will dominate.
We will see new standards emerge. Just as the automobile required traffic laws and paved roads, the AI content era will require new verification protocols. We will likely see cryptographic signing of content to prove human oversight (not human origin, but oversight).
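One plausible shape for such an oversight protocol: after reviewing a piece, the human signs a digest of the final content. Here is a minimal sketch using HMAC from the Python standard library; a real scheme would use public-key signatures (e.g. Ed25519) so anyone can verify without the secret key, and the key name and payload fields below are assumptions, not an existing standard.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret; in practice this would be a private signing key.
REVIEWER_KEY = b"reviewer-secret-key"

def attest(content: str, reviewer: str) -> dict:
    """Produce an oversight attestation: 'a named human reviewed this content'.

    Note it signs a digest of the content. It asserts oversight, not authorship.
    """
    digest = hashlib.sha256(content.encode()).hexdigest()
    payload = json.dumps({"sha256": digest, "reviewer": reviewer}, sort_keys=True)
    tag = hmac.new(REVIEWER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": tag}

def verify(content: str, attestation: dict) -> bool:
    """Check the signature is valid and the content matches the attested digest."""
    expected = hmac.new(REVIEWER_KEY, attestation["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, attestation["signature"]):
        return False
    claimed = json.loads(attestation["payload"])["sha256"]
    return claimed == hashlib.sha256(content.encode()).hexdigest()
```

Edit one character of the content after review and verification fails, which is exactly the property we want: the signature vouches for the reviewed artefact, not for who typed it.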
The impact of AI on writing-quality perception will stabilise. We will stop asking "Is it AI?" and start asking "Is it accurate?"
This is the hard truth for the purists. The market solves for utility. If an AI agent can give me the answer I need in 3 seconds, and a human writer buries it in 2000 words of "soulful" narrative about their grandmother's recipe, the AI wins.
Every time.
Conclusion
I have been building software for a long time. I have seen frameworks come and go. I have seen paradigms shift.
The pattern is always the same.
First, denial. Then, anger. Then, adoption.
The people screaming about the "soullessness" of AI are standing in the middle of 1894 London, shouting at the horses. They are knee-deep in the problem, refusing to look at the solution.
You can be a Luddite. You can refuse to touch the tools. You can pride yourself on your manual labour.
Or you can recognise that the world has changed.
The manure problem was solved. Not by going back. But by moving forward.
The "slop" will wash away. The "shit" will be ignored.
What remains will be the work of builders who learned to drive the car.
Now if you will excuse me, I have a backlog to clear. I'm going to let the machine handle the boilerplate. I have actual work to do.
References
- The Great Horse Manure Crisis of 1894 - Historic UK
- The Great Horse-Manure Crisis of 1894 - Foundation for Economic Education
- Harvesting Light: New England Whaling - Yale Energy History
- Ethical Considerations in AI-Generated Content Creation
- Stop Overspending: AI Content Generation Is Smarter
- Generate Article AI
- The Impact of AI on Content Quality
- The Automation of Creativity: AI in the Printing Industry
- Ethical Dilemmas of AI
- Generative AI Ethics: 8 Biggest Concerns
- Generative AI Ethics
- The Effects of Perceived AI Use on Content Perceptions
- What Audience Perception of AI Content Really Reveals
- Understanding the Impact of AI on Writing Quality Perception
- What's Your Perception of AI-Generated Content Quality?
- AI, Human, or a Blend: How the Educational Content