Slop is text you haven’t read,
not text you haven’t written

People complain about LLM-generated text and call it slop. A lot of it is. But slop is text you share without reading, not (always) text you used an LLM to write.

By Gareth Dwyer · April 2026

AI models are good at writing. I say that as someone who still does all of my own writing the old-fashioned way, typing characters out one-by-one on a keyboard. But I read a lot of AI-written text, and I often find it interesting or educational. It’s clear, well-structured, and sometimes persuasive. It’s valuable.

Sure, I hate the LLM tropes too. It’s not just the obvious patterns — it’s the lack of substance. No variation. No subtlety. No surprises. These LLM tics annoy the hell out of me! But if you edit those out, or ignore them, you can’t really argue that the writing produced by these models is objectively ‘bad’ any more.

And yet, our feeds are undeniably filled with slop. You can't really go near the online places where we used to hang out and share ideas. Twitter, LinkedIn, Facebook, Hacker News: they're all filled with the same obviously LLM-generated walls of text. Sometimes this stuff appears at first glance to be interesting, but then turns out not to be, leaving a bitter taste in your mouth because you were conned into wasting your valuable time reading it.

So how do we reconcile the fact that I hate slop with the fact that most of what I read these days is probably generated by LLMs?

People focus too much on how something was created and not enough on whether or not it’s valuable

My favourite video on the internet is a guy (Larry McEnerney) talking about writing for over an hour. You should watch the whole thing (and that’s coming from someone who hates videos and never recommends them). If you don’t have an hour to spare, one of the most important points he drives home is that good writing is valuable writing.

Larry’s advice comes from long before slop was a thing, but it’s still very relevant to defining slop. At school, we’re taught that writing should be clear, should be organized and should be persuasive. Larry says:

People will still read valuable writing that is not clear and not organized and not persuasive. But if your writing has all of that and it’s useless then overall it’s still useless.

We had slop before LLMs. Slop isn't AI-generated writing; it's AI-generated writing that has no value.

What is valuable writing?

Valuable writing is often writing that teaches us something or tells us we're wrong about something. Sometimes it makes us feel something; sometimes it just nudges the discourse within a specific niche. Valuable writing is writing that makes you go "I'm glad I spent the time reading this", or "I think others should spend time reading this".

Before LLMs, there was a strong correlation between writing that was "clear, organized and persuasive" and valuable writing, so many of us got used to using those qualities as a shortcut. It's easy to skim something, and if it seems clear and organized, we can assume someone probably put effort into writing it, and that it's therefore worth our effort to read.

But that correlation no longer holds. Now nearly all writing is clear and organized, and valuable writing is as hard as ever to come by.

But that doesn’t mean you should get your pitchforks out and never consume any LLM text or shun everyone who uses LLMs to create text. It just means we need to find new shortcuts to identify valuable writing.

Can we rely on others to identify valuable writing?

One good shortcut to figure out whether reading an article is worth your time is if someone else has already read it and tells you that it's good. If you have some reason to trust that person, it's an even stronger signal. And it doesn't matter whether that article was generated by an LLM or not: as long as they read it and didn't regret it, there's a chance that if you read it, you won't regret it either.

For example, here's a short piece of 'slop' that I find interesting. It's fully generated by ChatGPT, so many people would call it slop. I found it interesting, so I would prefer not to call it slop. If you didn't click through, it's ChatGPT answering my question about what will be thought of as 'obvious' in 50 years' time, with 20/20 hindsight.

[Screenshot: ChatGPT answering what will be obvious in 50 years with hindsight]

I expected some of the output when I asked the question (animal rights, climate change), but some of it I didn’t. I find it interesting that the model puts animal rights and “AI rights” into the same (first) paragraph.

I also didn't expect it to predict global governance, and the human-enhancement argument is one I've come across before but had mostly forgotten about, so reading it made me think about it again. I found that valuable.

Maybe you didn’t read the link or even the screenshot above because you don’t know me and don’t trust me, so your P(slop) is still high and you filter it out. Maybe you’re not convinced and you still think all AI writing is slop. Maybe you did read it out of curiosity but decided it was slop anyway. Maybe you read it and thought about it or shared it with someone else.

Valuable writing can only be defined in relation to an audience

Valuable writing is relative — writing that might be valuable to me (novel ideas I haven’t thought of before) might be useless to you (something you know well and are bored of). You can’t define a piece of writing as valuable or not valuable without relation to a specific audience.

A paper on particle physics is probably not valuable to your four-year-old (or, let’s be honest, to yourself). Your sibling’s wedding vows are probably not valuable to anyone who doesn’t know them. A book you read five years ago and loved might seem boring now that all of those ideas are fully integrated into your current world view.

So, keep fighting slop. It's awful, and we need to find a way to tame it or remove it from our lives. But think about what makes slop slop: I don't think we can fight the fact that most text will soon be at least partially LLM-generated, and I don't think all LLM-generated text is slop.