On slop

Slop. Yuck. A horrible word. Flop flip plop plap slip slap slop.

Muddy, slimy, wet pigs, lined up at the trough, waiting for the morning feed. Oliver Twist, asking for just a little more gruel.

But what is slop?

Merriam-Webster named “slop” its 2025 Word of the Year, sterilizing it into: “digital content of low quality that is produced usually in quantity by means of artificial intelligence.” Low quality, high quantity, and produced by AI. Lots of nothing, authored by no one.

Definitions fail us by starting and ending at denotation. They don’t evoke mental imagery. What does “slop” look and feel like?

In 2023 and 2024, we met Shrimp Jesus. Fake images of little girls holding puppies during floods flourished on Facebook feeds. Stories from nonexistent war veterans, sharing the horrors of their confabulated memories.

But the generative AI models are more powerful now. Which means the slop of 2026 is much more pernicious than in previous years. It’s sneakier, more subtle. It sounds smarter, more reasonable. 2026 slop isn’t Shrimp Jesus. 2026 slop isn’t just thoughtfluencers writing paragraphs on LinkedIn (though there’s still plenty of that).

2026 slop is created by people claiming they can now do things they don’t know how to do—people writing about things they don’t understand, empowered by unconscious machines that equate text ingestion and generation with “thinking”.

2026 slop is the false confidence of LLMs, infiltrating our human consciousness. It is the emergence of the half-baked cyborg; it’s the transhumanist’s shadow self.

2026 slop is created by the person who believes they are technomancer incarnate, fueled by our electricity grid, caffeine, ego, and real ideas people once wrote and understood—now resurrected as unwilling zombies, as ghosts with newly unfinished business—propped up by MBAs and money that is on fire.

2026 slop is the normalization and proliferation of intellectual laziness.

The only way to combat this new era of slop is to use our actual human brains, powered by real neural networks. It’s our human responsibility to resist the temptation of creating and sharing slop. Just because something sounds good doesn’t mean it is good.

LLMs are powerful, and we would be foolish to ignore their undeniable advancements. But we cannot give up on critical thinking just because these tools make it seem like they’re doing the thinking for us.

If an LLM generates something for you, I dare you to read it—all of it. Can you make sense of it? If not, it’s slop. It’s your moral obligation to try again.

We must use our brains. It is a privilege to have one.
