a theory of slop
‘Slop’ has become the go-to word to describe certain kinds of AI content. The term is so prevalent that Oxford University Press had slop as a contender for its word of the year in 2024, defining it as “art, writing, or other content generated using artificial intelligence, shared and distributed online in an indiscriminate or intrusive way, characterised as being of low quality, inauthentic, or inaccurate”. I love the term and want to explore why it feels so perfect for the current plague of AI shite.
One definition of slop is unappetising liquid food, particularly that fed to livestock. It’s easy to take this definition and run to the conclusion that the people and platforms that generate and push AI slop are treating consumers like greedy little piggies, shovelling any old stuff down our necks because we are so undiscerning. While that may be true in some cases, I think it’s pretty clear from the large pushback against slop content that people are not the undiscerning masses the slop-mongers think we are.
Instead I think this definition of slop is interesting because of what it says about the way the content is being made. Food slop is thrown together from random and uncomplementary ingredients; often you can see the individual ingredients that make up its unpleasant whole, just as you can frequently see the disparate prompts fed into an AI image generator in what it spits out, e.g. a Jesus made of shrimp rising out of the ocean.
Slop is unappetising but it fulfils a nutritional function; it will stop you being hungry, but it is not satisfying. Slop content might fill a hole in an ad where an image should go, or replace the work of human illustrators. But, like unsatisfying or disappointing food, it only draws attention to what you wish had been there instead. Unlike an AI chatbot, slop isn’t interactive; it functions mostly to create the appearance of human-made content, to benefit from advertising revenue or to avoid paying people.
The typically uniform texture of slop reflects the uniform texture of AI-generated images: both their disconcerting smoothness and the undiscerning gaze of the AI. Because of the way they are trained, AIs value all art equally; nothing is picked out as particularly special, interesting or valuable, so what they generate is a true amalgamation. AIs ‘learn’ without a point of view: they are data (or ‘ingredient’) rich but creatively destitute.
Slop contributes to platform decay, colloquially known as enshittification, where we see online products and services decline in quality over time. The way that AI use has skyrocketed means that we have kicked this decay into overdrive, like minced beef left out in the summer sun. But slop also corrodes its own maker. Research has found that training large language models (LLMs) on slop content causes model collapse – a consistent decrease in the lexical, syntactic, and semantic diversity of the model’s outputs. Since there is so much slop online, it’s inevitable that LLMs are being trained on it as they scrape content from across the internet. Slop begets slop: a perpetual slop machine.
A second meaning of slop that I think is key is ‘to slop’: to carelessly spill liquid. Careless is the key word here; the widespread use of AI denotes a carelessness towards the climate, towards working artists and writers, and towards the experience of the online world. But I also think this sense of carelessly slopping fluid is particularly relevant to the sort of content AIs produce. The content often seems to be in a state of flux: search answers might change hourly, or an AI-generated video might be unable to keep things like facial features stable, appearances morphing from scene to scene.
This carelessness and fluidity means that slop is bleeding into so many aspects of the internet, trickling into places you wouldn’t expect to find it. There’s often a sense of shame, or a desire to ridicule, when people fall for slop. Clowning on people for creating and using slop I can understand, but slop is becoming increasingly difficult to avoid as it creeps into more and more arenas, and the longer AIs train, the better (in theory) they will become at producing it. When it comes to images there’s often a sentiment of ‘HOW could you not tell that it’s AI???’, which I think betrays a sense of aesthetic superiority. Yes, in many cases it is very easy to tell, and people should perhaps look a little closer at what they are sharing. But if you don’t expect to see AI content on your friend’s Instagram or on a government website, why would you be vigilant? Mocking people for falling for AI content, like mocking those who fell for spam emails, will only get us so far.
In some ways slop feels like the spiritual successor to ‘spam’: the irrelevant or unsolicited messages sent, typically to a large number of users, for the purposes of advertising, phishing or spreading malware; or the verb ‘to spam’, as in to send the same content over and over again. The proliferation of slop feels a lot like swimming in a sea of spam, sifting through irrelevant images and search results to find what you’re actually looking for, our feeds spammed with unwanted and inescapable AI content. However, unlike spam, which has historically been aggressively policed by platforms like Google and Facebook, slop is allowed to roam free.
I think this is a result of the changing ecosystem of the internet. Before the internet was as centralised and monetised as it is now, platforms were governed by rules that were more like manners: don’t spam because it’s annoying, unfollow and report if you see something offensive, and so on. Now that platforms are ruled by algorithms and advertisers, this is no longer the case. Slop can be monetised, so it’s allowed – encouraged, even. Yes, you can report racist content, but who knows if it will be dealt with: rage bait means clicks, and clicks mean profit.
Unfortunately, I think the danger that AI slop poses to monetisation and ad revenue may be the only thing that halts its spread. A recent Forbes report showed that over half of consumers find AI adverts unappealing, and the director of UK-based digital marketing agency AccuraCast, Farhad Divecha, says he is now encountering cases where users mistakenly flag ads as AI-made slop when they are not. This could become a problem for platforms if consumers start to feel like everything they are being served is slop and ignore it all. Maybe we will have to work with the money on this one, since money is the only thing these platforms value.