The AI ick

“We shouldn’t be using ChatGPT for this,” said my colleague, glancing at the draft I’d just sent him.

“I agree. That’s my writing.”

“Oh.” He paused and read a bit. “Well, the em dashes and the structured paragraphs make this seem like AI slop, even if the content is there.”

“Thanks for the feedback,” I said. Then I flung my laptop across the room and leaped to my feet. “Those are my em dashes,” I growled, pounding the table. “And I always write in structured paragraphs. I’m an English major.”

*[Editor’s note: Em dashes are also house style at Stack.]*

Okay, no laptops were thrown and no tables were pounded, but I was a bit affronted. It was the first and only time someone had vocalized the assumption that my work was AI-generated, and it made me wonder if anyone (incorrectly) perceives the content I write as AI slop.

Why did the idea that someone might label my work as AI-generated make me feel both icky and irritated? Why was I so eager to deny using a tool that hundreds of millions of people are using? Why did I slip in that defensive “incorrectly” a few sentences back?

Initially, I wanted to write about the supposed telltale signs of AI-generated text and whether those signs actually reveal anything. In other words, I wanted to defend my em dashes—and I will.

But as I thought more about the subject, a straightforward blog post became an ontological exploration of how we perceive, understand, and experience AI-generated content.

Because—as nearly anyone can tell you even if they can’t quite explain why—there’s a significant contrast between how we experience AI art and how we experience art (visual, musical, or literary) created by humans.

What is the nature of that contrast, and what does it tell us?

### First, a Disclaimer

I don’t consider the articles and other content I write as part of my job to be art, necessarily. But that content is my body of work—the product of my effort and experience.

Like every other marketing writer I’ve ever worked with, I take pride in my writing and its attribution. The line between work product and art is not always a bright one.

For the purposes of this article, at least, please forgive some conflation between the two.

### Do These Em Dashes Make Me Look Like a Clanker?

The suggestion that em dashes are a hallmark of AI writing doesn’t come out of nowhere.

Wikipedia’s extensive field guide to patterns associated with AI-generated content covers:

- **Style:** Overuse of em dashes, section headings zhuzhed up by emoji
- **Language and grammar:** Overdependence on the rule of three, weasel words
- **Broader content issues:** Superficial analysis, overly promotional language, undue emphasis on a subject’s symbolic importance or media coverage

The field guide leads with a crucial disclaimer:

> “Not all text featuring these indicators is AI-generated, as the large language models that power AI chatbots are trained on human writing, including the writing of Wikipedia editors.”

Herein lies the irony: many of the tells associated with AI writing stem from the professional and academic writing on which those LLMs have been trained.

We taught them to use em dashes; to use a detached, neutral tone; to use didactic little disclaimers like, “It’s important to know.”

AI is like a college student who’s picked up the vernacular of academic writing but whose output, upon closer inspection, reveals how little they actually understand.

### Why Are Writers So Defensive About AI Attribution?

Most writers I know would be offended to have their work identified as AI-generated. Why?

In part, it’s simple: We don’t want our work miscategorized as AI writing because AI writing sucks.

But there’s more to it than that.

Quite apart from the linguistic, stylistic, and substantive problems on full display in any chunk of AI-generated text, AI writing feels hollow.

“Just product, no struggle” was my colleague Ryan’s succinct take.

It’s akin to how the data-driven decisions LLMs make can be logically sound and reasonable, but never innovative.

AI output is entirely data-driven: statistical representations of a model’s training data.

Hence the notion of LLMs as stochastic parrots—capable of mimicking human speech without having the faintest notion of its meaning.
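To put “just stats” in concrete terms, here’s a minimal, deliberately toy sketch of the stochastic-parrot idea in Python. It’s an illustration, not a description of how any real LLM works; actual models learn vastly richer token distributions, but the underlying move, sampling from statistics gleaned from human text without any model of meaning, is the same.

```python
import random
from collections import defaultdict

# A toy "stochastic parrot": a bigram model that regurgitates
# statistically plausible word sequences from its training text,
# with no notion of what any of it means.

training_text = (
    "it is important to know that em dashes are important "
    "and it is important to know your audience"
)

# Record which words follow which in the training text.
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def parrot(start: str, length: int = 10) -> str:
    """Generate text by repeatedly sampling a plausible next word."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(parrot("it"))
# Possible output: "it is important to know your audience"
# Fluent-sounding, statistically grounded, utterly without understanding.
```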

### Getting the Ick

Stochastic parrot vibes are undoubtedly part of why we feel disappointed, unsettled, or even disgusted or betrayed when we realize we’re looking at AI-generated content rather than something created by a human.

Friend request from an AI character? Yuck.

Tilly Norwood on TV? Hard pass.

A much-memed comparison between AI art and the fae of folk wisdom is revealing:

> “Recognizing an AI image is basically the same rules as recognizing the fae in old tales. Count the fingers, count the knuckles, count the teeth.”

The analogy makes plain our instinctive disquiet with AI output, along with our deeper distrust of the technology itself:

> “Be very, very careful,” goes one version, “because [AI] is stealing people’s faces and voices.”

For me personally, AI content is immediately off-putting in a way that’s hard to describe but impossible to deny.

The blank symmetrical faces of AI-generated models, untroubled by quirks of genetics or personality, inhabit an uncanny valley whose emotional and contextual emptiness is repulsive.

There’s no there there.

You might say AI content lacks a certain *je ne sais quoi*, in which the *quoi* is the ineffable human factor.

### Generational Perspectives on AI Content

My colleague Phoebe is Gen Z, which feels potentially relevant to me, an elder millennial, because research has shown that people from different generations tend to engage with AI in sometimes wildly different ways.

I asked her how she feels when she identifies a piece of content as AI-generated.

> “My main interaction with AI slop has (obviously) been on social media,” she said, “and it’s unfortunately reached the point where I see [AI content] and just skip. With more and more AI videos hitting my feed, my skin starts to crawl once I realize the images I’m seeing of a cute dog or bunnies on trampolines that at first gave me warm fuzzies are actually generated by a machine. The soul of it just leaves for me, and it feels unnatural.”

Phoebe also reports:

> “One of the memes on TikTok now is for people to post ‘Is this AI, I can’t tell’ under videos that are clearly not AI as a joke.”

It’s a level of irony and abstraction that would not have resonated just a couple of years ago.

I asked Ryan the same question: How do you feel when you recognize something you’re watching/reading/listening to as AI-generated?

> “It depends,” he said. “If it’s something that’s ephemera around the thing I’m looking at, like a blog header or background in a game, I’m mildly disapproving, but I get it. If it’s the thing I’m looking at itself, then I feel betrayed and a little disgusted. I want to ride someone else’s brainwaves when I’m reading/viewing art/watching movies.”

> “A lot of art is just product anyway,” Ryan acknowledged (thinking, I assume, of *Tron: Ares*), “so it’s soulless crap made by people, but AI outputs are by definition soulless. There’s no authorial intent, just stats.”

He also referenced a question that’s become a familiar refrain in conversations about AI content:

> “Why should I bother to read something you didn’t bother to write?”

### AI Detectors Don’t Work—but They Tell Us a Lot

As soon as we had AI text generators, we had AI text detectors.

These tools (powered, naturally, by AI) promise to determine how much of a given text is AI-generated.

Approximately one minute later, we had AI humanizers like UnAIMyText, which promise to make your AI-generated text sound like something written by an actual breathing person.

As you’d expect, many users of AI detectors are teachers trying to determine whether their students actually did the homework.

And many users of AI humanizers are students trying to get an AI-generated paper past those same teachers.

But many people, not least the students themselves, see a fundamental contradiction at play here.

Even as school policies communicate to students that AI writing tools are to be avoided or, failing that, used surreptitiously, kids and young adults absorb the message that they must use AI to be competitive in a daunting job market.

They’re aware of the ubiquity of AI tools and the paucity of entry-level roles; from their perspective, if they don’t work with AI, they’ll lose their future job to someone who does.

Tools built to detect or disguise AI reveal our mixed feelings, as a society, about the technology itself.

On one hand, AI tools promise that—with no investment, no skills, and only a little time—you too can create text, images, or immersive videos to pass your class, sell your product, or impress your boss. Of course people are going to use them.

On the other hand, detectors and humanizers underscore our persistent discomfort with the whole concept: We wouldn’t be building tools to reveal or conceal AI-generated content if we saw it as an uncomplicated positive.

All that said, we buried the lede a bit here: The fact is that AI detectors don’t work.

Various tools have determined that Jane Austen’s *Pride and Prejudice* (1813) was AI-generated; ditto the United States Constitution (1787).

And detectors aren’t just ineffective; they can do incalculable professional, academic, and reputational harm to people falsely accused of using AI.

Across numerous studies, AI detectors have demonstrated bias against non-native English speakers, Black students, and neurodiverse people, among other populations.

Universities including MIT and the University of San Diego have said it plainly:

> “AI detectors don’t work” and “AI detectors are problematic and not recommended as a sole indicator of academic misconduct.”

A guide for instructors at Northern Illinois University called AI detectors an ethical minefield because accusations based on false positives can wreck students’ academic careers, while an article in *Inside Higher Ed* explores why many professors are:

> “Worried that new tools for detecting AI-generated plagiarism may do more harm than good,” mainly because research has shown that AI detection tools are “neither accurate nor reliable.”

For all these reasons, teams that work on Stack Overflow’s PubPlat don’t use AI detectors.

### Images, Not Art

Let’s return to the question of how people experience AI-generated content, whether they read it, watch it, or hear it.

In short? We don’t like it.

Research published in *Scientific Reports* found that:

> “People devalue art labeled as AI-made across a variety of dimensions, even when they report it is indistinguishable from human-made art, and even when they believe it was produced collaboratively with a human.”

On a similar note, a study in *Computers and Human Behavior* found that:

> “Humans perceive the same artwork as less creative and awe-inspiring when it is labeled as AI-made (vs. human made).”

Interestingly, participants in that study overwhelmingly preferred art they thought was made by humans even if the content was actually AI-generated, said Guanzhong Du, one of the coauthors.

> “No matter which one is actually made by the human artist, people prefer the artwork that is labelled as human,” Du said.
> “They think it is more creative—and when they listen to music or look at paintings made by human artists, they think they are more awe-inspiring.”

> “It’s images,” reads the top comment in a Reddit discussion of the *Computers and Human Behavior* study. “It’s not art.”

(Since I am, after all, an English major, this reminds me of Capote’s famous dig at Kerouac: “That’s not writing. That’s typing.”)

### “How Did Somebody Make This?”

When we look at AI output, we know that no one dreamed and labored to create this work in order to share it with us.

And that, evidently, matters.

An especially insightful take on the unsatisfying nature of AI art comes from cartoonist Matthew Inman, creator of *The Oatmeal*. He writes:

> “When I saw the original *Jurassic Park* in theaters, I was blown away by the dinosaurs. They were CG, but I didn’t look at the Brachiosaurus and think, ‘Gross, it’s just a bunch of tracking dots attached to a boom crane.’”

Instead, he remembers thinking,

> “How did somebody [he emphasizes ‘body’] make this?”

The Brachiosaurus was:

> “An expression of human beings making human decisions. It was the product of discipline, talent, and imagination.”

> “Seeing AI art,” he writes, “I don’t feel that way at all.”

Recall the *Scientific Reports* study we mentioned above: People “devalue” AI-made art “even when they believe it was produced collaboratively with a human.”

We perceive a yawning gap between humans producing visual effects with CGI software and humans prompting Sora to generate video clips.

DC Comics president Jim Lee announced this year that the company would:

> “Not support AI-generated storytelling or artwork: not now, not ever.”

> “People have an instinctive reaction to what feels authentic,” he told the audience at New York Comic Con.

> “We recoil from what feels fake. That’s why human creativity matters. AI doesn’t dream. It doesn’t feel. It doesn’t make art. It aggregates it.”

The *Computers and Human Behavior* study suggests that our instinctive reaction to AI content goes beyond simple distaste.

Art made by AI poses what the study’s authors call an “ontological threat” to the belief that creativity is a uniquely human quality.

Maybe that’s why many creators respond scathingly to the encroachment of AI on their work: We see AI art as infringing on uniquely human territory.

### Another Thing AI Can’t Do

Okay, okay, you might be thinking. AI doesn’t create art. But can’t I use it in my marketing campaigns?

Only if you want to alienate your customers.

It turns out that people don’t like AI in marketing and advertising content any more than they like it elsewhere.

A recent report on AI-generated marketing content found that:

> “Consumer enthusiasm for AI-generated creator work has dropped from 60% in 2023 to 26% in 2025, as feeds overflow with what viewers deride as ‘AI slop’—uninspired, repetitive, and unlabeled content.”

It doesn’t help that so many AI ads are laughably janky and/or deeply weird.

A prospective customer who reacts like Phoebe when she recognizes AI content (“My skin starts to crawl. The soul of it just leaves, and it feels unnatural”) is not going to click through.

Advertising might not be art, but AI has yet to generate any ad content that sticks with people, that makes a cultural dent, that changes the way we think about and consume a product.

There’s no *1984*, no *Just Do It*, no *Got Milk?*, and no *Buying the World a Coke*.

AI is no Don Draper at Esalen—though his junior copywriters tasked with writing ten taglines before lunch would undoubtedly have appreciated it.

### A Silver Lining—If We Want It

Perhaps there’s an upside to the ubiquity of AI art: It might teach us to recognize the value of human artists.

The *Scientific Reports* paper found that:

> “Comparing images labeled as human-made to images labeled as AI-made increases perceptions of human creativity, an effect that can be leveraged to increase the value of human effort.”

Human creativity and human effort: We’re gonna miss it when it’s gone.

Whether we’ll miss it enough to limit AI’s ability to cannibalize and aggregate the work of human writers, filmmakers, visual artists, musicians, and other creators isn’t clear.

Nor is it clear that, in stark economic terms, the value of human artists will be enough to offset the lure of apparently cheap, virtually instantaneous content to feed the machine.