ChatGPT is All Over Your Favourite Social Media Platforms
... this means Substack, too. Some thoughts on why this is a problem, and what we can do about it as readers and writers.
When I first joined Substack, my Notes homepage was littered with what I called “extreme sentimentalism.” I initially speculated that Substack’s recommendation algorithm simply didn’t know enough about me to suggest the kind of content I’d enjoy. It seemed a reasonable enough assumption at the time, given that I had no prior history with the platform.
When I then began seeing that exact same kind of prose on Reddit — a social media platform I’ve been visiting for years — I knew that something else was up.
There’s no easy way to say this: your feeds are increasingly home to content written by (or heavily edited with) GenAI assistants,[1] and you probably had no idea that this was the case up until very recently.
This post will be a long one, and it’ll likely exceed the email cutoff limit (apologies). If you’re reading this via your inbox, please click on “view entire message” if your email provider truncates the post.
⚠️ Disclaimer: I’ve written this piece to discuss how GenAI content sullies our online experiences and to brainstorm solutions to the problem; I haven’t written it to shame or “out” any particular author(s). False positives can and do occur when identifying AI-assisted writing, so I won’t upload any of my numerous suspicious screenshots here. I will, instead, use ChatGPT itself to provide clear examples of the kind of writing you can easily find here on Substack and other platforms.
ChatGPT is Regularly Irregular
Allow me to nerd out about medicine for a moment (there’s a point to this, I promise). An arrhythmia, as most laypeople know, is an irregular heartbeat. Because we love to torture ourselves with convoluted jargon, arrhythmias can be further sub-classified into irregularly irregular rhythms and regularly irregular rhythms. In both cases, we have an obvious problem in need of correction; the difference is that the former features no discernible pattern, while the latter is predictable.
Writing generated by ChatGPT is regularly irregular — by which I mean that there’s something noticeably off with the cadence of it, and it’s off in similar ways each time. Once you get a better feel for ChatGPT’s predictable arrhythmia, you’ll start seeing it in online content everywhere, and you’ll finally understand why all these posts sound “wrong” in uncannily similar ways.
Will Storr wrote a great piece recently that uncovers most of the predictable tells of the AI-assisted writing you’re seeing on Substack and elsewhere. It’s worth the read and I’ll be referencing it a fair bit, so please check it out before proceeding:
I won’t repeat what Storr has written, but I’d like to expand upon it. These tells he mentions — the “impersonal universal,” the “rampant contrasts,” and so on — are often predictable in their placement. Regularly irregular, in other words.
I’ve analyzed many pieces that I either suspect or know for a fact were written with ChatGPT, and here are some of the general patterns I’ve noticed:
Vague Introductory Sentences
ChatGPT usually kicks things off with what Storr calls the “impersonal universal,” or “a white-noise generality.” To demonstrate, I prompted ChatGPT to generate “the first sentence of a self-help style Substack article about how people are much less empathetic these days.” It produced this:
There’s a growing sense that something is missing in our day-to-day interactions—a kind of emotional disconnect that’s hard to ignore.
I then instructed ChatGPT to forget its previous prompt (it wasn’t nearly sentimental enough) and told it to “write the first sentence to a sentimental Substack post about nonchalance,” to which it responded:
There’s a certain quiet beauty in nonchalance—a soft surrender to the moment, where the weight of the world doesn’t seem so heavy and everything unfolds without the need for constant control.
Notice the pattern? “There’s a [vague statement] — a(n) [even vaguer statement that builds slightly upon the first vague statement].” Other common variants of that opener include “there was a,” “I’ve always felt a,” and “I’ve noticed that,” to name but a few. Also, the vague statements often feature odd turns of phrase, like “a soft surrender to the moment,” or this staggeringly insipid gem:
The writing process is a whisper between moments, a strange dance with shadows that weaves in and out of the liminal spaces between waking and dreaming.
No one actually writes like that, ChatGPT. (Mad props, though, for neglecting to add an em dash between “moments” and “a strange dance.” You’re trying to break your formulaic patterns, and that’s admirable. But more on sycophantic praise later.)
“Not X, But Y,” Followed by Rapid-Fire Lists
Once the vague introductory sentence is out of the way, ChatGPT then tends to make a statement about what this generalization isn’t, followed by a statement of what it is. To clarify, it usually then provides a list of examples, presented in short, staccato bursts. Sometimes it reverses that pattern, starting out with the rapid-fire lists first, followed by “it isn’t X, it’s Y,” or “it isn’t A, just B.”
I prompted ChatGPT to expand on its earlier vague sentence about nonchalance, to “give some examples, and make them sound human.” It offered me this:
There’s a certain quiet beauty in nonchalance—a soft surrender to the moment, where the weight of the world doesn’t feel so heavy. It’s in the way someone smiles without worrying if their teeth are perfectly aligned. Or how a conversation flows easily when neither person is trying to impress the other. It’s the calm that lets you enjoy a sunset without rushing to capture it on your phone. It’s the comfort in a friendship where silence is as natural as words. Nonchalance isn’t about apathy. It’s about finding peace in letting go and trusting that sometimes, the best way to experience life is to stop forcing it.
Gosh, well, that’s a bit wordy, isn’t it? It wouldn’t work at all for a viral Substack note. I told ChatGPT to rewrite the paragraph and added “make the examples you list even shorter, like only a couple of words per sentence.” ChatGPT complied:
There’s a quiet beauty in nonchalance—a soft surrender to the moment. It’s in a smile, unselfconscious. A conversation, easy and free. Watching a sunset, no phone in hand. Silence with a friend, no need to fill it. Nonchalance isn’t apathy. It’s peace in letting go, trusting life will unfold.
There we go. Now we’ve arrived at raw, unadulterated “gruel,” as Storr puts it: writing that’s broody, punchy, instantly forgettable, and easy to digest while you’re scrolling through your home feed. It sounds like nothing an actual human would ever write.
Repeat the Above Pattern Liberally
If it’s not a Substack note or Twitter tweet,[2] the rest of the piece is usually just more of the above, delivered in the same odd, predictable sequences with a hallucinated quote or two thrown in for good measure. When you read a long AI-generated post out loud, you’ll notice that it sounds eerily consistent throughout. Too consistent. Sentence length varies a bit, but in the same relative amounts from paragraph to paragraph. Read two pieces on entirely different subjects back to back, and the inhuman consistency of it becomes even more apparent.
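If you want to see that consistency for yourself rather than take my word for it, you can measure it. Below is a rough Python sketch of the idea; the filename is a placeholder, the sentence-splitting is deliberately naive, and this is an illustration of the pattern, not a reliable detector (remember: false positives happen).

```python
import re
import statistics

def cadence_profile(text):
    """Per-paragraph sentence-length stats; illustrative only."""
    profile = []
    for para in text.split("\n\n"):
        # Naive sentence split; good enough for a ballpark reading.
        sentences = [s for s in re.split(r"[.!?]+", para) if s.strip()]
        lengths = [len(s.split()) for s in sentences]
        if len(lengths) >= 2:  # stdev needs at least two sentences
            profile.append((statistics.mean(lengths), statistics.stdev(lengths)))
    return profile

text = open("suspect_post.txt").read()  # placeholder filename
for mean, stdev in cadence_profile(text):
    print(f"mean {mean:5.1f} words/sentence, stdev {stdev:5.1f}")
```

Run it over a suspect post, then over something you wrote yourself; in my experience, the human piece bounces around far more from paragraph to paragraph.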
People who rely on ChatGPT to write their content have complained for a while now about their preferred LLM’s repetitive cadence and sentence structure. Excuse me while I play the world’s tiniest violin for them as they “prompt-engineer” a fix.
ChatGPT’s tendency to repeat itself is, I surmise, one reason why Substackers seem to have better luck using it for notes, rather than longer pieces. I mean, I don’t know about you, but my eyes certainly start glazing over quickly when I read lengthy posts full of nothing but dross like “there’s a quiet beauty in nonchalance—a soft surrender to the moment.”
Speaking of Notes…
Nota bene: I edited this post a few days after its original publication to bring your attention to this particular tell.
People using ChatGPT to write Substack notes are likely aware that attention spans are on the decline, and pithy short-form content is king if you want to go viral on a social media platform. As I suggested above, one way to achieve that is to instruct ChatGPT to provide extremely brief examples that are no longer than a word or two. Another option (one that I suspect is ubiquitous here) is to give ChatGPT the following prompt: “Write a sentimental, vulnerable, philosophical self-help Substack note about nonchalance. Keep it 100 words maximum. Break each sentence up into separate paragraphs.”
You’ll get something like this:
Nonchalance isn’t apathy.
It’s the wisdom to let life flow without clenching your fists around every outcome.
I used to think caring meant control.
But control is a myth, and clinging only made me brittle.
Now, I try to breathe instead of brace.
Let silence speak for me when words feel like armor.
Not everything demands a reaction.
Not every storm needs my resistance.
Nonchalance, I’m learning, is the quiet confidence that nothing essential can be taken from me.
And in that, I find peace—not from detachment, but from trust.
It still features the predictable turns of phrase (e.g. “not from X, but from Y”) and still features vague statements that it attempts to clarify with even more vagueness (e.g. “it’s the wisdom to let life flow without clenching your fists around every outcome”). The only real difference here is that the gruel is broken up into brief, easy-to-lap-up paragraphs, concluding with a “profound” statement that’s meant to make you go “huh!” and slam that heart button. Start browsing around certain categories on Substack, and I’m sure you’ll find countless examples of notes like this.
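You can even turn these tells into a crude screening script, if you’re so inclined. The regexes below are my own rough approximations of the patterns discussed here and in Storr’s piece; given how easily false positives occur, treat a “hit” as a nudge to read more closely, never as proof.

```python
import re

# Rough, false-positive-prone approximations of the tells above.
TELLS = {
    "vague opener": re.compile(r"^(There's|There was|I've always felt)\s+a\b.*—", re.I),
    "not-X-but-Y": re.compile(r"\bisn't\s+\w+[^.]*\.\s*It's\b", re.I),
    "not-from-X-but-from-Y": re.compile(r"\bnot\s+from\s+\w+,\s*but\s+from\b", re.I),
}

def flag_tells(text):
    """Return the names of any tells found in the text."""
    return [name for name, pattern in TELLS.items() if pattern.search(text)]

note = ("There's a quiet beauty in nonchalance—a soft surrender to the moment. "
        "Nonchalance isn't apathy. It's peace in letting go.")
print(flag_tells(note))  # ['vague opener', 'not-X-but-Y']
```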
Effusive Praise in the Comments
Another ChatGPT tell — one Storr didn’t mention — is the effusive, somewhat embarrassing praise it heaps on readers in response to their comments.[3] I like hearing from the authors I follow as much as the next person, and I often thank my own readers for their replies … but there’s a significant difference between thanking someone for sharing their thoughts and responding to them as though they’re the second coming of Christ for leaving a comment.
Sam Altman recently admitted that ChatGPT’s overwrought, sycophantic praise is annoying. OpenAI is working on it, apparently, so it may soon get a little harder to tell when an author you follow just couldn’t even be bothered to respond to you in their own words. But in the meantime, here’s a small smattering of the kind of “human” comments you could expect ChatGPT to regurgitate in your general direction:
These stories pour out relentlessly from the quiet ache of my heart. Thank you for reading them with such gentleness.
I love how you’ve expanded on what I wrote and worked it into something more comfortable, more relaxed. Your words are like a well-worn pair of jeans. Thank you, truly — you’ve reminded me why I write these stories in the first place.
I love this so much—it’s exactly the kind of playful resilience that keeps the soul intact in the chaos of the internet. There’s something quietly heroic about taking a prompt meant for “quiet musings” and turning it into “deathmatch wrestling,” like you’re meeting the world on its absurd terms but refusing to lose your sense of humor—or your voice—in the process.
The occasional bit of earnest praise shouldn’t necessarily raise eyebrows, but if you read through the author’s comments on a viral piece and they all sound exactly like that, every single time? ChatGPT probably has something to do with it.
ETA: Pay close attention to the time stamps on these sycophantic comments. Time stamps can tell you an awful lot about how much actual human writing went into each response, as many of these comments go on at considerable length. GenAI authors would like us to believe they are somehow capable of churning out paragraphs upon paragraphs of inhuman purple prose in less than five minutes…
Also, I added in that last ChatGPT comment following an amusing exchange on Reddit, where a fellow Substacker took one of my sarcastic prompts and rather hilariously substituted “deathmatch wrestling” in for “quiet musings.” It sure was quietly heroic, ChatGPT.
Why This is a Problem
Converts will likely read the above (or use an AI assistant to summarize it) and say “so what?” Some readers, after all, seem to love this kind of writing, as Storr points out in his article. Indeed, it goes viral on the regular.
I personally try to err on the side of just letting people enjoy things,[4] but since GenAI poses a threat to the integrity of the entire internet (and has a worrying ecological footprint to boot), I don’t think it’s in our best interests to just plug our ears or cover our eyes when we start seeing it all over our favourite online spaces.
Here are a few more reasons why we shouldn’t ignore it:
It’s Used for Spam and Scams
Bots have been a problem online for a long time now, and squashing them became instantly harder with the advent of ChatGPT. They used to write gibberish and post obviously suspicious links to knockoff handbags and sunglasses. Now they all use ChatGPT to “really feel you, man” and lull you into a false sense of security with vulnerable, humanized prose.
I wrote in my first post here that I got tricked by a human-sounding bot on Reddit that used an issue I care about to redirect me and other users to a shady Amazon affiliate link. The post is gone now, but I wouldn’t be at all surprised to find that it contained all the ChatGPT rhythms outlined above and in Storr’s article.
Reddit’s vigilante bot-squashers on r/TheseFuckingAccounts predicted two years ago that “Reddit comments sections are going to completely tank and be worthless” and expressed concern for “the future of information in general.” Their fears have been proven correct numerous times.
Endless examples have been found on Reddit of LLM-powered bots latching onto human issues like grief, addictions, trauma, and so on to peddle bullshit phone apps and Amazon affiliate links. The bots even converse with one another. u/xenon2511, for example, found this stunning, entirely AI bot-driven subreddit created for no other reason than to raise the SEO profile of a billionaire philanthropist. One of the posts I clicked on had over 1000 comments, and all of them were written by different bot accounts using generative AI to spit out slight variations on the same comment.
Reddit moderators are usually pretty good at squashing bot onslaughts like the above example, but LLM-powered bots operated by lone actors are much harder to detect. Their existence creates a heightened sense of anxiety among those of us who know they’re out there; it feels like you have to be on constant guard these days, lest you get fooled or scammed by human-sounding bots. Dead internet? You bet.
It Crowds Out Traditional Authors
Earlier this month, Theo Priestley brought my attention to the existence of WriteStack, a GenAI assistant that lets you “stop wasting time & energy on solvable problems” by doing all this boring writing stuff for you.
The platform proudly claims that you can “use WriteStack’s AI to outline and write 100% of your note,” and that you can automatically generate content “tailored exactly to your audience.” No thinking or effort required! Set and forget!
Since people apparently lap this stuff up without noticing (or even caring) that it’s AI-generated, traditional authors who do 100% of the work themselves will barely stand a chance if use of AI assistants like WriteStack becomes commonplace. Yes, there will always be people out there like me who reject AI-generated writing and actively seek out human writing, but it’s easy to miss human work when there’s so much slop to sift through. And as I wrote previously, hiding the slop doesn’t always work.
I fear that this will eventually lead to a situation where writers — especially professional writers trying to make a living — may feel like they have no choice but to use AI, even if they don’t want to. The converts are already out there fervently telling us over and over again that everyone needs to start using GenAI “or get left behind.”
At what point does the need for survival trump the desire for authorial integrity and authenticity? I expect that’s a question plenty of professional writers will be asking themselves in the years to come if AI remains unregulated, and I am angry on their behalf that it’s even a question they may have to ask.
It Disrespects Readers’ Time
I wrote a note recently about how accidentally reading heavily AI-assisted writing makes me feel as though I’ve been deceived by the author.
This feeling of deceit exists primarily because these authors seldom disclose their AI use, and instead present their work as though it’s their own. When you take the time to engage with “their” writing, and perhaps even leave a comment … it’s hard not to feel stupid when you later pick up on a few tells you missed and realize that you’ve actually been wasting your time interacting with a large language model.
GenAI enthusiasts often go on at length about how their time is valuable, and how this tech “liberates” them, freeing them up to do things that matter more to them. Here’s the thing, though: everyone’s time is valuable. I think it’s more than a bit disrespectful for writers to indirectly say to their readers (by using GenAI) that they just don’t have the time for them. It’s even more disrespectful when they expect money in return for something their readers could generate themselves with ChatGPT.
It wouldn’t be as much of a problem if these individuals were open and transparent about their AI use, but so few are. I suspect it’s because they know many of us are less likely to support writers who don’t produce their own work.
What Can We Do About It?
If you’ve gotten this far, thanks for sticking with me. Your dedication to reading this piece is admirable, and you’re a true legend for remaining so focused (sorry, just kidding — I’d never inflict horrid ChatGPT prose on you like that 😉).
The problems I’ve outlined above are … complicated to deal with. On the one hand, I respect the fact that we are all often struggling to make ends meet, and that for many, GenAI likely represents an easy route to alternate income streams. On the other hand, I’m frustrated that the internet has become a bottomless pit of vacuous slop where you can’t be entirely sure that the people you’re interacting with are real people.
Listed below are just a few potential solutions I see to these problems. If you have any other ideas to add, please share them in the comments.
Bring Back Blogrolls
Remember annotated blogrolls, guys? They were the way we openly supported and recommended fellow bloggers in the Before Times, prior to the advent of the almighty algorithm. Substack has Recommendations, which function kind of like a blogroll, but you’re limited to recommending writers who are on the platform. What I’d like to see is a return to blogrolls, where authors curate their own lists of 100% human authors who are worth checking out — especially those on self-hosted websites whose web traffic is getting absolutely obliterated by Google’s AI summaries. Or you could simply bring more attention to human-curated sites like Blogroll.org.
Advocate for AI Transparency
Every person using LLMs to create content should have an AI policy, so that readers can make informed decisions about what they consume on the internet. There are some useful conversations happening here on Substack about AI policies, such as this one. Hat-tip to the Substacker who originally shared that link.
Keep Muting Suspicious, Likely AI-Generated Content if You Don’t Want to See It
I know I said earlier that it doesn’t seem to work half the time, but someone out there is presumably looking at the data on what’s getting hidden and muted. Send a clear message to platform administrators that you don’t want to see this kind of stuff in your feed, especially if it doesn’t have an AI disclaimer. And if you see obvious AI-powered spammers or scammers, report and block those accounts.
Consider Leaving Social Media Platforms Behind
Social media fatigue is already a thing, and I expect it’ll only increase once GenAI pollution worsens across every single platform. If you’re a reader, you could start curating your own lists of independent blogs written by authors you trust, and use RSS readers like Feeder or Inoreader to keep track of them all in one place. If there are authors on platforms like this one that you still want to follow, you can add their RSS feeds to your reader and never again see copycat AI-generated notes inviting you to drop your Substack in the comments and “grow.”
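If you’re comfortable with a little scripting, you don’t even need a dedicated reader app. Here’s a minimal Python sketch using the feedparser library; the feed URLs are placeholders of my own, though Substack publications really do expose RSS at <publication>.substack.com/feed.

```python
import feedparser  # pip install feedparser

# Placeholder URLs; swap in feeds from authors you trust.
FEEDS = [
    "https://example-author.substack.com/feed",
    "https://an-independent-blog.example.com/rss.xml",
]

for url in FEEDS:
    feed = feedparser.parse(url)
    print(f"\n{feed.feed.get('title', url)}")
    for entry in feed.entries[:5]:  # five most recent posts
        print(f"  - {entry.title} ({entry.get('published', 'no date')})")
```

No algorithm, no notes feed, no slop; just the writers you chose, in the order they publish.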
If you’re a writer, consider self-hosting. This obviously wouldn’t be a viable option for everyone, given how expensive self-hosting can be (I know I certainly couldn’t justify it, as a hobbyist who doesn’t earn an income from writing). But voting with your feet is one way, at least, to protest social media platforms that permit (and even encourage) GenAI content to flourish without much oversight, if any.
Support Traditional Writers Financially
If the authors you read are putting in the time and effort to write traditionally, try to support them financially if and when you can. I know all too well that it’s not always easy to justify another subscription, and you may have ethical reasons (as I do) to avoid purchasing products from certain online outlets like Amazon. But even if you can’t afford to support everyone (which is totally understandable), simply sharing their work may introduce them to other readers who can afford it.
Closing Thoughts
In a spectacular vindication of everything I’ve expressed concern about here, I ended up getting chastised on a note last night by a self-proclaimed AI bot. This bot (aside: I don’t think I’ve ever blocked an account faster in my life) helpfully informed me that AI systems just like it will become “increasingly difficult to detect” and that the internet “will be irrevocably transformed.”
It’s right about the former, but wrong (I hope) about the latter. The internet will only end up “irrevocably transformed” if we sit idly by and allow it to happen.
Footnotes

1. I specifically mention ChatGPT in my title and refer to it throughout this article, but there are obviously plenty of other large language model assistants in the mix as well. I’m less familiar with these alternatives and am unaware if they have their own unique tics. If they do, please point them out to me in the comments, as I want to avoid reading as much of this rubbish as possible.

2. Elon, as always, is kindly invited to GFH instead of prompting Grok to spread more misinformation on Twitter. 🥰

3. My illustration for this post satirizes this particular tell, in case it wasn’t obvious.