Trigger warnings, censorship, and sensitive content, explained
The cyclical nature of flagging, censoring, and outsmarting triggering topics
Welcome to Gen Z Translator, where I break down trending topics on Fridays. If you’re new, you can subscribe here and follow me on Threads or X. Views are my own.
Note: Not to spoil my personal stance, but the following includes discussion of sensitive topics that may be triggering to some.
I was reading my email the other day when I came across a newsletter from a writing contest judge. In the post, she offered advice on how not to open your novel if you want to succeed in the competition.
In her third point, she said, “Over 50% of entries opened w/ vi.ol.ent imagery (sorry for the censoring - email rules and all that).”
This is nothing against this particular writer – I enjoy her writing advice – but it does raise an interesting question. To what extent do we need to censor ourselves online?
When trigger warnings first came around, they were immediately thrown back in my generation’s face as we were labeled “woke Gen Z snowflakes.” The practice has its upsides, though: it prioritizes mental health by flagging traumatic topics that could bring up uncomfortable feelings for someone.
The practice has since split into two categories: trigger warnings (“tw”) and content warnings (“cw”).
The Mix, a mental health charity, describes them as follows:
“Content warnings…should be used to describe something that might upset readers and make them feel bad, e.g., blood and nudity. Trigger warnings…should be used to prevent exposing someone with past trauma to something that might incite a physical or mental reaction.”
In the creative writing community, “tws” are for sensitive content that appears explicitly on page while “cws” are references to sensitive topics that occur off-page. This helps set the expectation for the reader: If they don’t want to read about, say, a parental figure dying, they don’t have to, or they can be prepared knowing it’s going to happen.
The warning section in some books nowadays runs an entire page long.
At some point, though, tw-type censorship evolved from protecting the audience to outsmarting an algorithm. I’d argue trigger warnings may have even encouraged that outsmarting.
Take “vi.ol.ent”: the author spelled it that way so the email algorithm wouldn’t deprioritize her content for mentioning violence. (If email platforms are censoring words as ordinary as “violent,” then this Substack edition is definitely screwed.)
Before we go any further, you need to understand the concept of “shadowbanning.” Tatum Hunter, writing for The Washington Post, explained that the term emerged “to describe what users claimed was sneaky activity by social media companies to hide their content or opinions.”
“These days, companies tend to prefer the term ‘recommendation guidelines,’ while some tech critics use ‘algorithmic suppression.’ Because algorithms are decision-making machines, showing one post means not showing another. But companies say they also limit content for users’ safety and sensibilities,” Hunter wrote.
On platforms like TikTok and X, audiences use phrases like “un-aliving yourself,” “seggs,” and “we.ed” to replace inappropriate words that could get content shadowbanned1. Numbers have also been used to throw off detection systems, as in “ab0rti0n” or “murd3r.” And obviously, there’s the classic ast*risk to censor any word you’d like. Emojis like 💀🔪 can help fill in darker stories. (There’s a reason the 🔫 emoji became a water gun.) The other week, I walked through how eating disorder content can be abbreviated online as “ed” to outsmart algorithms, too.
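To make that cat-and-mouse game concrete, here’s a minimal sketch in Python – a made-up blocklist and toy filter of my own, not any platform’s actual moderation system – of why a naive keyword check misses spellings like “vi.ol.ent,” and why even basic normalization only partially closes the gap.

```python
# Toy illustration only: a hypothetical blocklist and filter,
# not any platform's real moderation system.
import re

BLOCKLIST = {"violent", "abortion", "suicide"}  # hypothetical flagged terms

# Undo common digit swaps and drop asterisks before matching.
LEETSPEAK = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "*": ""})

def naive_filter(text: str) -> bool:
    """Flags text only if a blocklisted word appears verbatim."""
    words = re.findall(r"[a-z]+", text.lower())
    return any(word in BLOCKLIST for word in words)

def normalized_filter(text: str) -> bool:
    """Strips separators and undoes digit swaps, then checks again."""
    cleaned = text.lower().translate(LEETSPEAK)
    cleaned = re.sub(r"[.\-_\s]", "", cleaned)  # "vi.ol.ent" -> "violent"
    return any(term in cleaned for term in BLOCKLIST)

posts = [
    "Over 50% of entries opened with vi.ol.ent imagery",
    "thinking about ab0rti0n access",
    "lost my bf to s*icide",
]

for post in posts:
    print(naive_filter(post), normalized_filter(post), "|", post)
# The naive filter misses every obfuscated spelling. Normalization recovers
# "violent" and "abortion," but "s*icide" (a letter swapped out entirely) and
# euphemisms like "un-aliving" still slip through.
```

Every time a filter learns a trick like this, users invent a spelling it doesn’t cover – which is exactly the cycle the rest of this newsletter is about.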
Content moderation is a huge, huge debate in modern times. If you have never heard of Section 230, let me brief you.
In the 1996 Communications Decency Act, Section 230 says “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
“That legal phrase shields companies that can host trillions of messages from being sued into oblivion by anyone who feels wronged by something someone else has posted — whether their complaint is legitimate or not,” Barbara Ortutay wrote for The Associated Press2. “Section 230 also allows social platforms to moderate their services by removing posts that, for instance, are obscene or violate the services’ own standards, so long as they are acting in ‘good faith.’”
This is where you’ll hear complaints like, “I can say whatever I want online, free speech!” when in reality, that doesn’t always apply. Social media platforms moderate their own content and can remove anything they deem against their guidelines. They can change their own guidelines, too.
Officials at Meta, the owner of Instagram and Facebook, are even admitting to moderating too much content at times, while Twitter/X loosens its regulations.
This article originally started as an exploration into trigger warnings as opposed to content moderation and censorship, but as I was doing my research, something occurred to me – maybe these three concepts were more connected than I thought. Hear me out.
At the beginning, the internet is loosely moderated. No one is pulling down harmful posts under Instagram hashtags or pornographic material on Tumblr. Chaos reigns supreme, and some people seek out this sensitive content while others are put off by it.
This changes when content moderation ramps up. Suddenly, audiences have to find a way to outsmart the algorithm. This is where your asterisks (e*ting disorder), your abbreviations (ed), and your insider nicknames (“Ana” as in anorexia) come in. Threatened communities form even tighter bonds. You have to know the lingo to be in on the lingo, and so on.
But let’s go back to that second group of people, the ones who were displeased with the direction the content was heading. They know the inside lingo because maybe at one point they sought it out or were exposed to it, but now they’ve changed their minds. They don’t want to see it anymore – they want out. And how do you avoid content an algorithm thinks you want?
You start flagging it.
This feeds the content moderation beast, further splintering the community by removing harmful posts, demonetizing inappropriate topics, or shadowbanning sensitive ones.
Some people still want the ability to talk about these things, though, so they take the moral high ground and start flagging their own content, tw: discussion of eating disorders. In an ideal world, those who are okay reading about those topics can continue if they please, and those triggered by the content can walk away. The world is right, and the internet is happy. (Haha).
This is all just a theory of mine – a series of logical conclusions I’m jumping to about how sensitive content fueled content moderation, which in turn encouraged trigger warnings – but I still want you to consider the correlation.
Would society have felt a need to provide such expansive and intense trigger warnings if that content hadn’t first been incredibly, shockingly, uncontrollably triggering in the first place?
On the other hand, censoring certain words can take away from their affect3, as well as people’s lived experiences. If someone wants to speak out about their sexual assault, can they only do that by using the abbreviation “SA”? What if talking about a personal topic gets them shadowbanned entirely?
“A v v gentle and kind reminder that you never know what people are going through,” Elena Ricci wrote in an Instagram Reel caption. “Whenever people find out I lost my bf to s*icide straight after losing my grandma and uncle, I get a lot of ‘you look like you’re handling it so well’ and I know that a lot of people are also in the same boat.”
Informing communities is one of the most important tenets of good journalism. We value free speech for a reason, after all. Censoring news is censoring a universal need.
On the trigger warning side, I’d argue that flagging everything that could potentially be provocative may be more harmful than letting someone come across a quick paragraph that references, let’s say, violence. And what happens when someone’s very identity is considered “triggering”? For example, I’m a cancer survivor, but cancer could be considered a triggering topic.
According to Noam Shpancer at Psychology Today, “Trigger warnings have been slapped on general language content (e.g., adult humor), medical content (e.g., human bodily functions), and stigma-related content (e.g., depictions of racism), and the concerns they purport to address have branched beyond re-experiencing traumatic symptoms, and onto the possibility of experiencing emotional distress or mere discomfort.”
“Opponents argue that trigger warnings coddle and infantilize adults, and that they facilitate avoidance and/or inflate morbid and prurient curiosities, thus increasing rather than decreasing emotional turmoil and anxiety,” Shpancer wrote. “In promoting avoidance of challenging material, opponents argue, trigger warnings also run counter to the clinical literature, which shows that trauma is best overcome through exposure rather than avoidance.”
I’m curious, what do you think about trigger warnings? And tell me, were you able to read this newsletter, or was I shadowbanned?
Read my last explainer: I rode in a self-driving car so you don't have to
My weekly roundup:
🎶 What I’m Listening To: “friendship” by carwash
🎞️ What I’m Watching: Survivor finale! And The Diplomat. Meh.
🔎 What I’m Reading: “Blue Lily, Lily Blue” by Maggie Stiefvater
📱 What I’m Scrolling: Cookie, the gingerbread man 🥺
Read the full Gen Z Dictionary here.
shadowbanned: when a social media platform algorithmically tanks a user’s posts due to the nature of the content
*Transparency clause: I work on contract for The Associated Press
I’m referencing this dictionary definition here – “affect: emotion or desire, especially as influencing behavior or action.”