Content Warning

with Kate Klonick

Published October 17, 2025

About This Episode

Host Simon Adler talks with law professor Kate Klonick about how social media content moderation has shifted in recent years, especially under the influence of TikTok's proactive, algorithm-driven model. They contrast earlier "keep it up unless we have to take it down" approaches with newer systems that pre-screen and algorithmically promote or bury content, raising concerns about prior restraint, invisible censorship, and concentrated power over public discourse. The episode also revisits controversies like the Hunter Biden laptop story and COVID-19 lab leak discussions, explores the idea of platforms as "platform islands" or camouflaged broadcasters, and considers the future "productification" of speech.

Topics Covered

Disclaimer: We provide independent summaries of podcasts and are not affiliated with or endorsed in any way by any podcast or creator. All podcast names and content are the property of their respective owners. The views and opinions expressed within the podcasts belong solely to the original hosts and guests and do not reflect the views or positions of Summapod.

Quick Takeaways

  • TikTok's proactive, algorithm-first approach to content promotion has reshaped how major platforms think about moderation, shifting from reactive takedowns to curated feeds that emphasize certain types of content.
  • The Chinese-origin model behind TikTok emphasizes control over what people see by heavily pre-screening and boosting only content within acceptable bounds, which obscures what is being silently excluded.
  • American platforms like Facebook have moved toward TikTok-style recommendation and downranking, making invisible forms of censorship and prior restraint more central than explicit deletions.
  • High-profile episodes like the suppression of the Hunter Biden laptop story and early COVID-19 lab leak discussion illustrate how platforms can overcorrect under political pressure and later redefine their moderation strategies.
  • Users increasingly choose platforms that match their preferred emotional and ideological experience, creating "platform islands" rather than simple algorithmic filter bubbles within a single network.
  • As owners like Elon Musk visibly insert themselves into what users see, large platforms begin to resemble broadcast networks in disguise, with a few individuals wielding outsized influence over public discourse.
  • Content moderation has evolved from a customer service function into a powerful tool for shaping public opinion that is highly valuable to political parties and governments.
  • Klonick predicts more automated, standardized moderation that narrows the range of voices online, turning speech into a managed product rather than an open, messy expression of society.

Podcast Notes

Introduction and setup: revisiting online speech and content moderation

Studio setup and introduction of host and guest

Simon introduces himself and the episode[1:45]
Simon Adler identifies himself as the host and notes this is Radiolab
Guest self-introduction[1:49]
Kate introduces herself as Kate Klonick and says she is a professor at St. John's Law School

Previous reporting on Facebook rules and free speech

Earlier collaborations between Simon and Kate[1:54]
Simon says he has talked to Kate a bunch over the years and they did stories about Facebook's rules for what can and cannot be posted
Tone and personality around discussing Facebook[2:02]
Kate jokes about not being allowed to say the F-word about Facebook because her parents scolded her, and Simon responds that he prefers to swear on the radio
Focus of earlier stories: Facebook's content rules and free speech ideals[2:30]
They previously covered the origins of Facebook's rules and how complex they are, but were really exploring how the ideal of free speech plays out across different spaces in society
Simon lists examples of speech spaces: public squares, lightly regulated broadcast TV, and private spaces, and they asked where social media fit in

Why revisit free speech and moderation now

Recent news rekindling free speech debates[3:06]
Simon notes that in the past month, free speech has been in the news again, mentioning a late-night host taken off air indefinitely
Renewed questions about government pressure and speech control[3:18]
He frames current debates as questions of who can say what, where, and how much pressure the government can or cannot exert on speech
Calling Kate back to discuss the online angle[3:30]
Simon explains he called Kate to see how these issues are playing out online now
Kate says the current problem is how to stop billionaires and authoritarian governments from twisting platforms into censorship machines or political propaganda tools

From early content moderation to the rise of TikTok

Setting up the question: what has actually changed in practice

Simon asks how keeping content up or taking it down has changed since 2020[4:17]
He wants Kate to lay a foundation before diving into current conflicts

Kate's main answer: the rise of TikTok as the key change

Scale and speed of TikTok's growth[4:31]
Kate says the main thing that has changed from 2020 to 2025 is TikTok's rise, noting it caught up with about 12 years of Facebook's growth in roughly two years
TikTok's different approach to running content moderation[4:41]
She explains that when content moderation started at Facebook, Instagram, and YouTube, the assumption was to avoid unnecessary censorship and keep content up unless reported harmful
Platforms created rules to limit harm while preserving what they called "voice" (their term for free speech), using a "keep it up unless we have to take it down" approach
TikTok's origin in a different speech culture[4:41]
Kate notes TikTok comes from China and a censorship-oriented, authoritarian CCP culture, which shapes its algorithm and moderation style
She contrasts TikTok with a free-speech default, saying TikTok takes the position that it can determine what people see and say

Push-up versus pull-down: TikTok's algorithmic emphasis

Clarifying TikTok's strategy is not simple mass takedown[5:36]
When Simon suggests TikTok just takes down tons of content, Kate clarifies that TikTok pre-screens a huge volume and filters out what is outside certain political parameters
She says this makes TikTok less likely to cause "negative interaction effects"
Simon's "push up" framing and Kate's agreement[6:25]
Simon describes TikTok as choosing to push some things up instead of pulling others down, and Kate calls that a perfect way of thinking about it
Kate says TikTok pushes up milquetoast, happy, feel-good, apolitical content and uses downranking or shadow banning to avoid promoting other material
Algorithmic iteration and addictiveness[6:30]
Kate notes that TikTok's algorithm constantly improves based on user behavior signals, providing a very addictive and expectation-meeting product

Format over content: what TikTok optimized and how it differs from Facebook

Simon and Kate's personal distance from TikTok

Both admit they do not actively use TikTok[7:13]
Simon confesses he has spent maybe five minutes on TikTok and does not have the app; Kate says she also does not use it, despite studying online speech for a living
Kate notes she still sees TikTok content constantly because videos are cross-posted elsewhere, so TikTok is in her life without her being on the platform

TikTok's innovation: making the banal compelling

Contrasting assumptions about engaging content[6:43]
Simon recalls being told Facebook wanted highly emotive, reactive content to keep users engaged and asks what they misunderstood if TikTok succeeds with banal material
Kate's explanation: the hook is the video format itself[8:08]
Kate says TikTok figured out that video itself hooks people so it does not matter as much what the specific content is
She describes people watching things they didn't know they were interested in, such as videos of certain types of couples who look very different from each other interacting, which viewers nonetheless find fascinating
Packaging versus information type[8:32]
Simon summarizes the distinction: Facebook figured out the kinds of information that keep you there, while TikTok figured out how to package almost any information to keep you there
Kate says that framing is one way of thinking about it and notes that advertisers have long used similar techniques; TikTok just represents a different business and product model

From TikTok to prior restraint and its influence on American platforms

TikTok's controlled ecosystem and the concept of prior restraint

TikTok's more controlled informational ecosystem[8:56]
Simon observes that if TikTok pushes up only content that fits within its bounds, it creates a more controlled environment, and he asks Kate for the right word
Kate agrees it is controlled and calls it even more dangerous in some ways
Defining prior restraint in American First Amendment law[9:22]
Kate says the ultimate in censorship in American First Amendment law is prior restraint, which means censorship before something is ever published
She contrasts prior restraint with redaction, explaining that when something is redacted on Facebook, the removal itself is evidence censorship occurred, whereas prior restraint leaves no trace
TikTok as an example of invisible prior restraint[9:24]
Kate notes that with TikTok, users never know what they missed or what they were kept from seeing, which embodies the danger of prior restraint

The "TikTokification" of American social media

Movement of U.S. platforms toward TikTok-like moderation[10:09]
Kate says that in the last five years, American social media has moved toward TikTok's approach to content moderation
She characterizes this period as one where the industry has recognized the power inherent in moderating content

Facebook's shift away from fact-checking and political signaling

Mark Zuckerberg's January 2025 announcement

Ending the fact-checking program[11:24]
Kate identifies a major sea change as Mark Zuckerberg's January 7, 2025 announcement ending Facebook's fact-checking program
Zuckerberg framed the change as a response to the company making too many moderation mistakes and engaging in too much censorship
Shift to a community notes-based system[11:32]
Kate says Zuckerberg proposed moving to a community notes-based system of content moderation
She quotes his stated goal of getting back to roots, focusing on reducing mistakes, simplifying policies, and restoring free expression on Facebook and Instagram

How significant was fact-checking in practice

Kate's view that fact-checking was limited and symbolic[12:08]
Kate argues the fact-checking program was limited in practice: flagged posts were often acted on only days later, and the program was small relative to the platforms' public commitment to combating misinformation and disinformation
She describes the announcement as more of a signal aimed at a particular audience and a party that felt victimized by big tech censorship

Big tech biases, conservatives, and specific censorship flashpoints

Articulating the conservative complaint

Simon formulates a strong version of the censorship concern[12:42]
He says the strongest defense for conservatives is that during the pandemic there was a party line for how to talk about pandemic origins
Kate responds that such issues pre-date the pandemic

Hunter Biden laptop story as an example

Description of the incident[13:10]
Kate cites the Hunter Biden laptop scandal, where links to the New York Post story were taken off Facebook and Twitter shortly before the 2020 election
Facebook's motivation and potential overcorrection[13:35]
She explains that, after being criticized for their role in the 2016 election, platforms resolved not to let potentially foreign-influenced content remain up, leading them to overcorrect in 2020
In hindsight, Kate calls it a very hard call and probably the wrong one

COVID-19 lab leak debates and comparison with China

Wuhan lab leak as another contentious topic[14:05]
Kate mentions the Wuhan lab leak as another "insane" issue that remains debated, noting that we are still talking about it, which shows it was not fully censored
Distinguishing Western platforms from Chinese-style erasure[14:52]
She contrasts Western platforms with China, saying that in China people might not even know who "Tank Man" is because no photos are published, whereas in the U.S. contentious content can still be found
Simon acknowledges the point that, despite controversy, these topics were not erased in the way they might be under stricter state censorship

Speculating about current and future government influence

Would similar takedowns happen now?[14:41]
Kate says her honest belief is that the current administration would not hesitate to quickly bring the platforms in line, and she sees these struggles as being about one group's speech rather than free speech in general
She asserts that no recognizable free speech notions are emerging from the current administration and emphasizes that content moderation has become a vector of power

Content moderation as a vector for power and emergence of platform islands

From filter bubbles to self-selected platforms

Early concern about algorithmic filter bubbles[19:55]
Kate recalls that when she first wrote about this around 2017-2018, people were very concerned about algorithmically driven filter bubbles
Shift toward platform choice over internal bubbles[19:27]
She says filter bubbles did emerge, but now people increasingly choose platforms themselves based on expected content, so separate "platform islands" have formed
She emphasizes that owners of each platform can push up whatever content suits their ideological ends

Different emotional ecosystems: TikTok and X as examples

Platform islands defined by emotional tone[20:21]
Simon suggests that China and TikTok promote soothing, non-riling content, while X (formerly Twitter) seems like the mirror image, riling users up constantly
He imagines an environment where users go to one platform island for a specific emotion and to another for a different emotional experience
Kate's analogy to entertainment choices and moderation incentives[21:21]
Kate likens this to how people choose movies by mood (avoiding horror if not in the mood) and says such emotionally targeted approaches are easier and cheaper to moderate
She notes this approach reduces reactive content moderation costs because there are fewer user reports requiring human review

Revisiting the metaphor for social media: mall, public square, or broadcast

Legal metaphors: mall versus public square

Importance of classifying Facebook's space in First Amendment law[21:45]
Simon recalls that earlier they debated whether Facebook was more like a mall or public square because First Amendment doctrine treats private spaces differently
He asks what the right metaphor is now, given the changes Kate described
Kate's ongoing preference for the mall metaphor[22:07]
Kate says she has always liked the mall metaphor and notes it has a "weird squirrely" place in First Amendment case law

Simon's updated metaphor: social media as camouflaged broadcast

Convergence between social media and broadcast media control[22:39]
Simon says he now sees social media as basically broadcast again, where networks like ABC and NBC can cancel shows at any time and decide the lineup
He argues that social media is like broadcast camouflaged as organic user-generated content, because individual speech is aggregated and curated from above
Example: Elon Musk's presence in user feeds[22:57]
Simon cites Elon Musk showing up in his feed despite not following him as akin to Rupert Murdoch appearing between Fox News commercials to directly tell viewers what to think
He emphasizes that this kind of owner insertion is not subtle
Kate's view on the overt use of platform power[23:26]
Kate says one of the scariest parts of recent months is that owners are no longer hiding behind algorithms but are openly using their power and forcing everyone on their platforms to see their messages

Regulation, media concentration, and the role of governments

Challenges of regulating platforms without controlling speech

Kate's caution about regulatory regimes targeting speech[23:41]
Kate notes that using regulatory regimes to control how platforms speak is problematic because democracies do not want governments controlling speech, given the risks of authoritarianism
She underscores that this makes changing platforms difficult, even though they now clearly wield significant influence

Simon's openness to regulating aggregation and amplification

Distinguishing regulation of content from regulation of curation[24:42]
Simon says he has never favored federal government molding Facebook's moderation rules, but if platforms no longer resemble public squares and instead act like camouflaged broadcasters, he is open to some regulation
He suggests potential regulation might focus not on what users are allowed to post but on how content is aggregated and pushed out

Existing media regulators and the new digital concentration problem

Kate points out that Western states already regulate media concentration[25:02]
Kate says every Western state has some type of media regulator to avoid two or three people controlling all media
Are a few online platforms analogous to a concentrated media system?[25:02]
She notes that although the internet has infinite content in theory, in practice people rely on a small set of main platforms for news, daily interactions, and a sense of public conversation
She warns that if these main platforms are controlled by three people who are all friends of the president, that is a serious problem
How journalists treat platforms as real spaces[26:00]
Kate points out that journalists go to X, Bluesky, YouTube, and TikTok and report events there as if they were real-world happenings, even though these spaces are controlled by platform owners
She argues this makes coverage less reflective of the real world because it centers on curated environments

Content moderation as industry power and the future "productification" of speech

Content moderation recognized as a driver of political and economic power

Shift from customer service issue to tool for shaping opinion[26:14]
Kate says the industry has realized content moderation is not merely a customer service issue but a huge force for shaping public opinion
She describes the value of controlling what is pushed up or taken down as comparable in importance to oil and guns for political parties and governments
Scalability and microtargeting of political messaging[26:46]
Kate notes that platforms know how to market to users no matter how niche, making this power scalable for both money-making and mind control

Self-reflection on earlier, more sympathetic coverage of Facebook

Simon questions whether they were "useful idiots"[27:18]
Simon recalls that unlike many critics who simply vilified Facebook, he and Kate tried to understand the problem, and he now asks whether that stance missed something or made them fools
Kate's response: it was not inevitable at the time[27:44]
Kate says people have suggested she was a useful idiot for Facebook, but stresses that outcomes were not a fait accompli when they first discussed solutions
She argues that all solutions share a core flaw: these are for-profit companies that ultimately do what they want as circumstances evolve

Is content moderation "dead" and what replaces it?

Evaluating investment in trust and safety[28:48]
Kate says there has been controversy over whether platforms will continue to invest in large trust and safety departments when they can "TikTokify" feeds instead
Automation and narrowing of acceptable speech[28:31]
She predicts we will increasingly see automated content moderation that does not represent the edges of society or the range of voices present at the beginning of the internet
Kate summarizes this trend as a "productification of speech," implying that speech will be managed and packaged as a standardized product

Closing thought experiment: perfect art and emotional control

Simon's sci-fi idea about personalized emotion-inducing art

Description of the hypothetical perfect artwork[30:23]
Simon describes an imagined perfect piece of art that scans your face, pulls information about you from the internet, and generates an image tailored to evoke a specific emotion for you
He imagines that on different days it could be tuned to different emotions, like happiness on Tuesday and sadness on Wednesday, yet each viewer experiences the targeted feeling
Linking emotional control to broader social control[30:53]
Simon suggests that if you can reliably evoke emotions in individuals at will with such personalized imagery, you could theoretically control everybody

Kate's reaction and closing credits

Kate's literary comparison and playful suggestion[31:34]
Kate says the idea sounds like a Ted Chiang story and jokes that Simon could ask AI to write it if he is busy
Production credits and acknowledgment[32:10]
Simon notes the story was reported and produced by him with original music and sound design, and thanks Kate Klonick, then says Radiolab will be back next week
A separate voice, identified as Noor Sultan from New York, reads staff credits listing Radiolab's creators, co-hosts, staff, and fact-checkers

Lessons Learned

Actionable insights and wisdom you can apply to your business, career, and personal life.

1. Control over what is algorithmically promoted or buried online is a powerful, often invisible form of speech regulation that can shape public opinion more effectively than overt censorship.

Reflection Questions:

  • What information sources in your life are most heavily mediated by algorithms, and how aware are you of what you might not be seeing?
  • How could you adjust your media habits to reduce your reliance on feeds that are curated by a small number of companies or individuals?
  • What is one concrete step you could take this week to diversify where and how you get news or perspectives on important issues?
2. Platform "vibes" and emotional tones are increasingly curated, which means choosing a platform is also choosing a kind of emotional and ideological environment.

Reflection Questions:

  • When you log onto different platforms, what emotions do you typically feel, and how might that be influencing your worldview?
  • How could you become more intentional about which online environments you enter when you are in particular moods or facing important decisions?
  • What platform or digital space could you reduce or replace because its emotional atmosphere is not aligned with the person you want to be?
3. Treating social platforms as neutral public squares is misleading; they operate more like privately programmed broadcasters, so scrutiny should focus on their aggregation and amplification choices, not just individual posts.

Reflection Questions:

  • In what ways have you implicitly assumed that social media feeds are organic reflections of public opinion rather than curated broadcasts?
  • How might your evaluation of a platform change if you analyzed who ultimately decides what gets amplified there and why?
  • What is one way you could factor a platform's ownership and incentive structure into your decisions about where to publish or consume content?
4. Automated, large-scale content moderation tends to narrow the range of acceptable voices and ideas, so preserving a healthy discourse requires actively seeking out edges and minority perspectives.

Reflection Questions:

  • What kinds of viewpoints or communities are least visible in your current information diet, and why might that be?
  • How could you deliberately expose yourself to thoughtful versions of perspectives that are usually downplayed or ignored in mainstream feeds?
  • What specific practice (such as following certain newsletters, forums, or authors) could you adopt to counterbalance the homogenizing effects of algorithmic moderation?
5. For-profit platforms will ultimately align moderation and design choices with their own incentives, which means relying on them to safeguard democratic discourse without external checks is risky.

Reflection Questions:

  • Where in your life are you assuming that a profit-driven organization will act in the public interest, and is that assumption justified?
  • How might recognizing platform incentives change the way you interpret political or social debates that unfold online?
  • What is one action you could take, individually or with others, to support alternative institutions or mechanisms that are less tied to platform profit motives?

Episode Summary - Notes by Micah
