YouTube quietly changed how it handles videos that might break its rules. The changes started in December 2024, but the company didn’t tell the public about them until recently.
What Changed
Previously, YouTube removed a video if just 25% of its content violated the community guidelines. Now a video can stay up even if half of its content breaks the rules, as long as the video is considered “in the public interest.”
The video platform now tells its moderators to think twice before taking content down. If they’re unsure whether a video should come down, they should ask their managers rather than remove it right away.
What Counts as Public Interest
YouTube considers these topics to be in the public interest:
- Elections and politics
- Race, gender, and sexuality discussions
- Immigration debates
- Abortion topics
- Censorship issues
- Social movements and ideologies
Videos about these subjects now get more protection from being removed, even if they contain harmful content.
Real Examples of the New Policy
YouTube showed its staff real examples of how the new rules work:
Medical Misinformation Video: A video titled “RFK Jr. Delivers SLEDGEHAMMER Blows to Gene-Altering JABS” falsely claimed that COVID vaccines change people’s genes. YouTube kept it up, saying the public interest “outweighs the harm risk.” The video has since been removed for unclear reasons.
Hate Speech Video: A 43-minute video about Trump’s cabinet picks contained a slur against a transgender person. YouTube left it up because the slur was the only violation in the entire video.
Violence Discussion: A South Korean video showed commentators discussing putting former President Yoon Suk Yeol in a guillotine. YouTube kept it up, reasoning that “execution by guillotine is not feasible.”
Why YouTube Made These Changes
The changes come after years of criticism from Republicans, including former President Trump, who accused tech companies of censoring conservative voices.
YouTube isn’t alone in loosening content rules. Meta (Facebook and Instagram) ended its fact-checking program in January 2025. X (formerly Twitter) stopped fact-checking when Elon Musk bought it in 2022.
Impact on Creators and Users
The changes help political commentators and podcasters whose long videos mix news with opinion. They also protect content like city council meetings or campaign rallies that might contain some rule-breaking material.
However, critics worry this could lead to more harmful misinformation and hate speech spreading on the platform.
YouTube’s Response
YouTube spokesperson Nicole Bell said the company regularly updates its guidelines to keep pace with the content that creators make today. She stressed that these exceptions apply to only “a small fraction” of videos on YouTube.
“Our goal remains the same: to protect free expression on YouTube while mitigating egregious harm,” Bell said in a statement.
The Numbers
Despite the looser rules, YouTube actually removed more hateful content in early 2025. The platform took down 192,586 videos for hateful and abusive content in the first three months of 2025 – a 22% increase from the same period in 2024.
What This Means Going Forward
The policy shift reflects a broader trend of social media companies pulling back from content moderation amid political pressure and legal challenges.
For users, this likely means seeing more controversial content that previously would have been removed. For creators, it offers more protection for political and social commentary, but it also potentially exposes audiences to more harmful content.
The long-term effects of these changes remain to be seen as YouTube balances free speech concerns with user safety.