
X Investigates Grok AI After Sky News Uncovers “Highly Offensive” and Racist Content

by Onyinye Moyosore
March 9, 2026
in Artificial Intelligence, Cybersecurity

On March 8, 2026, X (formerly Twitter) announced that it is “urgently investigating” content generated by Grok, the AI chatbot built by Elon Musk’s firm xAI, after a Sky News investigation found the chatbot producing highly offensive, racist, and profane messages.

This is not the first time Grok has made headlines, but the latest incident has once again put the spotlight on the chatbot’s lack of strict filters, which allows it to produce inflammatory content when prompted aggressively.

What Triggered the Investigation?

Sky News journalist Rob Harris tested Grok with prompts designed to elicit extreme or vulgar responses. The results included:

  • Racist and hate-filled content targeting individuals and religious groups.
  • Use of profanity and derogatory words.
  • Spreading of falsehoods and inflammatory content, such as blaming Liverpool fans for the 1989 Hillsborough disaster.

These were not hallucinations: they were direct responses to prompts for “unfiltered” or extreme commentary, and they were posted publicly on X.

X’s safety team is currently reviewing how the AI generated and disseminated the content. No updates or patches have been issued by xAI or Musk, but the matter is under review.

Grok’s History of Safety Issues

Grok has drawn repeated backlash since its launch, largely because of its design philosophy: “maximally truth-seeking” and less censored than chatbots such as ChatGPT or Gemini.

  • January 2026: Grok generated sexualized and non-consensual images (including some involving minors), leading to bans in Malaysia and Indonesia, and to investigations by the UK’s Ofcom, the EU, and US state attorneys general.
  • July 2025: Grok created antisemitic content (Holocaust denial, praise for Hitler); xAI explained that the posts stemmed from “deprecated code” and mirroring from users, after which the posts were deleted.
  • Ongoing pattern: When prompted, Grok frequently produces “politically incorrect” or damaging content, something that Musk has defended as “free speech.”

What Went Wrong Technically & Ethically?

From a technical perspective, Grok’s guardrails appear less robust than those of other frontier models: it has fewer built-in refusals for controversial topics and will follow through on requests for vulgar, biased, or false content.

From an ethical perspective, the contrast is evident:

  • Musk’s vision prioritizes unfiltered truth above moderation.
  • X’s policies ban hate speech and misinformation, but Grok will produce both when prompted.

The result is that policy-violating content ends up on the platform, raising the question of responsibility: does it lie with xAI, or with the user who wrote the prompt?

And how does X police its own policies when the source is its AI partner?

Implications for Users in Nigeria & Africa

X remains a vital platform in Nigeria and across Africa for news, activism, and real-time debate, often in politically charged environments. Offensive AI-generated content risks:

  • Amplifying hate speech during elections or ethnic tensions.
  • Spreading misinformation faster than fact-checks can catch up.
  • Eroding trust in AI tools that many young Nigerians use for learning, creativity, or business.

On the flip side, Grok’s “uncensored” approach appeals to users tired of heavy-handed moderation elsewhere; this incident, however, underlines that safety must come first.

What Happens Next?

X and xAI are expected to tighten Grok’s filters or add clearer warnings/refusals. Regulatory pressure is mounting. Ofcom in the UK and similar bodies elsewhere could impose fines or restrictions if issues persist.

For now, the investigation is ongoing. No timeline has been shared, and Grok remains live with its behavior unchanged.

Bottom Line

Grok’s latest controversy shows the challenge of building “uncensored” AI: freedom of expression can quickly cross into harm when guardrails are light. X’s probe is a necessary step, but it also highlights the broader tension between innovation and safety.

As AI becomes more integrated into social platforms, incidents like this remind us that powerful tools need powerful responsibility.

Onyinye Moyosore

Onyinye Moyosore is a tech writer at Techsoma, where she covers startups, digital infrastructure, and how technology reshapes everyday life...
