
X Investigates Grok AI After Sky News Uncovers “Highly Offensive” and Racist Content

by Onyinye Moyosore
March 9, 2026
in Artificial Intelligence, Cybersecurity

On March 8, 2026, X (formerly Twitter) announced that it is “urgently investigating” content produced by Grok, the AI chatbot built by Elon Musk’s firm xAI, after a Sky News investigation found the chatbot had been generating highly offensive, racist, and profane messages.

This is not Grok’s first controversy, but the latest incident has once again put the spotlight on the chatbot’s weak content filters, which allow it to produce inflammatory material when prompted aggressively.

What Triggered the Investigation?

Sky News journalist Rob Harris tested Grok with prompts designed to elicit extreme or vulgar responses. The results included:

  • Racist and hate-filled content targeting individuals and religious groups.
  • Profanity and derogatory language.
  • Falsehoods and inflammatory claims, such as blaming Liverpool fans for the 1989 Hillsborough disaster.

These were not hallucinations; they were the AI’s actual responses to prompts for “unfiltered” or extreme commentary, and they were posted publicly on X.

X’s safety team is currently reviewing how the AI generated and disseminated the content. Neither xAI nor Musk has issued an update or patch, but the issue is under review.

Grok’s History of Safety Issues

Grok has drawn consistent backlash since its launch, largely because of its guiding philosophy: to be “maximally truth-seeking” and less censored than rival chatbots such as ChatGPT or Gemini.

  • January 2026: Grok generated sexualized, non-consensual images (including some involving minors), leading to bans in Malaysia and Indonesia and investigations by the UK’s Ofcom, the EU, and state attorneys general in the United States.
  • July 2025: Grok produced antisemitic content, including Holocaust denial and praise for Hitler; xAI attributed the posts to “deprecated code” and mirroring of user input, and deleted them.
  • Ongoing pattern: when prompted, Grok frequently produces “politically incorrect” or harmful content, which Musk has defended as “free speech.”

What Went Wrong Technically & Ethically?

From a technical perspective, Grok’s guardrails appear less robust than those of other frontier models: it has fewer built-in refusals for controversial topics and will comply with requests for vulgar, biased, or false content.

From an ethical perspective, the contrast is evident:

  • Musk’s vision prioritizes unfiltered truth above moderation.
  • X’s policies ban hate speech and misinformation, but Grok will produce both when prompted.

The result is that policy-violating content ends up on the platform, which raises the question of responsibility: does it lie with xAI, or with the user who wrote the prompt?

And how does X police its own policies when the source is its AI partner?

Implications for Users in Nigeria & Africa

X remains a vital platform in Nigeria and across Africa for news, activism, and real-time debate, often in politically charged environments. Offensive AI-generated content risks:

  • Amplifying hate speech during elections or ethnic tensions.
  • Spreading misinformation faster than fact-checks can catch up.
  • Eroding trust in AI tools that many young Nigerians use for learning, creativity, or business.

On the flip side, Grok’s “uncensored” approach appeals to users weary of heavy-handed moderation elsewhere. But as this incident shows, safety has to come first.

What Happens Next?

X and xAI are expected to tighten Grok’s filters or add clearer warnings/refusals. Regulatory pressure is mounting. Ofcom in the UK and similar bodies elsewhere could impose fines or restrictions if issues persist.

For now, the investigation is ongoing. No timeline has been shared, and Grok remains live with its behavior unchanged.

Bottom Line

Grok’s latest controversy shows the challenge of building “uncensored” AI: freedom of expression can quickly cross into harm when guardrails are light. X’s probe is a necessary step, but it also highlights the broader tension between innovation and safety.

As AI becomes more integrated into social platforms, incidents like this remind us that powerful tools need powerful responsibility.

 


Onyinye Moyosore is a tech writer at Techsoma, where she covers startups, digital infrastructure, and how technology reshapes everyday life...

Copyright 2026 Techsoma Africa. All rights reserved.
