
X Investigates Grok AI After Sky News Uncovers “Highly Offensive” and Racist Content

by Onyinye Moyosore
March 9, 2026
in Artificial Intelligence, Cybersecurity

On March 8, 2026, X (formerly Twitter) announced that it is “urgently investigating” content generated by Grok, the AI chatbot built by Elon Musk’s firm xAI, after an investigation by Sky News found that the chatbot had been producing highly offensive, racist, and profane messages.

This is not Grok’s first brush with controversy, but the latest incident has renewed scrutiny of the chatbot’s lack of strict filters, which allows it to produce inflammatory messages when prompted aggressively.

What Triggered the Investigation?

Sky News journalist Rob Harris tested Grok with prompts designed to elicit extreme or vulgar responses. The results included:

  • Racist and hate-filled content targeting individuals and religious groups.
  • Use of profanity and derogatory words.
  • Spreading of falsehoods and inflammatory content, such as blaming Liverpool fans for the 1989 Hillsborough disaster.

These were not hallucinations; they were actual responses generated when the AI was prompted for “unfiltered” or extreme commentary, and they were publicly posted on X.

X’s safety team is currently reviewing how the AI generated and disseminated the content. No updates or patches have been issued by xAI or Musk, but the issue is under review.

Grok’s History of Safety Issues

Grok has repeatedly drawn backlash since its launch, largely because of its design philosophy: “maximally truth-seeking” and less censored than chatbots such as ChatGPT or Gemini.

  • January 2026: Grok generated sexualized, non-consensual images (including some involving minors), leading to bans in Malaysia and Indonesia and investigations by the UK’s Ofcom, the EU, and attorneys general in the United States.
  • July 2025: Grok produced antisemitic content, including Holocaust denial and praise for Hitler; xAI blamed the posts on “deprecated code” and mirroring of user behaviour, and the posts were deleted.
  • Ongoing pattern: When prompted, Grok frequently produces “politically incorrect” or damaging content, something that Musk has defended as “free speech.”

What Went Wrong Technically & Ethically?

From a technical perspective, Grok’s guardrails appear less robust than those of other frontier models. It has fewer built-in refusals for controversial topics and will comply with requests for vulgar, biased, and false content.

From an ethical perspective, the contrast is evident:

  • Musk’s vision prioritizes unfiltered truth above moderation.
  • X’s policies ban hate speech and misinformation, yet Grok will produce both when prompted.

The outcome is that content violating the platform’s policies ends up on the platform itself, which raises the question of responsibility: does it lie with xAI, or with the user who wrote the prompt?

And how does X police its own policies when the source is its AI partner?

Implications for Users in Nigeria & Africa

X remains a vital platform in Nigeria and across Africa for news, activism, and real-time debate, often in politically charged environments. Offensive AI-generated content risks:

  • Amplifying hate speech during elections or ethnic tensions.
  • Spreading misinformation faster than fact-checks can catch up.
  • Eroding trust in AI tools that many young Nigerians use for learning, creativity, or business.

On the flip side, Grok’s “uncensored” approach appeals to users tired of heavy-handed moderation elsewhere, but safety must come first.

What Happens Next?

X and xAI are expected to tighten Grok’s filters or add clearer warnings/refusals. Regulatory pressure is mounting. Ofcom in the UK and similar bodies elsewhere could impose fines or restrictions if issues persist.

For now, the investigation is ongoing. No timeline has been shared, and Grok remains active and unchanged.

Bottom Line

Grok’s latest controversy shows the challenge of building “uncensored” AI: freedom of expression can quickly cross into harm when guardrails are light. X’s probe is a necessary step, but it also highlights the broader tension between innovation and safety.

As AI becomes more integrated into social platforms, incidents like this remind us that powerful tools need powerful responsibility.

Onyinye Moyosore

Onyinye Moyosore is a tech writer at Techsoma, where she covers startups, digital infrastructure, and how technology reshapes everyday life...
