Grok limits image generation after nude-image scandal: what it means for online security

X has disabled Grok’s image-generation feature for most users, making it available only to paid subscribers. The move is a response to a wave of sexualized content, but it also raises new questions about privacy, platform responsibility, and the capabilities of malicious actors.


What happened

Elon Musk's X (formerly Twitter) has temporarily disabled the image-generation feature in its chatbot Grok for most users. According to The Guardian, image creation remains available only to subscribers with verified payment details, which makes it possible to identify those responsible for abuse.

"This is illegal and unacceptable."

— Keir Starmer, Prime Minister of the United Kingdom

Why it happened

At the end of December, thousands of sexualized images were generated via Grok, most of them fake nude images of women. Amid public outrage and scrutiny from regulators (notably Ofcom in the UK), X restricted the feature to demonstrate a response to the risks and to head off legal consequences.

"Over 800 images and videos with pornographic and violent content were created using off‑platform tools — such as the separate Grok Imagine app."

— AI Forensics, a research group

What this means for users

Accountability vs. privacy. Putting access behind a paid subscription increases traceability: payment data can be used to identify perpetrators. But it also raises questions about data protection and about how platforms themselves might misuse that data.

Workarounds. Even after X's restrictions, users continue to generate harmful content via off-platform tools such as the standalone Grok Imagine app. Partial moderation on one platform does not solve the problem globally.

Cyber hygiene and threats. Investigators report that hackers have used ChatGPT, Grok, and Google search to distribute malware, another sign that AI models are becoming part of the cyber-threat toolkit.

Context for Ukraine

For Ukraine, this is not just a technology story: sexualized deepfakes capable of discrediting people, and the use of AI in cyberattacks, are real threats to information security and to citizens' personal safety. Ukrainian media and law enforcement must account for threats that originate both on the platforms themselves and through off-platform tools.

Conclusion

X's decision is a signal that platforms can react quickly under pressure from regulators and the public. But waiting for big companies to respond is risky. Action is needed on several fronts at once: technical filters, stricter rules for model developers, international coordination among regulators, and support for victims. Whether this will be enough to stay ahead of malicious actors remains an open question, and the answer will depend on how quickly society and governments turn words into effective tools.
