OpenAI shuts down Sora after six months: moderation, deepfake risks and implications for Ukraine

Sora, an experimental platform for generating short videos, is officially shutting down. This is not simply a business failure: declining activity, weak moderation, and legal risks have sparked a wide debate about trust in AI-generated content and information security.

Illustrative photo: Depositphotos

Briefly

OpenAI announced on the social network X its decision to shut down the Sora app — a social platform for generating AI videos based on a user’s appearance. The service operated for about six months; the company did not provide official details about the reasons or the exact shutdown date.

Details and background

Sora was positioned as a short-video feed similar to TikTok, where users could create clips using their own appearance. According to Appfigures, downloads peaked in November at more than 3.3 million, but by February had fallen to about 1.1 million. In total, Sora earned roughly $2.1 million from in-app purchases.

"We have decided to shut down the Sora app."

— The OpenAI team (post on X)

Why this happened: brief analysis

A combination of three factors appears to have been decisive: weak monetization, user churn after the initial peak, and serious moderation problems. Users exploited bypass methods to mass-produce controversial content, in particular deepfake videos of real people without their consent (reported examples included Martin Luther King Jr. and Robin Williams). This creates not only ethical but also legal exposure for the platform.

Technical evolution without the disappearance of the technology

Important: shutting down the app does not mean rolling back the technology. The video-generation model remains and is already integrated into paid ChatGPT features. At the same time, OpenAI plans to consolidate its services, including ChatGPT, the browser, and Codex, into a single app, which changes the business logic of how features are distributed.

What this means for Ukraine

In the context of hybrid warfare, the availability of tools for automatically generating videos with realistic faces is not only a technological issue but a matter of national security. The mass spread of deepfake content undermines trust in the media, complicates the detection of disinformation, and places additional burdens on law enforcement and media institutions.

Analysts and digital security experts note that even if an individual app is shut down, the underlying models and algorithms remain accessible and can migrate into other products or closed APIs. For Ukraine, this means a need for rapid verification tools, support for independent fact-checking services, and training citizens in media literacy.

Consequences for the market and regulators

OpenAI’s decision underscores two realities: first, innovation is often accompanied by legal and ethical risks; second, the market responds quickly to pressure from users and regulators. Even as the app disappears, companies remain under scrutiny: regulatory requirements on moderation and liability for content will tighten in the US and the EU. This creates an impetus for Ukrainian policy to adopt standards compatible with international practice and to develop its own tools for detecting fakes.

Conclusion

The shutdown of Sora is not the end of the video-generation story but a signal: the technology is spreading faster than the rules for its use. For Ukraine, this means practical steps: investing in detection tools, strengthening digital literacy, and demanding transparency from platforms. The next moves belong to platforms and regulators, but also to journalists and civil society, which must adapt quickly to new threats and opportunities.
